diff --git a/website/src/pages/ar/about.mdx b/website/src/pages/ar/about.mdx index 8005f34aef5f..93dbeb51f658 100644 --- a/website/src/pages/ar/about.mdx +++ b/website/src/pages/ar/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. 
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. -3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. -4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. 
Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## الخطوات التالية -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx index 898175b05cad..e1dbbea03383 100644 --- a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. 
For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. 
Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx index 9c949027b41f..965c96f7355a 100644 --- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. 
When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. 
If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## نقل الـ Subgraph (الرسم البياني الفرعي) -### كيفكيف أقوم بتحويل الـ subgraph الخاص بي؟ +### How do I transfer my Subgraph? -لنقل الـ subgraph الخاص بك ، ستحتاج إلى إكمال الخطوات التالية: +To transfer your Subgraph, you will need to complete the following steps: 1. ابدأ التحويل على شبكة Ethereum mainnet 2. انتظر 20 دقيقة للتأكيد -3. قم بتأكيد نقل الـ subgraph على Arbitrum \ \* +3. Confirm Subgraph transfer on Arbitrum\* -4. قم بإنهاء نشر الـ subgraph على Arbitrum +4. Finish publishing the Subgraph on Arbitrum 5. جدث عنوان URL للاستعلام (مستحسن) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. 
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### من أين يجب أن أبدأ التحويل ؟ -يمكنك بدء عملية النقل من [Subgraph Studio] (https://thegraph.com/studio/) ، [Explorer ،] (https://thegraph.com/explorer) أو من أي صفحة تفاصيل subgraph. انقر فوق الزر "Transfer Subgraph" في صفحة تفاصيل الرسم الـ subgraph لبدء النقل. +You can initiate your transfer from [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button on the Subgraph details page to start the transfer. -### كم من الوقت سأنتظر حتى يتم نقل الـ subgraph الخاص بي +### How long do I need to wait until my Subgraph is transferred? يستغرق وقت النقل حوالي 20 دقيقة. يعمل جسر Arbitrum في الخلفية لإكمال نقل الجسر تلقائيًا. في بعض الحالات ، قد ترتفع تكاليف الغاز وستحتاج إلى تأكيد المعاملة مرة أخرى. -### هل سيظل الـ subgraph قابلاً للاكتشاف بعد أن أنقله إلى L2؟ +### Will my Subgraph still be discoverable after I transfer it to L2? -سيكون الـ subgraph الخاص بك قابلاً للاكتشاف على الشبكة التي تم نشرها عليها فقط. على سبيل المثال ، إذا كان الـ subgraph الخاص بك موجودًا على Arbitrum One ، فيمكنك العثور عليه فقط في Explorer على Arbitrum One ولن تتمكن من العثور عليه على Ethereum. يرجى التأكد من تحديد Arbitrum One في مبدل الشبكة في أعلى الصفحة للتأكد من أنك على الشبكة الصحيحة. بعد النقل ، سيظهر الـ L1 subgraph على أنه مهمل. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  
After the transfer, the L1 Subgraph will appear as deprecated. -### هل يلزم نشر الـ subgraph الخاص بي لنقله؟ +### Does my Subgraph need to be published to transfer it? -للاستفادة من أداة نقل الـ subgraph ، يجب أن يكون الرسم البياني الفرعي الخاص بك قد تم نشره بالفعل على شبكة Ethereum الرئيسية ويجب أن يكون لديه إشارة تنسيق مملوكة للمحفظة التي تمتلك الرسم البياني الفرعي. إذا لم يتم نشر الرسم البياني الفرعي الخاص بك ، فمن المستحسن أن تقوم ببساطة بالنشر مباشرة على Arbitrum One - ستكون رسوم الغاز أقل بكثير. إذا كنت تريد نقل رسم بياني فرعي منشور ولكن حساب المالك لا يملك إشارة تنسيق عليه ، فيمكنك الإشارة بمبلغ صغير (على سبيل المثال 1 GRT) من ذلك الحساب ؛ تأكد من اختيار إشارة "auto-migrating". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### ماذا يحدث لإصدار Ethereum mainnet للرسم البياني الفرعي الخاص بي بعد أن النقل إلى Arbitrum؟ +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -بعد نقل الرسم البياني الفرعي الخاص بك إلى Arbitrum ، سيتم إهمال إصدار Ethereum mainnet. نوصي بتحديث عنوان URL للاستعلام في غضون 48 ساعة. ومع ذلك ، هناك فترة سماح تحافظ على عمل عنوان URL للشبكة الرئيسية الخاصة بك بحيث يمكن تحديث أي دعم dapp لجهة خارجية. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. 
### بعد النقل ، هل أحتاج أيضًا إلى إعادة النشر على Arbitrum؟ @@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### هل يتم نشر وتخطيط الإصدار بنفس الطريقة في الـ L2 كما هو الحال في شبكة Ethereum Ethereum mainnet؟ -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### هل سينتقل تنسيق الـ subgraph مع الـ subgraph ؟ +### Will my Subgraph's curation move with my Subgraph? -إذا اخترت إشارة الترحيل التلقائي auto-migrating ، فسيتم نقل 100٪ من التنسيق مع الرسم البياني الفرعي الخاص بك إلى Arbitrum One. سيتم تحويل كل إشارة التنسيق الخاصة بالرسم الفرعي إلى GRT في وقت النقل ، وسيتم استخدام GRT المقابل لإشارة التنسيق الخاصة بك لصك الإشارة على L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون أجزاء من GRT ، أو ينقلونه أيضًا إلى L2 لإنتاج إشارة على نفس الرسم البياني الفرعي. 
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### هل يمكنني إعادة الرسم البياني الفرعي الخاص بي إلى Ethereum mainnet بعد أن أقوم بالنقل؟ +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -بمجرد النقل ، سيتم إهمال إصدار شبكة Ethereum mainnet للرسم البياني الفرعي الخاص بك. إذا كنت ترغب في العودة إلى mainnet ، فستحتاج إلى إعادة النشر (redeploy) والنشر مرة أخرى على mainnet. ومع ذلك ، لا يُنصح بشدة بالتحويل مرة أخرى إلى شبكة Ethereum mainnet حيث سيتم في النهاية توزيع مكافآت الفهرسة بالكامل على Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### لماذا أحتاج إلى Bridged ETH لإكمال النقل؟ @@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \ \* إذا لزم الأمر -أنت تستخدم عنوان عقد. -### كيف سأعرف ما إذا كان الرسم البياني الفرعي الذي قمت بعمل إشارة تنسيق عليه قد انتقل إلى L2؟ +### How will I know if the Subgraph I curated has moved to L2? -عند عرض صفحة تفاصيل الرسم البياني الفرعي ، ستعلمك لافتة بأنه تم نقل هذا الرسم البياني الفرعي. يمكنك اتباع التعليمات لنقل إشارة التنسيق الخاص بك. يمكنك أيضًا العثور على هذه المعلومات في صفحة تفاصيل الرسم البياني الفرعي لأي رسم بياني فرعي تم نقله. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### ماذا لو كنت لا أرغب في نقل إشارة التنسيق الخاص بي إلى L2؟ -عندما يتم إهمال الرسم البياني الفرعي ، يكون لديك خيار سحب الإشارة. 
وبالمثل ، إذا انتقل الرسم البياني الفرعي إلى L2 ، فيمكنك اختيار سحب الإشارة في شبكة Ethereum الرئيسية أو إرسال الإشارة إلى L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### كيف أعرف أنه تم نقل إشارة التنسيق بنجاح؟ يمكن الوصول إلى تفاصيل الإشارة عبر Explorer بعد حوالي 20 دقيقة من بدء أداة النقل للـ L2. -### هل يمكنني نقل إشاة التنسيق الخاص بي على أكثر من رسم بياني فرعي في وقت واحد؟ +### Can I transfer my curation on more than one Subgraph at a time? لا يوجد خيار كهذا حالياً. @@ -266,7 +266,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans ### هل يجب أن أقوم بالفهرسة على Arbitrum قبل أن أنقل حصتي؟ -يمكنك تحويل حصتك بشكل فعال أولاً قبل إعداد الفهرسة ، ولكن لن تتمكن من المطالبة بأي مكافآت على L2 حتى تقوم بتخصيصها لـ subgraphs على L2 وفهرستها وعرض POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### هل يستطيع المفوضون نقل تفويضهم قبل نقل indexing stake الخاص بي؟ diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx index af5a133538d6..5863ff2de0a2 100644 --- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## كيف تنقل الغراف الفرعي الخاص بك إلى شبكة آربترم (الطبقة الثانية) +## How to transfer your Subgraph to Arbitrum (L2) -## فوائد نقل الغراف الفرعي الخاصة بك +## Benefits of transferring your Subgraphs مجتمع الغراف والمطورون الأساسيون كانوا [يستعدون] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) للإنتقال إلى آربترم على مدى العام الماضي. وتعتبر آربترم سلسلة كتل من الطبقة الثانية أو "L2"، حيث ترث الأمان من سلسلة الإيثيريوم ولكنها توفر رسوم غازٍ أقل بشكلٍ كبير. -عندما تقوم بنشر أو ترقية الغرافات الفرعية الخاصة بك إلى شبكة الغراف، فأنت تتفاعل مع عقودٍ ذكيةٍ في البروتوكول وهذا يتطلب دفع رسوم الغاز باستخدام عملة الايثيريوم. من خلال نقل غرافاتك الفرعية إلى آربترم، فإن أي ترقيات مستقبلية لغرافك الفرعي ستتطلب رسوم غازٍ أقل بكثير. الرسوم الأقل، وكذلك حقيقة أن منحنيات الترابط التنسيقي على الطبقة الثانية مستقيمة، تجعل من الأسهل على المنسِّقين الآخرين تنسيق غرافك الفرعي، ممّا يزيد من مكافآت المفهرِسين على غرافك الفرعي. هذه البيئة ذات التكلفة-الأقل كذلك تجعل من الأرخص على المفهرسين أن يقوموا بفهرسة وخدمة غرافك الفرعي. سوف تزداد مكافآت الفهرسة على آربترم وتتناقص على شبكة إيثيريوم الرئيسية على مدى الأشهر المقبلة، لذلك سيقوم المزيد والمزيد من المُفَهرِسين بنقل ودائعهم المربوطة وتثبيت عملياتهم على الطبقة الثانية. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## فهم ما يحدث مع الإشارة وغرافك الفرعي على الطبقة الأولى وعناوين مواقع الإستعلام +## Understanding what happens with signal, your L1 Subgraph and query URLs -عند نقل سبجراف إلى Arbitrum، يتم استخدام جسر Arbitrum GRT، الذي بدوره يستخدم جسر Arbitrum الأصلي، لإرسال السبجراف إلى L2. سيؤدي عملية "النقل" إلى إهمال السبجراف على شبكة الإيثيريوم الرئيسية وإرسال المعلومات لإعادة إنشاء السبجراف على L2 باستخدام الجسر. ستتضمن أيضًا رصيد GRT المرهون المرتبط بمالك السبجراف، والذي يجب أن يكون أكبر من الصفر حتى يقبل الجسر النقل. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -عندما تختار نقل الرسم البياني الفرعي ، سيؤدي ذلك إلى تحويل جميع إشارات التنسيق الخاصة بالرسم الفرعي إلى GRT. هذا يعادل "إهمال" الرسم البياني الفرعي على الشبكة الرئيسية. سيتم إرسال GRT المستخدمة لعملية التنسيق الخاصة بك إلى L2 جمباً إلى جمب مع الرسم البياني الفرعي ، حيث سيتم استخدامها لإنتاج الإشارة نيابة عنك. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون جزء من GRT الخاص بهم ، أو نقله أيضًا إلى L2 لصك إشارة على نفس الرسم البياني الفرعي. 
إذا لم يقم مالك الرسم البياني الفرعي بنقل الرسم البياني الفرعي الخاص به إلى L2 وقام بإيقافه يدويًا عبر استدعاء العقد ، فسيتم إخطار المنسقين وسيتمكنون من سحب تنسيقهم. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -بمجرد نقل الرسم البياني الفرعي ، لن يتلقى المفهرسون بعد الآن مكافآت لفهرسة الرسم البياني الفرعي، نظرًا لأنه يتم تحويل كل التنسيق لـ GRT. ومع ذلك ، سيكون هناك مفهرسون 1) سيستمرون في خدمة الرسوم البيانية الفرعية المنقولة لمدة 24 ساعة ، و 2) سيبدأون فورًا في فهرسة الرسم البياني الفرعي على L2. ونظرًا لأن هؤلاء المفهرسون لديهم بالفعل رسم بياني فرعي مفهرس ، فلا داعي لانتظار مزامنة الرسم البياني الفرعي ، وسيكون من الممكن الاستعلام عن الرسم البياني الفرعي على L2 مباشرة تقريبًا. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -يجب إجراء الاستعلامات على الرسم البياني الفرعي في L2 على عنوان URL مختلف (على \`` Arbitrum-gateway.thegraph.com`) ، لكن عنوان URL L1 سيستمر في العمل لمدة 48 ساعة على الأقل. بعد ذلك ، ستقوم بوابة L1 بإعادة توجيه الاستعلامات إلى بوابة L2 (لبعض الوقت) ، ولكن هذا سيضيف زمن تأخير لذلك يوصى تغيير جميع استعلاماتك إلى عنوان URL الجديد في أقرب وقت ممكن. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## اختيار محفظة L2 الخاصة بك -عندما قمت بنشر subgraph الخاص بك على الشبكة الرئيسية ، فقد استخدمت محفظة متصلة لإنشاء subgraph ، وتمتلك هذه المحفظة NFT الذي يمثل هذا subgraph ويسمح لك بنشر التحديثات. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -عند نقل الرسم البياني الفرعي إلى Arbitrum ، يمكنك اختيار محفظة مختلفة والتي ستمتلك هذا الـ subgraph NFT على L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. إذا كنت تستخدم محفظة "عادية" مثل MetaMask (حساب مملوك خارجيًا EOA ، محفظة ليست بعقد ذكي) ، فهذا اختياري ويوصى بالاحتفاظ بعنوان المالك نفسه كما في L1. -إذا كنت تستخدم محفظة بعقد ذكي ، مثل multisig (على سبيل المثال Safe) ، فإن اختيار عنوان مختلف لمحفظة L2 أمر إلزامي ، حيث من المرجح أن هذا الحساب موجود فقط على mainnet ولن تكون قادرًا على إجراء المعاملات على Arbitrum باستخدام هذه المحفظة. إذا كنت ترغب في الاستمرار في استخدام محفظة عقد ذكية أو multisig ، فقم بإنشاء محفظة جديدة على Arbitrum واستخدم عنوانها كمالك للرسم البياني الفرعي الخاص بك على L2. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -** من المهم جدًا استخدام عنوان محفظة تتحكم فيه ، ويمكنه إجراء معاملات على Arbitrum. وإلا فسيتم فقد الرسم البياني الفرعي ولا يمكن استعادته. 
** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## التحضير لعملية النقل: إنشاء جسر لـبعض ETH -يتضمن نقل الغراف الفرعي إرسال معاملة عبر الجسر ، ثم تنفيذ معاملة أخرى على شبكة أربترم. تستخدم المعاملة الأولى الإيثيريوم على الشبكة الرئيسية ، وتتضمن بعضًا من إيثيريوم لدفع ثمن الغاز عند استلام الرسالة على الطبقة الثانية. ومع ذلك ، إذا كان هذا الغاز غير كافٍ ، فسيتعين عليك إعادة إجراء المعاملة ودفع ثمن الغاز مباشرةً على الطبقة الثانية (هذه هي "الخطوة 3: تأكيد التحويل" أدناه). يجب تنفيذ هذه الخطوة ** في غضون 7 أيام من بدء التحويل **. علاوة على ذلك ، سيتم إجراء المعاملة الثانية مباشرة على شبكة أربترم ("الخطوة 4: إنهاء التحويل على الطبقة الثانية"). لهذه الأسباب ، ستحتاج بعضًا من إيثيريوم في محفظة أربترم. إذا كنت تستخدم متعدد التواقيع أو عقداً ذكياً ، فيجب أن يكون هناك بعضًا من إيثيريوم في المحفظة العادية (حساب مملوك خارجيا) التي تستخدمها لتنفيذ المعاملات ، وليس على محفظة متعددة التواقيع. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
يمكنك شراء إيثيريوم من بعض المنصات وسحبها مباشرة إلى أربترم، أو يمكنك استخدام جسر أربترم لإرسال إيثيريوم من محفظة الشبكة الرئيسية إلى الطبقة الثانية: [bridge.arbitrum.io](http://bridge.arbitrum.io). نظرًا لأن رسوم الغاز على أربترم أقل ، فستحتاج فقط إلى مبلغ صغير. من المستحسن أن تبدأ بمبلغ منخفض (على سبيل المثال 0.01 ETH) للموافقة على معاملتك. -## العثور على أداة نقل الغراف الفرعي +## Finding the Subgraph Transfer Tool -يمكنك العثور على أداة نقل L2 في صفحة الرسم البياني الفرعي الخاص بك على Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![أداة النقل](/img/L2-transfer-tool1.png) -إذا كنت متصلاً بالمحفظة التي تمتلك الغراف الفرعي، فيمكنك الوصول إليها عبر المستكشف، وذلك عن طريق الانتقال إلى صفحة الغراف الفرعي على المستكشف: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة 1: بدء عملية النقل -قبل بدء عملية النقل، يجب أن تقرر أي عنوان سيكون مالكًا للغراف الفرعي على الطبقة الثانية (انظر "اختيار محفظة الطبقة الثانية" أعلاه)، ويُوصَى بشدة بأن يكون لديك بعضًا من الإيثيريوم لرسوم الغاز على أربترم. يمكنك الاطلاع على (التحضير لعملية النقل: تحويل بعضًا من إيثيريوم عبر الجسر." أعلاه). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -يرجى أيضًا ملاحظة أن نقل الرسم البياني الفرعي يتطلب وجود كمية غير صفرية من إشارة التنسيق عليه بنفس الحساب الذي يمتلك الرسم البياني الفرعي ؛ إذا لم تكن قد أشرت إلى الرسم البياني الفرعي ، فسيتعين عليك إضافة القليل من إشارة التنسيق (يكفي إضافة مبلغ صغير مثل 1 GRT). 
+Please also note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -بعد فتح أداة النقل، ستتمكن من إدخال عنوان المحفظة في الطبقة الثانية في حقل "عنوان محفظة الاستلام". تأكد من إدخال العنوان الصحيح هنا. بعد ذلك، انقر على "نقل الغراف الفرعي"، وسيتم طلب تنفيذ العملية في محفظتك. (يُرجى ملاحظة أنه يتم تضمين بعضًا من الإثيريوم لدفع رسوم الغاز في الطبقة الثانية). بعد تنفيذ العملية، سيتم بدء عملية النقل وإهمال الغراف الفرعي في الطبقة الأولى. (يمكنك الاطلاع على "فهم ما يحدث مع الإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام" أعلاه لمزيد من التفاصيل حول ما يحدث خلف الكواليس). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -إذا قمت بتنفيذ هذه الخطوة، \*\*يجب عليك التأكد من أنك ستستكمل الخطوة 3 في غضون 7 أيام، وإلا فإنك ستفقد الغراف الفرعي والإشارة GRT الخاصة بك. يرجع ذلك إلى آلية التواصل بين الطبقة الأولى والطبقة الثانية في أربترم: الرسائل التي ترسل عبر الجسر هي "تذاكر قابلة لإعادة المحاولة" يجب تنفيذها في غضون 7 أيام، وقد يتطلب التنفيذ الأولي إعادة المحاولة إذا كان هناك زيادة في سعر الغاز على أربترم. 
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## الخطوة 2: الانتظار حتى يتم نقل الغراف الفرعي إلى الطبقة الثانية +## Step 2: Waiting for the Subgraph to get to L2 -بعد بدء عملية النقل، يتعين على الرسالة التي ترسل الـ subgraph من L1 إلى L2 أن يتم نشرها عبر جسر Arbitrum. يستغرق ذلك حوالي 20 دقيقة (ينتظر الجسر لكتلة الشبكة الرئيسية التي تحتوي على المعاملة حتى يتأكد أنها "آمنة" من إمكانية إعادة ترتيب السلسلة). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). بمجرد انتهاء وقت الانتظار ، ستحاول Arbitrum تنفيذ النقل تلقائيًا على عقود L2. @@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة الثالثة: تأكيد التحويل -في معظم الحالات ، سيتم تنفيذ هذه الخطوة تلقائيًا لأن غاز الطبقة الثانية المضمن في الخطوة 1 يجب أن يكون كافيًا لتنفيذ المعاملة التي تتلقى الغراف الفرعي في عقود أربترم. ومع ذلك ، في بعض الحالات ، من الممكن أن يؤدي ارتفاع أسعار الغاز على أربترم إلى فشل هذا التنفيذ التلقائي. وفي هذه الحالة ، ستكون "التذكرة" التي ترسل غرافك الفرعي إلى الطبقة الثانية معلقة وتتطلب إعادة المحاولة في غضون 7 أيام. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. 
In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. في هذا الحالة ، فستحتاج إلى الاتصال باستخدام محفظة الطبقة الثانية والتي تحتوي بعضاً من إيثيريوم على أربترم، قم بتغيير شبكة محفظتك إلى أربترم، والنقر فوق "تأكيد النقل" لإعادة محاولة المعاملة. @@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## الخطوة 4: إنهاء عملية النقل على L2 -في هذه المرحلة، تم استلام الغراف الفرعي والـ GRT الخاص بك على أربترم، ولكن الغراف الفرعي لم يتم نشره بعد. ستحتاج إلى الربط باستخدام محفظة الطبقة الثانية التي اخترتها كمحفظة استلام، وتغيير شبكة محفظتك إلى أربترم، ثم النقر على "نشر الغراف الفرعي" +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![نشر الغراف الفرعي](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![انتظر حتى يتم نشر الغراف الفرعي](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -سيؤدي هذا إلى نشر الغراف الفرعي حتى يتمكن المفهرسون الذين يعملون في أربترم بالبدء في تقديم الخدمة. كما أنه سيعمل أيضًا على إصدار إشارة التنسيق باستخدام GRT التي تم نقلها من الطبقة الأولى. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -تم نقل غرافك الفرعي بنجاح إلى أربترم! للاستعلام عن الغراف الفرعي ، سيكون عنوان URL الجديد هو: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -لاحظ أن ID الغراف الفرعي على أربترم سيكون مختلفًا عن الذي لديك في الشبكة الرئيسية، ولكن يمكنك العثور عليه في المستكشف أو استوديو. كما هو مذكور أعلاه (راجع "فهم ما يحدث للإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام") سيتم دعم عنوان URL الطبقة الأولى القديم لفترة قصيرة ، ولكن يجب عليك تبديل استعلاماتك إلى العنوان الجديد بمجرد مزامنة الغراف الفرعي على الطبقة الثانية. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## كيفية نقل التنسيق الخاص بك إلى أربترم (الطبقة الثانية) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. 
When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## اختيار محفظة L2 الخاصة بك @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ In most cases, this step will auto-execute as the L2 gas included in step 1 shou ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
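The new gateway URL introduced in "Step 5: Updating the query URL" above can be exercised with any GraphQL client. The sketch below is illustrative only: the API key and L2 Subgraph ID are placeholders (the real ID appears in Explorer or Subgraph Studio after the transfer), and the `_meta` query simply asks the Subgraph for the latest block it has indexed, which is a cheap way to confirm that syncing on L2 has progressed.

```python
import json
from urllib import request

# Placeholder values -- substitute your own API key and the new L2 Subgraph ID.
API_KEY = "your-api-key"
L2_SUBGRAPH_ID = "your-l2-subgraph-id"

# The Arbitrum gateway URL format from "Step 5: Updating the query URL".
url = f"https://arbitrum-gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{L2_SUBGRAPH_ID}"

# `_meta` is a built-in field exposed on every subgraph's GraphQL schema;
# it reports the latest block the subgraph has processed.
payload = json.dumps({"query": "{ _meta { block { number } } }"}).encode()

req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:   # uncomment once real credentials are in place
#     print(json.load(resp))
```

The same request works with `curl` or any GraphQL library; only the URL changes compared to the old L1 endpoint.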
diff --git a/website/src/pages/ar/archived/sunrise.mdx b/website/src/pages/ar/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/ar/archived/sunrise.mdx +++ b/website/src/pages/ar/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). 
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. Because it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/ar/global.json b/website/src/pages/ar/global.json index b543fd624f0e..d9110259f5cb 100644 --- a/website/src/pages/ar/global.json +++ b/website/src/pages/ar/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "متعدد-السلاسل", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "الوصف", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "الوصف", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/ar/index.json b/website/src/pages/ar/index.json index 0f2dfc58967a..2443372843a8 100644 --- a/website/src/pages/ar/index.json +++ b/website/src/pages/ar/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "الشبكات المدعومة", + "details": "Network Details", + "services": "Services", + "type": "النوع", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "التوثيق", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "متعدد-السلاسل", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "متعدد-السلاسل", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "الفوترة", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/ar/indexing/chain-integration-overview.mdx b/website/src/pages/ar/indexing/chain-integration-overview.mdx index e6b95ec0fc17..af9a582b58d3 100644 --- a/website/src/pages/ar/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ar/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi ### 2. ماذا يحدث إذا تم دعم فايرهوز و سبستريمز بعد أن تم دعم الشبكة على الشبكة الرئيسية؟ -هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. 
وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ar/indexing/new-chain-integration.mdx b/website/src/pages/ar/indexing/new-chain-integration.mdx index bff012725d9d..bcd82dafed18 100644 --- a/website/src/pages/ar/indexing/new-chain-integration.mdx +++ b/website/src/pages/ar/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces the number of RPC calls required for general indexing by 90%.

-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.

-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers.)

## تكوين عقدة الغراف

-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.

1. [استنسخ عقدة الغراف](https://github.com/graphprotocol/graph-node)

@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your

## Substreams-powered Subgraphs

-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included.
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to build [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.

diff --git a/website/src/pages/ar/indexing/overview.mdx b/website/src/pages/ar/indexing/overview.mdx
index 3bfd1cc210c3..f543bca55f32 100644
--- a/website/src/pages/ar/indexing/overview.mdx
+++ b/website/src/pages/ar/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i

GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network.

-يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g.
applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
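The two-step distribution described above (issuance split across Subgraphs by curation signal, then each Subgraph's share split across Indexers by allocated stake) can be sketched with some made-up numbers. This is a hypothetical illustration of the arithmetic only, not protocol code; all names and figures below are invented for the example.

```python
# Hedged sketch of the two-step indexing-reward distribution described above:
# issuance is split across Subgraphs in proportion to curation signal, then
# each Subgraph's share is split across Indexers in proportion to their
# allocated stake. Names and amounts are illustrative, not protocol values.

def distribute_rewards(issuance, signal_by_subgraph, stake_by_indexer):
    total_signal = sum(signal_by_subgraph.values())
    rewards = {}
    for subgraph, signal in signal_by_subgraph.items():
        # Step 1: this Subgraph's share of total issuance, by curation signal.
        subgraph_share = issuance * signal / total_signal
        allocations = stake_by_indexer[subgraph]
        total_stake = sum(allocations.values())
        for indexer, stake in allocations.items():
            # Step 2: split the Subgraph's share by allocated stake.
            rewards[indexer] = rewards.get(indexer, 0.0) + subgraph_share * stake / total_stake
    return rewards

# Two Subgraphs holding 75% / 25% of signal; made-up allocations per Subgraph.
out = distribute_rewards(
    1000.0,
    {"sg-a": 75.0, "sg-b": 25.0},
    {"sg-a": {"indexer-1": 100.0, "indexer-2": 300.0},
     "sg-b": {"indexer-1": 50.0}},
)
print(out)  # {'indexer-1': 437.5, 'indexer-2': 562.5}
```

Note that eligibility (closing the allocation with a valid POI) is a separate requirement this arithmetic does not model.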
You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.

-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.

### What are the hardware requirements?

-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.

-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |

### What are some basic security precautions an Indexer should take?

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple clients. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.

-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.

-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.

- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.

@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer

#### Graph Node

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

#### Indexer Service

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |

#### Indexer Agent

@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.

### Graph Node

-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema and a set of mappings for transforming the data sourced from the blockchain, and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.

#### Getting started from source

@@ -365,9 +365,9 @@ docker-compose up

Successful participation in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexer's network participation. There are three Indexer components:

-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.

- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

@@ -525,7 +525,7 @@ graph indexer status

#### Indexer management using Indexer CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
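As an illustration, the threshold evaluation described above can be sketched as follows. This is a minimal hypothetical sketch, not the indexer agent's actual logic: the rule field names are taken from the threshold rules listed above (a subset, omitting `maxAllocationPercentage`), while the helper names and the shape of the network-values dictionary are invented for the example.

```python
# Hedged sketch of `decisionBasis: rules` evaluation: each non-null threshold
# on a rule is compared with values fetched from the network for the
# deployment, and the deployment is chosen if ANY threshold is met.
# Field names mirror the documented threshold rules; helpers are hypothetical.

THRESHOLDS = {
    "minStake": lambda rule, net: net["stake"] > rule["minStake"],
    "minSignal": lambda rule, net: net["signal"] > rule["minSignal"],
    "maxSignal": lambda rule, net: net["signal"] < rule["maxSignal"],
    "minAverageQueryFees": lambda rule, net: net["avgQueryFees"] > rule["minAverageQueryFees"],
}

def should_index(rule: dict, network_values: dict) -> bool:
    """Choose a deployment if any non-null threshold on the rule is satisfied."""
    checks = [check(rule, network_values)
              for name, check in THRESHOLDS.items()
              if rule.get(name) is not None]
    return any(checks)

# The example from the docs: a global rule with a `minStake` of 5 (GRT).
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 10, "signal": 0, "avgQueryFees": 0}))  # True
print(should_index(global_rule, {"stake": 3, "signal": 0, "avgQueryFees": 0}))   # False
```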
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically.

diff --git a/website/src/pages/ar/indexing/supported-network-requirements.mdx b/website/src/pages/ar/indexing/supported-network-requirements.mdx
index 9c820d055399..d2364e39c668 100644
--- a/website/src/pages/ar/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/ar/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@
title: Supported Network Requirements
---

-| Network | Guides | System Requirements | Indexing Rewards |
-| --- | --- | --- | :-: |
-| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br />[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 8 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 5 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br /><br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br />Debian 12/Ubuntu 22.04<br />16 GB RAM<br />>= 4.5TB (NVMe preferred)<br />_last updated 14th May 2024_ | ✅ |
-| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU<br />Ubuntu 22.04<br />>=32 GB RAM<br />>= 14 TiB NVMe SSD<br />_last updated 22nd June 2024_ | ✅ |
-| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 2 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count<br />Ubuntu 22.04<br />16GB+ RAM<br />>=3TB (NVMe recommended)<br />_last updated August 2023_ | ✅ |
-| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 13 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 3 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 1 TiB NVMe SSD<br />_last updated 2nd April 2024_ | ✅ |
-| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)<br /><br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 8 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU<br />Ubuntu 22.04<br />32GB+ RAM<br />>= 10 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)<br />[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| بوليجون | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/ar/indexing/tap.mdx b/website/src/pages/ar/indexing/tap.mdx index ee96a02cd5b8..e7085e5680bb 100644 --- a/website/src/pages/ar/indexing/tap.mdx +++ b/website/src/pages/ar/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## نظره عامة -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
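The receipt-to-RAV flow described above can be sketched as a toy model. The field names here are invented for illustration; real receipts and RAVs are signed messages, and aggregation is performed by `tap-agent` and the aggregator, not by application code like this:

```typescript
// Toy model of receipt aggregation: a RAV accumulates the value of all
// receipts newer than the previous RAV, yielding a new RAV with an
// increased aggregate value, as the update flow above describes.
interface Receipt {
  timestampNs: number;
  value: number;
}

interface RAV {
  lastTimestampNs: number;
  valueAggregate: number;
}

function aggregate(prev: RAV | null, receipts: Receipt[]): RAV {
  const since = prev ? prev.lastTimestampNs : 0;
  // Only receipts newer than the previous RAV are folded in.
  const fresh = receipts.filter((r) => r.timestampNs > since);
  return {
    lastTimestampNs: fresh.reduce((m, r) => Math.max(m, r.timestampNs), since),
    valueAggregate: fresh.reduce((s, r) => s + r.value, prev?.valueAggregate ?? 0),
  };
}

const first = aggregate(null, [
  { timestampNs: 1, value: 10 },
  { timestampNs: 2, value: 5 },
]);
const updated = aggregate(first, [{ timestampNs: 3, value: 7 }]); // value grows from 15 to 22
```
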
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ar/indexing/tooling/graph-node.mdx b/website/src/pages/ar/indexing/tooling/graph-node.mdx index 0250f14a3d08..f5778789213d 100644 --- a/website/src/pages/ar/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ar/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for Graph Node: this is where Subgraph data is stored, along with metadata about Subgraphs and Subgraph-agnostic network data such as the block cache and the eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
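For context on the EIP-1898 requirement above: EIP-1898 lets `eth_call` pin a call to a specific block by hash rather than by number, which supports deterministic replay of historical calls. A request of roughly the following shape (the `to` address and `blockHash` are placeholders, and `0x18160ddd` is the well-known ERC-20 `totalSupply()` selector) would only succeed for old blocks against an archive node:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    { "to": "0x0000000000000000000000000000000000000001", "data": "0x18160ddd" },
    {
      "blockHash": "0x0000000000000000000000000000000000000000000000000000000000000001",
      "requireCanonical": true
    }
  ]
}
```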
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. 
One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. 
The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
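The indexing status API mentioned above (port 8030, route `/graphql`) can be probed with a query along these lines; the exact field set is defined in the linked schema, so treat this as a sketch:

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError {
      message
    }
    chains {
      network
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `latestBlock` against `chainHeadBlock` is a quick way to see how far behind the chain head a given deployment is.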
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, a component not working as expected, or a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
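The "account-like" heuristic described above — tables where distinct entities are under 1% of total row versions — can be sketched directly. This is a hedged illustration under assumed numbers; the table names and row counts are invented, and `graphman` derives the real statistics from Postgres.

```python
# Sketch of the "account-like" candidacy test: a table qualifies when the
# number of distinct entities is below 1% of the total number of rows
# (i.e., most rows are historical versions of a few hot entities).
# The statistics below are illustrative, not real graphman output.

def is_account_like(distinct_entities: int, total_rows: int) -> bool:
    """Candidate for `graphman stats account-like` when under 1% distinct."""
    return total_rows > 0 and distinct_entities / total_rows < 0.01

# Uniswap-like example: few pairs/tokens, each updated on every swap.
stats = {
    "pair":  (4_000, 25_000_000),       # (distinct entities, total rows)
    "token": (9_000, 12_000_000),
    "swap":  (11_000_000, 11_000_000),  # one row per entity: not account-like
}

candidates = [t for t, (d, n) in stats.items() if is_account_like(d, n)]
print(candidates)  # ['pair', 'token']
```

This matches the text's observation that `pair` and `token` are prime candidates, while immutable one-row-per-entity tables like `swap` are not.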
diff --git a/website/src/pages/ar/indexing/tooling/graphcast.mdx b/website/src/pages/ar/indexing/tooling/graphcast.mdx index 8fc00976ec28..d084edcd7067 100644 --- a/website/src/pages/ar/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ar/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/ar/resources/benefits.mdx b/website/src/pages/ar/resources/benefits.mdx index 2e1a0834591c..6899e348a912 100644 --- a/website/src/pages/ar/resources/benefits.mdx +++ b/website/src/pages/ar/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per 
month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :-----------------------------------------: | 
:-------------------------------------------------------------: | +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). 
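The per-query figures in the tables above can be sanity-checked from the quoted monthly costs. A minimal sketch, using only the published numbers from the tables (actual network pricing varies by Indexer and over time):

```python
# Cross-check the "Cost per query" rows against the quoted monthly query
# costs and volumes from the comparison tables above.

plans = {
    "medium": {"monthly_cost_usd": 120,   "queries": 3_000_000},
    "high":   {"monthly_cost_usd": 1_200, "queries": 30_000_000},
}

for name, p in plans.items():
    per_query = p["monthly_cost_usd"] / p["queries"]
    print(f"{name}: ${per_query:.5f} per query")  # both come out to $0.00004
```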
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ar/resources/glossary.mdx b/website/src/pages/ar/resources/glossary.mdx index f922950390a6..d456a94f63ab 100644 --- a/website/src/pages/ar/resources/glossary.mdx +++ b/website/src/pages/ar/resources/glossary.mdx @@ -4,51 +4,51 @@ title: قائمة المصطلحات - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. 
The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. 
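The **Query**, **GraphQL**, and **Endpoint** entries above can be made concrete with the shape of a request sent to a Subgraph query endpoint. This is a hedged sketch: the endpoint placeholders, Subgraph ID, and the `tokens`/`symbol` fields are invented for illustration and depend on the schema of the Subgraph being queried; no request is actually sent.

```python
import json

# Illustrative shape of a GraphQL request body for a Subgraph endpoint.
# The URL placeholders and entity fields below are hypothetical; real
# fields come from the Subgraph's own GraphQL schema.
endpoint = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

query = """
{
  tokens(first: 5, orderBy: tradeVolume, orderDirection: desc) {
    id
    symbol
  }
}
"""

# A GraphQL query travels as a JSON object with a "query" key; an HTTP
# client would POST `payload` with Content-Type: application/json.
payload = json.dumps({"query": query})
print(len(payload) > 0)
```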
Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. 
Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. 
When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: قائمة المصطلحات - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. 
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. 
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx index 9fe263f2f8b2..40086bb24579 100644 --- a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: دليل ترحيل AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية. 
+That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## مميزات @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## كيف تقوم بالترقية؟ -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -إذا لم تكن متأكدا من اختيارك ، فنحن نوصي دائما باستخدام الإصدار الآمن. إذا كانت القيمة غير موجودة ، فقد ترغب في القيام بعبارة if المبكرة مع قيمة راجعة في معالج الـ subgraph الخاص بك. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early `if` check with a `return` in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### مقارانات Null -من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي: +It will compile but break at runtime; this happens because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/ar/resources/roles/curating.mdx b/website/src/pages/ar/resources/roles/curating.mdx index d2f355055aac..e73785e92590 100644 --- a/website/src/pages/ar/resources/roles/curating.mdx +++ b/website/src/pages/ar/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating ---
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## كيفية الإشارة -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. 
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.

Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares.

-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than subsequent Curators, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy.

## Withdrawing your GRT

@@ -40,39 +40,39 @@

Curators have the option to withdraw their signaled GRT at any time.

Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).

-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## المخاطر 1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. - - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. 
This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## الأسئلة الشائعة حول التنسيق ### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟ -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟ +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.

-### 4. How often can I update my subgraph?
+### 4. How often can I update my Subgraph?

-It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.

### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟

diff --git a/website/src/pages/ar/resources/roles/delegating/undelegating.mdx b/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
index 274fd08e0269..0756092ea10e 100644
--- a/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
+++ b/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
@@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the

1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio.

2. Click on your profile. You can find it on the top right corner of the page.

- - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead.

3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to.

4. Click on the Indexer from which you wish to withdraw your tokens.

- - Make sure to note the specific Indexer, as you will need to find them again to withdraw.

5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below:

@@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the

### Step-by-Step

1. Find your delegation transaction on Arbiscan.

- - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a)

2.
Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## مصادر إضافية diff --git a/website/src/pages/ar/resources/subgraph-studio-faq.mdx b/website/src/pages/ar/resources/subgraph-studio-faq.mdx index 74c0228e4093..ec613ed68df2 100644 --- a/website/src/pages/ar/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ar/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: الأسئلة الشائعة حول الفرعيةرسم بياني اس ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. 
You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.

-Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.

-## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?

-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.

-تذكر أنه يمكنك إنشاء API key والاستعلام عن أي subgraph منشور على الشبكة ، حتى إذا قمت ببناء subgraph بنفسك. حيث أن الاستعلامات عبر API key الجديد ، هي استعلامات مدفوعة مثل أي استعلامات أخرى على الشبكة.
+Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. Queries made via the new API key are paid queries, just like any other queries on the network.
diff --git a/website/src/pages/ar/resources/tokenomics.mdx b/website/src/pages/ar/resources/tokenomics.mdx index 511af057534f..fa0f098b22c8 100644 --- a/website/src/pages/ar/resources/tokenomics.mdx +++ b/website/src/pages/ar/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## نظره عامة -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. المنسقون (Curators) - يبحثون عن أفضل subgraphs للمفهرسين +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. المفهرسون (Indexers) - العمود الفقري لبيانات blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### إنشاء subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### الاستعلام عن subgraph موجود +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.

-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.

![Total burned GRT](/img/total-burned-grt.jpeg)

diff --git a/website/src/pages/ar/sps/introduction.mdx b/website/src/pages/ar/sps/introduction.mdx
index 2336653c0e06..e74abf2f0998 100644
--- a/website/src/pages/ar/sps/introduction.mdx
+++ b/website/src/pages/ar/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: مقدمة
---

-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## نظره عامة -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### مصادر إضافية @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ar/sps/sps-faq.mdx b/website/src/pages/ar/sps/sps-faq.mdx index 88f4ddbb66d7..c19b0a950297 100644 --- a/website/src/pages/ar/sps/sps-faq.mdx +++ b/website/src/pages/ar/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## ما هي الغرافات الفرعية المدعومة بسبستريمز؟ +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.

-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.

-## كيف تختلف الغرافات الفرعية التي تعمل بسبستريمز عن الغرافات الفرعية؟
+## How are Substreams-powered Subgraphs different from Subgraphs?

Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.

-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
-## ما هي فوائد استخدام الغرافات الفرعية المدعومة بسبستريمز؟
+## What are the benefits of using Substreams-powered Subgraphs?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.

## ماهي فوائد سبستريمز؟

@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including:

- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).

-- التوجيه لأي مكان: يمكنك توجيه بياناتك لأي مكان ترغب فيه: بوستجريسكيو، مونغو دي بي، كافكا، الغرافات الفرعية، الملفات المسطحة، جداول جوجل.
+- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.

- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
@@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - يستفيد من الملفات المسطحة: يتم استخراج بيانات سلسلة الكتل إلى ملفات مسطحة، وهي أرخص وأكثر موارد الحوسبة تحسيناً. -## أين يمكن للمطورين الوصول إلى مزيد من المعلومات حول الغرافات الفرعية المدعومة بسبستريمز و سبستريمز؟ +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -تعتبر وحدات رست مكافئة لمعينات أسمبلي اسكريبت في الغرافات الفرعية. يتم ترجمتها إلى ويب أسيمبلي بنفس الطريقة، ولكن النموذج البرمجي يسمح بالتنفيذ الموازي. تحدد وحدات رست نوع التحويلات والتجميعات التي ترغب في تطبيقها على بيانات سلاسل الكتل الخام. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. 
-على سبيل المثال، يمكن لأحمد بناء وحدة أسعار اسواق الصرف اللامركزية، ويمكن لإبراهيم استخدامها لبناء مجمِّع حجم للتوكن المهتم بها، ويمكن لآدم دمج أربع وحدات أسعار ديكس فردية لإنشاء مورد أسعار. سيقوم طلب واحد من سبستريمز بتجميع جميع هذه الوحدات الفردية، وربطها معًا لتقديم تدفق بيانات أكثر تطوراً ودقة. يمكن استخدام هذا التدفق لملءغراف فرعي ويمكن الاستعلام عنه من قبل المستخدمين.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for tokens of interest to him, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.

## كيف يمكنك إنشاء ونشر غراف فرعي مدعوم بسبستريمز؟

After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).

-## أين يمكنني العثور على أمثلة على سبستريمز والغرافات الفرعية المدعومة بسبستريمز؟
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?

-يمكنك زيارة [جيت هب](https://github.com/pinax-network/awesome-substreams) للعثور على أمثلة للسبستريمز والغرافات الفرعية المدعومة بسبستريمز.
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.

-## ماذا تعني السبستريمز والغرافات الفرعية المدعومة بسبستريمز بالنسبة لشبكة الغراف؟
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?

إن التكامل مع سبستريمز والغرافات الفرعية المدعومة بسبستريمز واعدة بالعديد من الفوائد، بما في ذلك عمليات فهرسة عالية الأداء وقابلية أكبر للتركيبية من خلال استخدام وحدات المجتمع والبناء عليها.
diff --git a/website/src/pages/ar/sps/triggers.mdx b/website/src/pages/ar/sps/triggers.mdx index 05eccf4d55fb..1bf1a2cf3f51 100644 --- a/website/src/pages/ar/sps/triggers.mdx +++ b/website/src/pages/ar/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## نظره عامة -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. 
Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### مصادر إضافية diff --git a/website/src/pages/ar/sps/tutorial.mdx b/website/src/pages/ar/sps/tutorial.mdx index 21f99fff2832..dd85fa999764 100644 --- a/website/src/pages/ar/sps/tutorial.mdx +++ b/website/src/pages/ar/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. 
Here is an example:

@@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s

With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.

-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```

-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.

### Conclusion

-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
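The filtering step that the tutorial's mapping performs can be sketched outside the AssemblyScript/graph-ts environment. Below is a hedged plain-TypeScript illustration: the `SplTransfer` shape, the account id value, and the helper name `transfersForAccount` are all illustrative assumptions, not the tutorial's actual generated types.

```typescript
// Hedged sketch (plain TypeScript, not the tutorial's generated code): keep
// only the decoded transfers that touch a given account before creating
// entities from them. Types and values here are illustrative assumptions.
interface SplTransfer {
  id: string;
  amount: string;
  source: string;
  destination: string;
}

// Hypothetical account id used for filtering (placeholder, not a real address).
const TARGET_ACCOUNT = "ExampleAccountId111111111111111111111111111";

function transfersForAccount(
  transfers: SplTransfer[],
  account: string,
): SplTransfer[] {
  // A transfer is relevant if the account appears on either side.
  return transfers.filter(
    (t) => t.source === account || t.destination === account,
  );
}

// Decoded Substreams output would arrive as an array of transfer objects.
const decoded: SplTransfer[] = [
  { id: "t1", amount: "100", source: TARGET_ACCOUNT, destination: "acctA" },
  { id: "t2", amount: "50", source: "acctB", destination: "acctC" },
];

const matched = transfersForAccount(decoded, TARGET_ACCOUNT).map((t) => t.id);
```

In the real handler the same predicate runs inside the loop over `transactions.transfers`, and each matching transfer becomes one entity save.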
### Video Tutorial diff --git a/website/src/pages/ar/subgraphs/_meta-titles.json b/website/src/pages/ar/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/ar/subgraphs/_meta-titles.json +++ b/website/src/pages/ar/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed.

### What Does an eth_call Look Like?

-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:

```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```

-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
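The performance benefit of declared `eth_calls` comes from graph-node serving repeated calls from an in-memory cache instead of re-issuing RPC requests. The following is a hedged plain-TypeScript sketch of that caching idea, not graph-node's actual implementation; `fetchPoolInfo` is a hypothetical stand-in for a slow contract call.

```typescript
// Hedged sketch of call-result caching (plain TypeScript, not graph-node
// internals): each unique call is made at most once, and later lookups are
// served from memory — the idea behind cached declared eth_calls.
type CallResult = string;

let rpcCallCount = 0;

// Hypothetical stand-in for an eth_call; each invocation models one slow
// external RPC round-trip to an Ethereum node.
function fetchPoolInfo(tokenAddress: string): CallResult {
  rpcCallCount++;
  return `pool-info-for-${tokenAddress}`;
}

const callCache = new Map<string, CallResult>();

// Read through the cache: only the first lookup for a given address pays the
// cost of the external call.
function getPoolInfo(tokenAddress: string): CallResult {
  const cached = callCache.get(tokenAddress);
  if (cached !== undefined) return cached;
  const result = fetchPoolInfo(tokenAddress);
  callCache.set(tokenAddress, result);
  return result;
}

// Two transfers touching the same token trigger only one external call.
const first = getPoolInfo("0xToken");
const second = getPoolInfo("0xToken");
```

This is why, in the declared-call case, the handler's own `eth_call` resolves from the cache rather than hitting the node again.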
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
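The storage pattern behind `@derivedFrom` can be sketched outside GraphQL: each `Comment` stores a `post` field, and the post's comment list is resolved by lookup rather than stored as a growing array on the `Post` entity. The sketch below is a hedged plain-TypeScript illustration of that idea (the entity shapes and `commentsForPost` helper are assumptions for the example, not graph-node APIs).

```typescript
// Hedged sketch (plain TypeScript): the one-to-many pattern that
// `comments: [Comment!]! @derivedFrom(field: "post")` expresses. Posts store
// no comment array; the relationship lives only on the Comment side.
interface Post { id: string }
interface Comment { id: string; post: string }

const posts = new Map<string, Post>();
const comments = new Map<string, Comment>();

posts.set("post-1", { id: "post-1" });
comments.set("c1", { id: "c1", post: "post-1" });
comments.set("c2", { id: "c2", post: "post-1" });

// Derived lookup: the post's comments are computed from the Comment side at
// query time, so saving a new comment never rewrites the Post entity.
function commentsForPost(postId: string): Comment[] {
  return Array.from(comments.values()).filter((c) => c.post === postId);
}

const derived = commentsForPost("post-1").map((c) => c.id);
```

Note how the reverse lookup is free as well: any `Comment` already carries its `post` reference.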
diff --git a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx index b77a40a5be90..d8de3e7a1fa2 100644 --- a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### نظره عامة -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## مصادر إضافية - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
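The byte-level effect of the `concatI32()` pattern described above can be sketched in standalone TypeScript. This is an illustrative emulation, not the graph-ts implementation — the big-endian suffix order is an assumption — but it shows why the result is a compact fixed-width `Bytes` ID rather than a long hex-and-dash string:

```typescript
// Emulate concatI32-style ID construction: append a 32-bit integer to an
// existing byte array so the pair (tx hash, log index) becomes one Bytes ID.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const suffix = new Uint8Array(4);
  new DataView(suffix.buffer).setInt32(0, value, false); // big-endian (assumed)
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  out.set(suffix, bytes.length);
  return out;
}

// A 32-byte "transaction hash" plus a log index yields a 36-byte ID,
// versus a ~70-character string from hex-and-dash concatenation.
const txHash = new Uint8Array(32).fill(0xab);
const id = concatI32(txHash, 7);
console.log(id.length); // 36
```

The fixed width is what makes these IDs cheap to index and compare in the store.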
diff --git a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx index 74e56c406044..d713d6cd8864 100644 --- a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## نظره عامة @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `specVersion` 1.1.0 for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation.
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users.
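A query against the `Stats` aggregation above might look like the following. This is a sketch: the `interval` argument and the microsecond-timestamp filter follow graph-node's aggregation query interface, and the specific filter value is illustrative, not from the source:

```graphql
{
  stats(interval: "hour", where: { timestamp_gt: 1704067200000000 }) {
    id
    timestamp
    sum
  }
}
```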
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ar/subgraphs/billing.mdx b/website/src/pages/ar/subgraphs/billing.mdx index e5b5deb5c4ef..71e44f86c1ab 100644 --- a/website/src/pages/ar/subgraphs/billing.mdx +++ b/website/src/pages/ar/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: الفوترة ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ar/subgraphs/cookbook/arweave.mdx b/website/src/pages/ar/subgraphs/cookbook/arweave.mdx index c1ec421993b4..4bb8883b4bd0 100644 --- a/website/src/pages/ar/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3.
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## تعريف Subgraph Manifest -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## تعريف المخطط -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## أمثلة على الـ Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/ar/subgraphs/cookbook/enums.mdx b/website/src/pages/ar/subgraphs/cookbook/enums.mdx index 9508aa864b6c..846faecc1706 100644 --- a/website/src/pages/ar/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
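At the mapping layer a schema enum surfaces as a plain string, so setting an enum field means assigning one of the declared values. A standalone TypeScript sketch — the `Token` shape and the marketplace names here are assumptions for illustration, not the CryptoCoven contract's actual values:

```typescript
// A union type emulates the "only predefined values" guarantee that the
// schema enum enforces at indexing time.
type Marketplace = "OpenSeaV1" | "OpenSeaV2" | "SeaPort"; // assumed values

interface Token {
  id: string;
  marketplace: Marketplace;
}

// The enum field is set using the string representation of an enum value.
const token: Token = { id: "42", marketplace: "OpenSeaV1" };
console.log(token.marketplace); // → "OpenSeaV1"
```

Any value outside the declared set is rejected, which is the typo protection the enum provides.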
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/ar/subgraphs/cookbook/grafting.mdx b/website/src/pages/ar/subgraphs/cookbook/grafting.mdx index 704e7df3f3f6..4b7dad1a54d9 100644 --- a/website/src/pages/ar/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - يضيف أو يزيل أنواع الكيانات - يزيل الصفات من أنواع الكيانات @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. 
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## تعريف Subgraph Manifest -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). 
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## مصادر إضافية diff --git a/website/src/pages/ar/subgraphs/cookbook/near.mdx b/website/src/pages/ar/subgraphs/cookbook/near.mdx index bdbe8e518a6b..04daec8b6ac7 100644 --- a/website/src/pages/ar/subgraphs/cookbook/near.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: بناء Subgraphs على NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## ما هو NEAR؟ [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## ماهي NEAR subgraphs؟ +## What are NEAR Subgraphs? 
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - معالجات الكتل(Block handlers): يتم تشغيلها على كل كتلة جديدة - معالجات الاستلام (Receipt handlers): يتم تشغيلها في كل مرة يتم فيها تنفيذ رسالة على حساب محدد @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## بناء NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. 
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.

-هناك ثلاثة جوانب لتعريف الـ subgraph:
+There are three aspects to a Subgraph definition:

-**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.

-**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).

**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.

-During subgraph development there are two key commands:
+During Subgraph development there are two key commands:

```bash
$ graph codegen # generates types from the schema file identified in the manifest
-$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```

### تعريف Subgraph Manifest

-The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ accounts: ### تعريف المخطط -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. 
There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## نشر NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -بمجرد نشر الـ subgraph الخاص بك ، سيتم فهرسته بواسطة Graph Node. يمكنك التحقق من تقدمه عن طريق الاستعلام عن الـ subgraph نفسه: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## نظره عامة + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Get Started + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. 
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. 
+- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## مصادر إضافية + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..68f637752b46 --- /dev/null +++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. 
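+
+If you are redeploying an existing Subgraph to meet the `specVersion` requirement, the change is typically just a version bump in the manifest — a minimal sketch, assuming the rest of your `subgraph.yaml` already builds (the name, address, and start block below are placeholders):
+
+```yaml
+# subgraph.yaml — bump the specVersion; existing data sources and mappings stay as they are
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum/contract
+    name: ExampleContract # placeholder
+    network: base
+    source:
+      address: '0x0000000000000000000000000000000000000000' # placeholder
+      startBlock: 1 # placeholder
+```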
+
+## مقدمة
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+### Source Subgraph
+
+The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`.
+
+> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). 
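+
+As a rough illustration of the declared `eth_calls` shape inside a manifest's event handler — the event signature and call label below are illustrative, not copied from the Sushiswap repo:
+
+```yaml
+eventHandlers:
+  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24,int24)
+    handler: handleSwap
+    calls:
+      # label: Contract[address expression].function(args)
+      # the declared call is executed ahead of time and cached for the handler
+      poolLiquidity: Pool[event.address].liquidity()
+```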
+ +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Get Started + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. 
Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. `data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. 
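+
+For example, once the dependent Subgraph has synced, the entities kept up to date by `handleInitialize` can be queried like any other Subgraph data (the field names follow the handler sketch above):
+
+```graphql
+{
+  pools(first: 5) {
+    id
+    sqrtPrice
+    tick
+  }
+}
+```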
+
+This approach unlocks composability and scalability, simplifying both development and maintenance.
+
+## مصادر إضافية
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
index 3bacc1f60003..364fb8ce4d9c 100644
--- a/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks
---

-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. 
This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## حسنا، ما هو؟ -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## ماذا؟! كيف؟ -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. -In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. 
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## من فضلك ، أرني بعض الأكواد! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. الطريقة المعتادة لمحاولة الإصلاح هي: 1. إجراء تغيير في مصدر الـ mappings ، والذي تعتقد أنه سيحل المشكلة (وأنا أعلم أنه لن يحلها). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. الانتظار حتى تتم المزامنة. 4. إذا حدثت المشكلة مرة أخرى ، فارجع إلى 1! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. 
Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. قم بإجراء تغيير في مصدر الـ mappings ، والذي تعتقد أنه سيحل المشكلة. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. إذا حدثت المشكلة مرة أخرى ، فارجع إلى 1! الآن ، قد يكون لديك سؤالان: @@ -69,18 +69,18 @@ Using **subgraph forking** we can essentially eliminate this step. Here is how i وأنا أجيب: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. الـتفريع سهل ، فلا داعي للقلق: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! لذلك ، هذا ما أفعله: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suit your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.

-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. 
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx index f713ec3a5e76..4be3dcedffe8 100644 --- a/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network
@@ -70,17 +70,17 @@ graph deploy --ipfs-hash

### Query Your Subgraph

-> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.

-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.

#### Example

-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

![Query URL](/img/cryptopunks-screenshot-transfer.png)

-The query URL for this subgraph is:
+The query URL for this Subgraph is:

```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the

### Monitor Subgraph Status

-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
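The query flow above can be sketched in a few lines. A hedged TypeScript example: the API key is a placeholder, and the `transfers` field is an assumption that depends on the Subgraph's actual schema:

```typescript
// Build a POST request against a Subgraph's gateway query endpoint.
// "your-own-api-key" is a placeholder; the queried field names depend
// on the Subgraph's schema.
const GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api";

function buildRequest(apiKey: string, subgraphId: string, query: string) {
  return {
    url: `${GATEWAY}/${apiKey}/subgraphs/id/${subgraphId}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

const req = buildRequest(
  "your-own-api-key",
  "HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK",
  "{ transfers(first: 5) { id } }", // hypothetical field from this Subgraph's schema
);
// Then: const res = await fetch(req.url, req.options);
//       const { data } = await res.json();
console.log(req.url);
```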
### Additional Resources

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
index d0f9bb2cc348..c35d101f373e 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## Overview

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.

### Example Schema

@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified

## Non-fatal Errors

-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.

-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
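For intuition on what `topic1` and `topic2` match against: an indexed address argument is stored in the log as the 20-byte address left-padded to a 32-byte topic value. A small TypeScript sketch (the address is a placeholder):

```typescript
// Convert an address into the 32-byte topic value the EVM stores for an
// indexed address argument: the 20-byte address, left-padded with zeros.
function addressToTopic(address: string): string {
  const hex = address.toLowerCase().replace(/^0x/, "");
  return "0x" + hex.padStart(64, "0"); // 32 bytes = 64 hex characters
}

const addressA = "0x1111111111111111111111111111111111111111"; // placeholder
console.log(addressToTopic(addressA));
// A Transfer log with `from = addressA` carries this value as its topic1,
// which is what the manifest's topic filter is compared against.
```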
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
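The parallelism benefit listed above can be illustrated with a small simulation. A TypeScript sketch with made-up latencies, standing in for independent `eth_calls`; this is not the actual `graph-node` execution engine:

```typescript
// Simulate three independent calls run sequentially vs in parallel:
// sequential latency is the sum of the delays, parallel latency is the max.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sequential(): Promise<number> {
  const start = Date.now();
  await delay(30); // transactions (illustrative latency)
  await delay(20); // balance
  await delay(40); // token holdings
  return Date.now() - start; // ~90ms: 30 + 20 + 40
}

async function parallel(): Promise<number> {
  const start = Date.now();
  await Promise.all([delay(30), delay(20), delay(40)]);
  return Date.now() - start; // ~40ms: max(30, 20, 40)
}

(async () => {
  console.log((await sequential()) > (await parallel())); // parallel finishes sooner
})();
```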
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`

```yaml
calls:
@@ -535,22 +535,22 @@ calls:

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.

-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:

```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:

- It adds or removes entity types
- It removes attributes from entity types
@@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
- It adds or removes interfaces
- It changes for which entities an interface is implemented

-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 2518d7620204..3062fe900657 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t

For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:

```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil

## Code Generation

-In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.

This is done with

@@ -80,7 +80,7 @@ This is done with
graph codegen [--output-dir ] []
```

-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:

```sh
# Yarn
@@ -90,7 +90,7 @@ yarn codegen
npm run codegen
```

-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.

```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```

-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with

```javascript
import { Gravatar } from '../generated/schema'
```

-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.

-Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx index 8245a637cc8a..ef43760cfdbf 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).

-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:

- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`

You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).

@@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs:

### إصدارات

-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.

-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br>Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br>`ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br>Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br>Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object<br>`ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |

### الأنواع المضمنة (Built-in)

@@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API:

The `store` API allows to load, save and remove entities from and to the Graph Node store.

-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.

#### إنشاء الكيانات

@@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco

The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.

-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### دعم أنواع الإيثيريوم -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### الوصول إلى حالة العقد الذكي Smart Contract -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.

A common pattern is to access the contract from which an event originates. This is achieved with the following code:

@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {

As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.

-Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.

#### معالجة الاستدعاءات المعادة

@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false

import { log } from '@graphprotocol/graph-ts'
```

-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.

The `log` API includes the following functions:

@@ -590,7 +590,7 @@ The `log` API includes the following functions:

- `log.info(fmt: string, args: Array<string>): void` - logs an informational message.
- `log.warning(fmt: string, args: Array<string>): void` - logs a warning.
- `log.error(fmt: string, args: Array<string>): void` - logs an error message.
-- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the subgraph.
+- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph.

The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.

@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))

The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.

-On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.

### Crypto API

@@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to

### DataSourceContext in Manifest

-The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.

Here is a YAML example illustrating the usage of various types in the `context` section:

@@ -887,4 +887,4 @@ dataSources:

- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Must be quoted due to its large size.

-This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
index 6c50af984ad0..b0ce00e687e3 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@ title: مشاكل شائعة في أسمبلي سكريبت (AssemblyScript)
---

-هناك بعض مشاكل [أسمبلي سكريبت](https://github.com/AssemblyScript/assemblyscript) المحددة، التي من الشائع الوقوع فيها أثتاء تطوير غرافٍ فرعي. وهي تتراوح في صعوبة تصحيح الأخطاء، ومع ذلك، فإنّ إدراكها قد يساعد. وفيما يلي قائمة غير شاملة لهذه المشاكل:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues:

- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- لا يتم توريث النطاق في [دوال الإغلاق](https://www.assemblyscript.org/status.html#on-closures)، أي لا يمكن استخدام المتغيرات المعلنة خارج دوال الإغلاق. الشرح في [ النقاط الهامة للمطورين #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx index b55d24367e50..81469bc1837b 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: قم بتثبيت Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## نظره عامة -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## إنشاء الـ Subgraph ### من عقد موجود -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
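For illustration, a fully specified, non-interactive invocation might look like the following sketch. The contract address, network, ABI path, and slug here are placeholders, not values from this guide:

```sh
# Bootstrap a Subgraph from an already-deployed contract (placeholder values).
graph init \
  --protocol ethereum \
  --network mainnet \
  --from-contract 0x2E645469f354BB4F5c8a05B3b30A929361cf77eC \
  --abi ./abis/Gravity.json \
  my-username/example-subgraph
```

Any flag left out is collected by the interactive form instead.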
### من مثال Subgraph

-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:

```sh
graph init --from-example=example-subgraph
```

-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.

-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.

### Add New `dataSources` to an Existing Subgraph

-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.

-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:

```sh
graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI: - إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| الاصدار | ملاحظات الإصدار | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx index 56d9abb39ae7..c5b869610abd 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## نظره عامة -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| النوع | الوصف | -| --- | --- | -| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| النوع | الوصف | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. 
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### إضافة تعليقات إلى المخطط (schema) @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
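As a sketch of that declaration, a manifest using fulltext search lists the feature at the top level; surrounding fields are abbreviated and follow the example manifest conventions used earlier in these docs:

```yaml
specVersion: 1.3.0
description: Gravatar for Ethereum
features:
  - fullTextSearch # required from specVersion 0.0.4 onwards when the schema defines a @fulltext directive
schema:
  file: ./schema.graphql
dataSources:
  # ... data sources unchanged ...
```

Without this declaration, deployment fails validation once the schema uses `@fulltext`.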
## اللغات المدعومة @@ -318,7 +318,7 @@ Supported language dictionaries: Supported algorithms for ordering results: -| Algorithm | Description | -| ------------- | --------------------------------------------------------------- | -| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx index 8f2e787688c2..b7d5f7168427 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## نظره عامة -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. 
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| الاصدار | ملاحظات الإصدار | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx index ba893838ca4e..8cc64d5cdd22 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## نظره عامة -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
الإدخالات الهامة لتحديث manifest هي: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. 
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## معالجات الاستدعاء(Call Handlers) -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### تعريف معالج الاستدعاء @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### دالة الـ Mapping -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## معالجات الكتلة -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### الفلاتر المدعومة @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. 
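The optional call filter described above can be sketched as a manifest fragment like this (the handler name `handleBlockWithCall` is illustrative, not from the original):

```yaml
blockHandlers:
  - handler: handleBlockWithCall # illustrative name
    filter:
      kind: call # handler runs only for blocks containing a call to the data source contract
```

Omitting the `filter` field would instead run the handler on every block, subject to the one-handler-per-filter-type constraint.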
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### دالة الـ Mapping -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## كتل البدء -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| الاصدار | ملاحظات الإصدار | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx index e72d68bef7c8..44c9fedacb10 100644 --- a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: اختبار وحدة Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. 
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## مصادر إضافية -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
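To make the templating approach above concrete, here is a hedged sketch of what the Mustache-based template could look like (the file name `subgraph.template.yaml` and the placeholder keys `network` and `address` are illustrative, chosen to match the per-network config files described above):

```yaml
# subgraph.template.yaml — rendered per network, e.g. with the mustache CLI:
#   mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Each config file then only needs to supply `network` and `address`, and the rendered `subgraph.yaml` is what gets built and deployed.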
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

-## Subgraph Studio subgraph archive policy
+## Subgraph Studio Subgraph archive policy

-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:

- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days

-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.

-Every subgraph affected with this policy has an option to bring the version in question back.
+Every Subgraph affected by this policy has an option to bring the version in question back.

-## Checking subgraph health
+## Checking Subgraph health

-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever.
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

diff --git a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
index d8880ef1a196..1e0826bfe148 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio
---

-Learn how to deploy your subgraph to Subgraph Studio.
+Learn how to deploy your Subgraph to Subgraph Studio.

-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
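The indexing-status fields shown earlier (`chainHeadBlock`, `latestBlock`, `synced`, `health`) lend themselves to a small monitoring check. Below is a sketch that evaluates an already-fetched entry from an `indexingStatuses` response; the nesting mirrors the example status query, but the sample values and the exact response handling here are illustrative:

```python
def blocks_behind(status: dict) -> int:
    """Compute how far a Subgraph's latest indexed block lags the chain head.

    `status` is one entry from `indexingStatuses`, already fetched from the
    index-node endpoint; the nesting mirrors the example query's shape.
    """
    chain = status["chains"][0]
    head = int(chain["chainHeadBlock"]["number"])
    latest = int(chain["latestBlock"]["number"])
    return head - latest

def summarize(status: dict) -> str:
    """Turn a status entry into a one-line human-readable summary."""
    if status["health"] == "failed":
        # `fatalError` carries details when progress has halted.
        return "failed: " + status.get("fatalError", {}).get("message", "unknown")
    lag = blocks_behind(status)
    if not status["synced"]:
        return f"still syncing, {lag} blocks behind"
    return f"healthy, {lag} blocks behind"

if __name__ == "__main__":
    # Made-up sample response entry for demonstration.
    sample = {
        "synced": True,
        "health": "healthy",
        "chains": [{
            "chainHeadBlock": {"number": "21000010"},
            "latestBlock": {"number": "21000000"},
        }],
    }
    print(summarize(sample))  # healthy, 10 blocks behind
```

A check like this can run on a schedule and alert when the lag keeps growing or `health` flips to `failed`.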
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- إنشاء وإدارة مفاتيح API الخاصة بك لـ subgraphs محددة +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs

### How to Create a Subgraph in Subgraph Studio

@@ -57,31 +57,25 @@

### Subgraph Compatibility with The Graph Network

-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- يجب ألا تستخدم أيًا من الميزات التالية:
- ipfs.cat & ipfs.map
- أخطاء غير فادحة
- Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.

## Initialize Your Subgraph

-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:

```bash
graph init <SUBGRAPH_SLUG>
```

-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio; see the image below:

![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
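The scaffolding generated by `graph init` centers on the `subgraph.yaml` manifest. As a rough orientation, a minimal manifest looks something like the sketch below; the data source name, contract address, entity, event, and handler names are all hypothetical placeholders, and version numbers may differ from what the CLI generates for you:

```yaml
specVersion: 0.0.4
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: MyContract # hypothetical data source name
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000000' # placeholder
      abi: MyContract
      startBlock: 0
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - ExampleEntity # hypothetical entity from schema.graphql
      abis:
        - name: MyContract
          file: ./abis/MyContract.json
      eventHandlers:
        - event: ExampleEvent(address,uint256) # hypothetical event signature
          handler: handleExampleEvent
      file: ./src/mapping.ts
```

Treat this as an orientation aid only; the manifest `graph init` produces for your contract is the source of truth.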
## Graph Auth

-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find on your Subgraph details page.

Then, use the following command to authenticate from the CLI:

@@ -91,11 +85,11 @@ graph auth

## Deploying a Subgraph

-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.

-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.

-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:

```bash
graph deploy <SUBGRAPH_SLUG>
@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.

## Testing Your Subgraph

-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.

-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
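Testing with the deployment query URL from your own app typically means POSTing a GraphQL document over HTTP. The sketch below builds such a request with only the Python standard library; the URL, entity name, and query are hypothetical placeholders, so substitute your Subgraph's actual development query URL and schema entities:

```python
import json
from urllib.request import Request

# Hypothetical development query URL copied from your Subgraph's Studio page.
QUERY_URL = "https://example.invalid/query/my-subgraph/v0.0.1"

def build_request(url, query, variables=None):
    """Build a GraphQL-over-HTTP POST request for a Subgraph query endpoint."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    return Request(url, data=payload,
                   headers={"Content-Type": "application/json"})

# Hypothetical entity name; use one defined in your schema.graphql.
EXAMPLE_QUERY = "{ exampleEntities(first: 5) { id } }"

if __name__ == "__main__":
    request = build_request(QUERY_URL, EXAMPLE_QUERY)
    # To actually send it (requires a real endpoint):
    #   from urllib.request import urlopen
    #   with urlopen(request) as response:
    #       print(json.load(response))
    print(request.get_full_url())
```

The same payload shape works from any HTTP client; only the endpoint URL differs between the Studio playground, the development query URL, and a published deployment.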
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).

## Automatic Archiving of Subgraph Versions

-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.

-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

diff --git a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
index f0e9ba0cd865..016a7a8e5a04 100644
--- a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o

## Subgraph Related

-### 1. What is a subgraph?
+### 1. What is a Subgraph?

-A subgraph is a custom API built on blockchain data.
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.

-### 4. Can I change the GitHub account associated with my subgraph?
+### 4. Can I change the GitHub account associated with my Subgraph?

-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.

-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?

-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish it to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.

-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?

-يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.

-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at the `Access to smart contract state` section inside the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).

-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?

Not currently, as mappings are written in AssemblyScript.

@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p

### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?

-ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.

### 10. How are templates different from data sources?

-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.

Check out the "Instantiating a data source template" section of [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).

-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts?
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?

Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.

@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest

If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256`, but this won't make it more unique.

-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?

-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.

## Network Related

@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul

Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)

-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync

Take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)

-### 21.
Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?

Yes! Try the following command, replacing "Organization / subgraphName" with your organization and the name of your Subgraph:

@@ -132,7 +133,7 @@ someCollection(first: 1000, skip: ) { ... }

### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?

-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.

## Miscellaneous

diff --git a/website/src/pages/ar/subgraphs/developing/introduction.mdx b/website/src/pages/ar/subgraphs/developing/introduction.mdx
index d3b71aaab704..946e62affbe7 100644
--- a/website/src/pages/ar/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin

On The Graph, you can:

-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
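To make the "queried via GraphQL" point concrete, a request against a hypothetical Subgraph that indexes token transfers might look like the query below; the entity and field names are illustrative, not from any real schema:

```graphql
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    amount
  }
}
```

The `first`, `orderBy`, and `orderDirection` arguments are the standard pagination and sorting parameters of Graph Node's generated query API; the available entities and fields come from the Subgraph's own `schema.graphql`.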
diff --git a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx index b52ec5cd2843..b2d94218cd67 100644 --- a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
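The bullets above describe querying an indexed Subgraph over GraphQL. As a minimal sketch, a query against a published Subgraph might look like the following — the `tokens` entity and its fields are hypothetical stand-ins for whatever the Subgraph's own `schema.graphql` defines:

```graphql
# Hypothetical entity and fields — real names come from the Subgraph's schema.
{
  tokens(first: 5, orderBy: id) {
    id
    owner
  }
}
```

Collection arguments such as `first` and `orderBy` are generated for each entity by The Graph's GraphQL API.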
## Inside a Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.

-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).

## دورة حياة الـ Subgraph

-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

## Subgraph Development

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
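Since published Subgraphs are queried over plain GraphQL-over-HTTP, the request an application sends can be sketched as below. The gateway URL shape, the API-key and Subgraph-ID placeholders, and the `tokens` entity are illustrative assumptions, not values from this document:

```typescript
// Placeholder endpoint — substitute a real API key and Subgraph ID.
const endpoint =
  "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

// Hypothetical query; entity and field names come from the Subgraph's schema.
const query = `{ tokens(first: 5) { id owner } }`;

// A GraphQL query travels as an HTTP POST with a JSON body holding the query string.
function buildRequest(query: string): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

// Usage (network call shown for context only):
// const res = await fetch(endpoint, buildRequest(query));
// const { data } = await res.json();
console.log(buildRequest(query).body);
```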
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ar/subgraphs/explorer.mdx b/website/src/pages/ar/subgraphs/explorer.mdx index 512be28e8322..57d7712cc383 100644 --- a/website/src/pages/ar/subgraphs/explorer.mdx +++ b/website/src/pages/ar/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## نظره عامة -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- أشر/الغي الإشارة على Subgraphs +- Signal/Un-signal on Subgraphs - اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى -- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs -- استعلم عن subgraphs عن طريق GraphQL -- اختبار subgraphs في playground -- اعرض المفهرسين الذين يفهرسون Subgraphs معين +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ) -- اعرض من قام بنشر ال Subgraphs +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 3. المفوضون Delegators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### تبويب ال Subgraphs -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### تبويب الفهرسة -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. 
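One of the details noted above is that query fees become claimable only after at least 7 epochs have passed since the allocation was closed. A toy TypeScript helper makes the arithmetic concrete — a simplification for intuition, not the protocol's actual claiming logic:

```typescript
// Toy model of the "at least 7 epochs after allocation close" rule
// (an illustrative simplification, not the real protocol implementation).
const DISPUTE_EPOCHS = 7;

function queryFeesClaimable(currentEpoch: number, allocationClosedEpoch: number): boolean {
  return currentEpoch - allocationClosedEpoch >= DISPUTE_EPOCHS;
}

console.log(queryFeesClaimable(10, 2)); // true: 8 epochs have passed
console.log(queryFeesClaimable(8, 2));  // false: only 6 epochs have passed
```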
سترى المقاييس التالية:

@@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de

### تبويب التنسيق Curating

-في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.

ضمن علامة التبويب هذه ، ستجد نظرة عامة حول:

-- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة
-- إجمالي الحصة لكل subgraph
-- مكافآت الاستعلام لكل subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- تحديث في تفاصيل التاريخ

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/ar/subgraphs/guides/arweave.mdx b/website/src/pages/ar/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..4bb8883b4bd0
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages. 
For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not indexing the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. 
AssemblyScript Mappings - `mapping.ts` + +This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## تعريف Subgraph Manifest + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet + +Arweave data sources support two types of handlers: + +- `blockHandlers` - Run on every new Arweave block. 
No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`.
+
+> The source.owner can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## تعريف المخطط
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. 
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).

## Deploying an Arweave Subgraph in Subgraph Studio

Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy it by using the `graph deploy` CLI command.

```bash
graph deploy --access-token
```

## Querying an Arweave Subgraph

The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.

## Example Subgraphs

Here is an example Subgraph for reference:

- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)

## FAQ

### Can a Subgraph index Arweave and other chains?

No, a Subgraph can only support data sources from one chain/network.

### Can I index the stored files on Arweave?

Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).

### Can I identify Bundlr bundles in my Subgraph?

This is not currently supported.

### How can I filter transactions to a specific account?

The `source.owner` can be the user's public key or account address.

### What is the current encryption format?

Data is generally passed into the mappings as Bytes; if stored directly, it is returned in the Subgraph in `hex` format (e.g. block and transaction hashes). You may want to convert it to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:

```
const base64Alphabet = [
  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
];

const base64UrlAlphabet = [
  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
];

function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;

  let result = '', i: i32, l = bytes.length;
  for (i = 2; i < l; i += 3) {
    result += alphabet[bytes[i - 2] >> 2];
    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
    result += alphabet[bytes[i] & 0x3F];
  }
  if (i === l + 1) { // 1 octet yet to write
    result += alphabet[bytes[i - 2] >> 2];
    result += alphabet[(bytes[i - 2] & 0x03) << 4];
    if (!urlSafe) {
      result += "==";
    }
  }
  if (i === l) { // 2 octets yet to write
    result += alphabet[bytes[i - 2] >> 2];
    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
    if (!urlSafe) {
      result += "=";
    }
  }
  return result;
}
```

diff --git a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..84aeda12e0fc
--- /dev/null
+++
b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
---
title: Smart Contract Analysis with Cana CLI
---

Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.

## Overview

**Cana CLI** is a command-line tool that streamlines the analysis of smart contract metadata that is useful for subgraph development, across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.

### Key Features

With Cana CLI, you can:

- Detect deployment blocks
- Verify source code
- Extract ABIs & event signatures
- Identify proxy and implementation contracts
- Support multiple chains

### Prerequisites

Before installing Cana CLI, make sure you have:

- [Node.js v16+](https://nodejs.org/en)
- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
- Block explorer API keys

### Installation & Setup

1. Install Cana CLI

Use npm to install it globally:

```bash
npm install -g contract-analyzer
```

2. Configure Cana CLI

Set up a blockchain environment for analysis:

```bash
cana setup
```

During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.

After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.

### Steps: Using Cana CLI for Smart Contract Analysis

#### 1. Select a Chain

Cana CLI supports multiple EVM-compatible chains.

To list the chains that have been added, run this command:

```bash
cana chains
```

Then select a chain with this command:

```bash
cana chains --switch
```

Once a chain is selected, all subsequent contract analyses will continue on that chain.

#### 2.
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +or + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ar/subgraphs/guides/enums.mdx b/website/src/pages/ar/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..846faecc1706 --- /dev/null +++ b/website/src/pages/ar/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. 
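The same idea exists in most typed languages. As a quick analogy in plain TypeScript (not Subgraph code, purely illustrative), an enum narrows a value to a fixed set of members:

```typescript
// A fixed set of allowed values; anything outside the set is rejected at compile time.
enum TokenStatus {
  OriginalOwner = "OriginalOwner",
  SecondOwner = "SecondOwner",
  ThirdOwner = "ThirdOwner",
}

// Only one of the three declared members can be assigned.
const status: TokenStatus = TokenStatus.SecondOwner;
console.log(status); // "SecondOwner"
```

Trying to assign any other string to `status` is a compile-time error, which is exactly the guarantee a GraphQL enum gives your Subgraph schema.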
### Example of Enums in Your Schema

If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might pass through different ownership stages, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownership stages, ensuring only predefined values are assigned.

You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.

Here's what an enum definition might look like in your schema, based on the example above:

```graphql
enum TokenStatus {
  OriginalOwner
  SecondOwner
  ThirdOwner
}
```

This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.

To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).

## Benefits of Using Enums

- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.

### Without Enums

If you choose to define the type as a string instead of using an Enum, your code might look like this:

```graphql
type Token @entity {
  id: ID!
  tokenId: BigInt!
  owner: Bytes! # Owner of the token
  tokenStatus: String! # String field to track token status
  timestamp: BigInt!
}
```

In this schema, `TokenStatus` is a simple string with no specific, allowed values.

#### Why is this a problem?

- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned.
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.

### With Enums

Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.

Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.

## Defining Enums for NFT Marketplaces

> Note: The following guide uses the CryptoCoven NFT smart contract.

To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:

```gql
# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
enum Marketplace {
  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
  # ...and other marketplaces
}
```

## Using Enums for NFT Marketplaces

Once defined, enums can be used throughout your Subgraph to categorize transactions or events.

For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
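For instance, a sales entity can store the marketplace as an enum field instead of a free-form string (the `Sale` entity below is hypothetical and for illustration only):

```gql
type Sale @entity {
  id: ID!
  marketplace: Marketplace! # Only values declared in the Marketplace enum are valid here
  timestamp: BigInt!
}
```

Any mapping code that tries to save a `Sale` with an unrecognized marketplace string will fail validation instead of silently storing bad data.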
### Implementing a Function for NFT Marketplaces

Here's how you can implement a function to retrieve the marketplace name from the enum as a string:

```ts
export function getMarketplaceName(marketplace: Marketplace): string {
  // Using if-else statements to map the enum value to a string
  if (marketplace === Marketplace.OpenSeaV1) {
    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
  } else if (marketplace === Marketplace.OpenSeaV2) {
    return 'OpenSeaV2'
  } else if (marketplace === Marketplace.SeaPort) {
    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
  } else if (marketplace === Marketplace.LooksRare) {
    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
    // ... and other marketplaces
  } else {
    return 'Unknown' // Fallback so that every code path returns a value
  }
}
```

## Best Practices for Using Enums

- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
- **Documentation:** Add comments to enums to clarify their purpose and usage.

## Using Enums in Queries

Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.

**Specifics**

- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.

### Sample Queries

#### Query 1: Account With The Highest NFT Marketplace Interactions

This query does the following:

- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
```gql
{
  marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
    marketplace
    transactionCount
  }
}
```

#### Result 2

The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:

```gql
{
  "data": {
    "marketplaceInteractions": [
      {
        "marketplace": "Unknown",
        "transactionCount": "222"
      }
    ]
  }
}
```

#### Query 3: Marketplace Interactions with High Transaction Counts

This query does the following:

- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.

```gql
{
  marketplaceInteractions(
    first: 4
    orderBy: transactionCount
    orderDirection: desc
    where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
  ) {
    marketplace
    transactionCount
  }
}
```

#### Result 3

Expected output includes the marketplaces that meet the criteria, each represented by an enum value:

```gql
{
  "data": {
    "marketplaceInteractions": [
      {
        "marketplace": "NFTX",
        "transactionCount": "201"
      },
      {
        "marketplace": "OpenSeaV1",
        "transactionCount": "148"
      },
      {
        "marketplace": "CryptoCoven",
        "transactionCount": "117"
      },
      {
        "marketplace": "OpenSeaV1",
        "transactionCount": "111"
      }
    ]
  }
}
```

## Additional Resources

For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/ar/subgraphs/guides/grafting.mdx b/website/src/pages/ar/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..4b7dad1a54d9
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
---
title: Replace a Contract and Keep its History With Grafting
---

In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.

## What is Grafting?

Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.

The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:

- It adds or removes entity types
- It removes attributes from entity types
- It adds nullable attributes to entity types
- It turns non-nullable attributes into nullable attributes
- It adds values to enums
- It adds or removes interfaces
- It changes for which entity types an interface is implemented

For more information, you can check:

- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)

In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, we will graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.

## Important Note on Grafting When Upgrading to the Network

> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network

### Why Is This Important?
Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.

### Best Practices

**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.

**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.

By adhering to these guidelines, you minimize risks and ensure a smoother migration process.

## Building an Existing Subgraph

Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:

- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)

> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).

## Subgraph Manifest Definition

The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest that you will use:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: Lock
    network: sepolia
    source:
      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
      abi: Lock
      startBlock: 5955690
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Withdrawal
      abis:
        - name: Lock
          file: ./abis/Lock.json
      eventHandlers:
        - event: Withdrawal(uint256,uint256)
          handler: handleWithdrawal
      file: ./src/lock.ts
```

- The `Lock` data source provides the ABI and contract address we will get when we compile and deploy the contract
- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.

## Grafting Manifest Definition

Grafting requires adding two new items to the original Subgraph manifest:

```yaml
---
features:
  - grafting # feature name
graft:
  base: Qm... # Subgraph ID of base Subgraph
  block: 5956000 # block number
```

- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.

The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.

## Deploying the Base Subgraph

1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
2.
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page, in the `graft-example` folder from the repo
3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground:

```graphql
{
  withdrawals(first: 5) {
    id
    amount
    when
  }
}
```

It returns something like this:

```
{
  "data": {
    "withdrawals": [
      {
        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
        "amount": "0",
        "when": "1716394824"
      },
      {
        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
        "amount": "0",
        "when": "1716394848"
      }
    ]
  }
}
```

Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.

## Deploying the Grafting Subgraph

The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.

1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page, in the `graft-replacement` folder from the repo
4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground:

```graphql
{
  withdrawals(first: 5) {
    id
    amount
    when
  }
}
```

It should return the following:

```
{
  "data": {
    "withdrawals": [
      {
        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
        "amount": "0",
        "when": "1716394824"
      },
      {
        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
        "amount": "0",
        "when": "1716394848"
      },
      {
        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
        "amount": "0",
        "when": "1716429732"
      }
    ]
  }
}
```

You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` afterwards, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Events 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.

Congrats! You have successfully grafted a Subgraph onto another Subgraph.
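As an extra sanity check, you can confirm how far the grafted Subgraph has indexed with a `_meta` query; the reported block number should be past the graft `block` (5956000 in this example):

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```

If the number is still below the graft block, the copy of the base Subgraph's data is likely still in progress.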
## Additional Resources

If you want more experience with grafting, here are a few examples for popular contracts:

- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)

To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.

> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)

diff --git a/website/src/pages/ar/subgraphs/guides/near.mdx b/website/src/pages/ar/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..04daec8b6ac7
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
---
title: Building Subgraphs on NEAR
---

This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).

## What is NEAR?

[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.

## What are NEAR Subgraphs?

The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:

- Block handlers: these run on every new block
- Receipt handlers: these run every time a message is executed at a specified account

[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):

> A Receipt is the only actionable object in the system. When we talk about "processing an action" on the NEAR platform, this eventually means "applying Receipts" at some point.

## Building a NEAR Subgraph

`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.

`@graphprotocol/graph-ts` is a library of Subgraph-specific types.

NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.

> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.

There are three aspects of Subgraph definition:

**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.

**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).

**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
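As a rough sketch of such a mapping (the `Greeting` entity and its fields are hypothetical and only for illustration; they are not part of this guide's example), a receipt handler could look like:

```typescript
import { near } from '@graphprotocol/graph-ts'
// Hypothetical entity generated from a schema containing a Greeting type
import { Greeting } from '../generated/schema'

export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
  // The handler receives the receipt together with its execution outcome and block
  const receipt = receiptWithOutcome.receipt
  const entity = new Greeting(receipt.id.toBase58())
  entity.signer = receipt.signerId
  entity.save()
}
```

The handler signature (`near.ReceiptWithOutcome`) matches the NEAR types described later in this guide; the entity and field names would come from your own `schema.graphql`.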
During Subgraph development there are two key commands:

```bash
$ graph codegen # generates types from the schema file identified in the manifest
$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```

### Subgraph Manifest Definition

The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:

```yaml
specVersion: 1.3.0
schema:
  file: ./src/schema.graphql # link to the schema file
dataSources:
  - kind: near
    network: near-mainnet
    source:
      account: app.good-morning.near # This data source will monitor this account
      startBlock: 10662188 # Required for NEAR
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      blockHandlers:
        - handler: handleNewBlock # the function name in the mapping file
      receiptHandlers:
        - handler: handleReceipt # the function name in the mapping file
      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
```

- NEAR Subgraphs introduce a new `kind` of data source (`near`)
- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary, the other field can be omitted.

```yaml
accounts:
  prefixes:
    - app
    - good
  suffixes:
    - morning.near
    - morning.testnet
```

NEAR data sources support two types of handlers:

- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).

### Schema Definition

Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).

### AssemblyScript Mappings

The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).

NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
```typescript
class ExecutionOutcome {
  gasBurnt: u64,
  blockHash: Bytes,
  id: Bytes,
  logs: Array,
  receiptIds: Array,
  tokensBurnt: BigInt,
  executorId: string,
}

class ActionReceipt {
  predecessorId: string,
  receiverId: string,
  id: CryptoHash,
  signerId: string,
  gasPrice: BigInt,
  outputDataReceivers: Array,
  inputDataIds: Array,
  actions: Array,
}

class BlockHeader {
  height: u64,
  prevHeight: u64, // Always zero when version < V3
  epochId: Bytes,
  nextEpochId: Bytes,
  chunksIncluded: u64,
  hash: Bytes,
  prevHash: Bytes,
  timestampNanosec: u64,
  randomValue: Bytes,
  gasPrice: BigInt,
  totalSupply: BigInt,
  latestProtocolVersion: u32,
}

class ChunkHeader {
  gasUsed: u64,
  gasLimit: u64,
  shardId: u64,
  chunkHash: Bytes,
  prevBlockHash: Bytes,
  balanceBurnt: BigInt,
}

class Block {
  author: string,
  header: BlockHeader,
  chunks: Array,
}

class ReceiptWithOutcome {
  outcome: ExecutionOutcome,
  receipt: ActionReceipt,
  block: Block,
}
```

These types are passed to block & receipt handlers:

- Block handlers will receive a `Block`
- Receipt handlers will receive a `ReceiptWithOutcome`

Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.

This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.

## Deploying a NEAR Subgraph

Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Block and Receipt triggers are currently supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs.
In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/ar/subgraphs/guides/polymarket.mdx b/website/src/pages/ar/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+![Polymarket Playground](/img/Polymarket-playground.png)
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+
+You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
+ +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet +2. 
Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example queries the first five positions from the Polymarket Subgraph and prints the response.
+
+### Sample Code from node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..21ac0b74d31d
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
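To make the risk concrete, here is a minimal sketch of the client-side pattern described above (the key, Subgraph ID, and helper name are illustrative, not from this guide) — the key is interpolated straight into the request URL that ships to every visitor's browser:

```javascript
// Anti-pattern sketch: the API key is bundled into client-side JavaScript,
// so it appears verbatim in the URL visible in the browser's network tab.
const API_KEY = 'my-secret-api-key' // illustrative placeholder

// Builds the gateway request exactly as a naive client-side fetch would.
function buildSubgraphRequest(subgraphId, query) {
  return {
    url: `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${subgraphId}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

const { url } = buildSubgraphRequest('SomeSubgraphId', '{ _meta { block { number } } }')
console.log(url.includes(API_KEY)) // true — the secret is readable by anyone
```

The server component approach in this cookbook keeps the same request shape, but constructs and sends it server-side so the key never reaches the bundle.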
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api-key>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <div>
+      <ServerComponent />
+    </div>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..080de99b5ba1
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## Get Started + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. 
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
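As a sketch of what a dependent Subgraph's manifest can look like, a composed Subgraph declares a source Subgraph as a `kind: subgraph` data source with entity-triggered handlers. The names, deployment ID, versions, and entity names below are illustrative; see the example repository referenced above for the exact, working syntax:

```yaml
dataSources:
  - kind: subgraph # a Subgraph, not a contract, is the data source
    name: BlockTime # illustrative name of a source Subgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - BlockStat
      handlers:
        - handler: handleBlock # triggered by entities saved in the source Subgraph
          entity: Block
      file: ./src/mapping.ts
```

Remember that any redeploy of a source Subgraph produces a new deployment ID, which must then be updated in the `source.address` field here.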
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..364fb8ce4d9c
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between the quick changes needed for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to step 1!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3.
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to step 1!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking what?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..a08e2a7ad8c9 --- /dev/null +++ b/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Safe Subgraph Code Generator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Why integrate with Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**: Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..4be3dcedffe8
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/querying/best-practices.mdx b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
index 23dcd2cb8920..f469ff02de9c 100644
--- a/website/src/pages/ar/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: أفضل الممارسات للاستعلام

The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.

-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.

---

@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi

However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:

-- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- نتيجة مكتوبة بالكامل

@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `

### Use a single query to request multiple records

-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`

Example of inefficient querying:
diff --git a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
index 767a2caa9021..08c71fa4ad1f 100644
--- a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: الاستعلام من التطبيق
+sidebarTitle: Querying from an App
---

Learn how to query The Graph from your application.

@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
```

@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
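The gateway endpoint above is a plain GraphQL-over-HTTP endpoint, so no special tooling is required to reach it. The sketch below builds such a request using only the Python standard library; the API key is a placeholder, and the `tokens(first: 5)` query assumes a Subgraph whose schema defines a `Token` entity:

```python
import json
from urllib.request import Request, urlopen

# Placeholder values: substitute your own API key and a real Subgraph ID.
API_KEY = "your-api-key"
SUBGRAPH_ID = "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"
ENDPOINT = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# A GraphQL query is sent as a JSON body with a single "query" field.
# `tokens(first: 5)` assumes the Subgraph schema defines a `Token` entity.
query = "{ tokens(first: 5) { id } }"

def build_request(endpoint: str, query: str) -> Request:
    """Build a POST request carrying the GraphQL query as JSON."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(ENDPOINT, query)

# Uncomment to send the query for real (requires a valid API key):
# with urlopen(req) as resp:
#     print(json.loads(resp.read())["data"])
```

Client libraries such as `graph-client` layer pagination, retries, and typed results on top of exactly this request shape.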
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - نتيجة مكتوبة بالكامل @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/ar/subgraphs/querying/graph-client/README.md b/website/src/pages/ar/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ar/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ar/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx index d73381f88a7d..801e95fa66de 100644 --- a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
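As an illustration of the generated fields, the sketch below derives the two top-level `Query` fields for a hypothetical `Token` entity. The lowercase-plus-`s` pluralization is a simplification (Graph Node's actual pluralization handles more cases), but it shows the shape of the generated API:

```python
def generated_query_fields(entity_type: str) -> tuple[str, str]:
    """Return the (singular, plural) top-level Query fields generated for
    an entity type. Naive 's' pluralization; Graph Node handles more cases."""
    singular = entity_type[0].lower() + entity_type[1:]
    return singular, singular + "s"

# For a schema containing `type Token @entity { ... }`:
singular, plural = generated_query_fields("Token")

# The generated fields are then queried like any GraphQL field:
singular_query = f'{{ {singular}(id: "1") {{ id }} }}'  # one record by ID
plural_query = f'{{ {plural}(first: 5) {{ id }} }}'     # a filtered collection
```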
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
Fulltext search operators:

-| رمز | عامل التشغيل | الوصف |
-| --- | --- | --- |
-| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة |
-| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة |
-| `<->` | `Follow by` | يحدد المسافة بين كلمتين. |
-| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) |
+| رمز | عامل التشغيل | الوصف |
+| ------ | ------------ | --------------------------------------------------------------------------------------------------------------------------- |
+| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة |
+| `\|` | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة |
+| `<->` | `Follow by` | يحدد المسافة بين كلمتين. |
+| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) |

#### Examples

@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`.
The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en

### Subgraph Metadata

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:

```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.

`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
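Applications often poll `_meta` to decide whether indexed data is fresh enough to display. A minimal Python sketch, using an illustrative response (the block values are made up; the deployment CID follows the example format above):

```python
# Illustrative `_meta` query response; values are made up for the sketch.
sample_response = {
    "data": {
        "_meta": {
            "block": {"hash": "0xabc...", "number": 21_000_000},
            "deployment": "QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED",
            "hasIndexingErrors": False,
        }
    }
}

def check_meta(response: dict, min_block: int) -> bool:
    """Return True if the Subgraph has indexed past `min_block` with no errors."""
    meta = response["data"]["_meta"]
    return (not meta["hasIndexingErrors"]) and meta["block"]["number"] >= min_block

print(check_meta(sample_response, 20_000_000))  # → True
```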
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ar/subgraphs/querying/introduction.mdx b/website/src/pages/ar/subgraphs/querying/introduction.mdx index 281957e11e14..bdd0bde88865 100644 --- a/website/src/pages/ar/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ar/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## نظره عامة -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx index 33e9d7b78fc2..7b91a147ef47 100644 --- a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## نظره عامة -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - كمية GRT التي تم صرفها 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - عرض وإدارة أسماء النطاقات المصرح لها باستخدام مفتاح API الخاص بك - - تعيين الـ subgraphs التي يمكن الاستعلام عنها باستخدام مفتاح API الخاص بك + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ar/subgraphs/querying/python.mdx b/website/src/pages/ar/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ar/subgraphs/querying/python.mdx +++ b/website/src/pages/ar/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. 
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. 
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. 
Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need to update the query code manually every time a new version of the Subgraph is published.

Example endpoint that uses Deployment ID:

@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:

## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.

Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/ar/subgraphs/quick-start.mdx b/website/src/pages/ar/subgraphs/quick-start.mdx
index 42f4acf08df9..9b7bf860e87d 100644
--- a/website/src/pages/ar/subgraphs/quick-start.mdx
+++ b/website/src/pages/ar/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: بداية سريعة
---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

![Subgraph command](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Edit your Subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.

-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).

-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph

> Remember, deploying is not the same as publishing.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

![Subgraph logs](/img/subgraph-logs-image.png)

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.

#### Publishing with Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ar/substreams/developing/dev-container.mdx b/website/src/pages/ar/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ar/substreams/developing/dev-container.mdx +++ b/website/src/pages/ar/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ar/substreams/developing/sinks.mdx b/website/src/pages/ar/substreams/developing/sinks.mdx index 8a3a2eda4ff0..40ca8a67080f 100644 --- a/website/src/pages/ar/substreams/developing/sinks.mdx +++ b/website/src/pages/ar/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
index 3e13301b042c..704443dee771 100644
--- a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu

> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.

-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/ar/substreams/developing/solana/transactions.mdx b/website/src/pages/ar/substreams/developing/solana/transactions.mdx index b1b97cdcbfe5..ebdeeb98a931 100644 --- a/website/src/pages/ar/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ar/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ar/substreams/introduction.mdx b/website/src/pages/ar/substreams/introduction.mdx index 774c2dfb90c2..ffb3f46baa62 100644 --- a/website/src/pages/ar/substreams/introduction.mdx +++ b/website/src/pages/ar/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ar/substreams/publishing.mdx b/website/src/pages/ar/substreams/publishing.mdx index 0d3b7933820e..8ee05b0eda53 100644 --- a/website/src/pages/ar/substreams/publishing.mdx +++ b/website/src/pages/ar/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/ar/supported-networks.mdx b/website/src/pages/ar/supported-networks.mdx index 559f4bc25d5e..ac7050638264 100644 --- a/website/src/pages/ar/supported-networks.mdx +++ b/website/src/pages/ar/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: الشبكات المدعومة hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ar/token-api/_meta-titles.json b/website/src/pages/ar/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/ar/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
diff --git a/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV Prices by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
diff --git a/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/ar/token-api/faq.mdx b/website/src/pages/ar/token-api/faq.mdx new file mode 100644 index 000000000000..8c1032894ddb --- /dev/null +++ b/website/src/pages/ar/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## عام + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
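Putting the query parameters from this FAQ together (`network_id`, `limit`, `page`, `age`), a request for older transfer history on a specific chain can be assembled like this. The `/transfers/evm/{address}` path is an assumption inferred from the `/balances/evm/{address}` pattern mentioned below, and the wallet address is a placeholder; verify the exact path against the OpenAPI reference.

```python
from urllib.parse import urlencode

BASE = "https://token-api.thegraph.com"
wallet = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"  # placeholder

params = {
    "network_id": "matic",  # omit to default to Ethereum mainnet
    "limit": 500,           # per-page cap; endpoints return 10 items by default
    "page": 2,              # 1-indexed, so page 2 is the second batch at this limit
    "age": 180,             # days of history; 180 is the maximum window
}

# Send this URL with the usual `Authorization: Bearer <token>` header.
url = f"{BASE}/transfers/evm/{wallet}?{urlencode(params)}"
print(url)
```

Paging through results is then just a matter of incrementing `page` until the `data` array comes back empty.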
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ar/token-api/mcp/claude.mdx b/website/src/pages/ar/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..12a036b6fc24 --- /dev/null +++ b/website/src/pages/ar/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file. 
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key. Otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/ar/token-api/mcp/cline.mdx b/website/src/pages/ar/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..ef98e45939fe --- /dev/null +++ b/website/src/pages/ar/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key. Otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/ar/token-api/mcp/cursor.mdx b/website/src/pages/ar/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..658108d1337b --- /dev/null +++ b/website/src/pages/ar/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). 
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key. Otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. 
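
The ENOENT and "Server disconnected" failures described for Claude Desktop, Cline, and Cursor above usually share one root cause: the launcher resolves a Node runtime older than 18, which lacks the built-in `fetch()`/`Headers` that `@pinax/mcp` relies on. A small standalone script (illustrative only, not shipped with `@pinax/mcp`) can verify the runtime before you touch any editor config:

```javascript
// check-node.js: verify the runtime that will launch @pinax/mcp.
// Node 18+ ships fetch() and Headers as globals; Node 17 and older do not.
const nodeMajor = Number(process.versions.node.split('.')[0])
const hasFetchGlobals = typeof fetch === 'function' && typeof Headers === 'function'

if (nodeMajor < 18 || !hasFetchGlobals) {
  console.error(`Node ${process.versions.node} cannot run @pinax/mcp: it needs Node 18+ with built-in fetch()/Headers`)
  process.exit(1)
}

console.log(`Node ${process.versions.node} looks OK for @pinax/mcp`)
```

If this fails when run as `node check-node.js`, point the `command` field of your MCP configuration at the full path of an up-to-date Node installation's `npx` or `bunx`.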
diff --git a/website/src/pages/ar/token-api/monitoring/get-health.mdx b/website/src/pages/ar/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/ar/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/ar/token-api/monitoring/get-networks.mdx b/website/src/pages/ar/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/ar/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/ar/token-api/monitoring/get-version.mdx b/website/src/pages/ar/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/ar/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/ar/token-api/quick-start.mdx b/website/src/pages/ar/token-api/quick-start.mdx new file mode 100644 index 000000000000..c5fa07fa9371 --- /dev/null +++ b/website/src/pages/ar/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: بداية سريعة +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/cs/about.mdx b/website/src/pages/cs/about.mdx index 256519660a73..1f43c663437f 100644 --- a/website/src/pages/cs/about.mdx +++ b/website/src/pages/cs/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![Grafu vysvětlující, jak Graf používá Uzel grafu k doručování dotazů konzumentům dat](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Průběh se řídí těmito kroky: 1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. 2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. -3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. -4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. ## Další kroky -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx index 050d1a0641aa..df47adfff704 100644 --- a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Zabezpečení zděděné po Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). 
@@ -39,7 +39,7 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphp Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx index 88e1d9e632a2..439e83f3864b 100644 --- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ Výjimkou jsou peněženky s chytrými smlouvami, jako je multisigs: jedná se o Nástroje pro přenos L2 používají k odesílání zpráv z L1 do L2 nativní mechanismus Arbitrum. 
Tento mechanismus se nazývá 'retryable ticket,' a všechny nativní tokenové můstky, včetně můstku Arbitrum GRT, ho používají. Další informace o opakovatelných ticketch naleznete v části [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Při přenosu aktiv (podgraf, podíl, delegace nebo kurátorství) do L2 se odešle zpráva přes můstek Arbitrum GRT, která vytvoří opakovatelný tiket v L2. Nástroj pro převod zahrnuje v transakci určitou hodnotu ETH, která se použije na 1) zaplacení vytvoření tiketu a 2) zaplacení plynu pro provedení tiketu v L2. Se však ceny plynu mohou v době, než je ticket připraven k provedení v režimu L2, měnit. Je možné, že se tento pokus o automatické provedení nezdaří. Když se tak stane, most Arbitrum udrží opakovatelný tiket naživu až 7 dní a kdokoli se může pokusit o jeho "vykoupení" (což vyžaduje peněženku s určitým množstvím ETH propojenou s mostem Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Tomuto kroku říkáme 'Potvrzení' ve všech nástrojích pro přenos - ve většině případů se spustí automaticky, protože automatické provedení je většinou úspěšné, ale je důležité, abyste se ujistili, že proběhlo. Pokud se to nepodaří a během 7 dnů nedojde k žádnému úspěšnému opakování, můstek Arbitrum tiket zahodí a vaše aktiva (podgraf, podíl, delegace nebo kurátorství) budou ztracena a nebude možné je obnovit. 
Vývojáři The Graph jádra mají k dispozici monitorovací systém, který tyto situace odhaluje a snaží se lístky uplatnit dříve, než bude pozdě, ale v konečném důsledku je vaší odpovědností zajistit, aby byl váš přenos dokončen včas. Pokud máte potíže s potvrzením transakce, obraťte se na nás pomocí [tohoto formuláře](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) a hlavní vývojáři vám pomohou. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Zahájil jsem přenos delegace/podílů/kurátorství a nejsem si jistý, zda se to dostalo do L2. Jak mohu potvrdit, že to bylo přeneseno správně? @@ -36,43 +36,43 @@ Pokud máte k dispozici hash transakce L1 (který zjistíte, když se podíváte ## Podgraf přenos -### Jak mohu přenést svůj podgraf? +### How do I transfer my Subgraph? -Chcete-li přenést svůj podgraf, musíte provést následující kroky: +To transfer your Subgraph, you will need to complete the following steps: 1. Zahájení převodu v mainnet Ethereum 2. Počkejte 20 minut na potvrzení -3. Potvrzení přenosu podgrafů na Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. 
Úplné zveřejnění podgrafu na arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Aktualizovat adresu URL dotazu (doporučeno) -\*Upozorňujeme, že převod musíte potvrdit do 7 dnů, jinak může dojít ke ztrátě vašeho podgrafu. Ve většině případů se tento krok provede automaticky, ale v případě prudkého nárůstu cen plynu na Arbitru může být nutné ruční potvrzení. Pokud se během tohoto procesu vyskytnou nějaké problémy, budou k dispozici zdroje, které vám pomohou: kontaktujte podporu na adrese support@thegraph.com nebo na [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Odkud mám iniciovat převod? -Přenos můžete zahájit v [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) nebo na libovolné stránce s detaily subgrafu. "Kliknutím na tlačítko 'Transfer Subgraph' na stránce s podrobnostmi o podgrafu zahájíte přenos. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Jak dlouho musím čekat, než bude můj podgraf přenesen +### How long do I need to wait until my Subgraph is transferred Přenos trvá přibližně 20 minut. Most Arbitrum pracuje na pozadí a automaticky dokončí přenos mostu. V některých případech může dojít ke zvýšení nákladů na plyn a transakci bude nutné potvrdit znovu. -### Bude můj podgraf zjistitelný i poté, co jej přenesu do L2? +### Will my Subgraph still be discoverable after I transfer it to L2? 
-Váš podgraf bude zjistitelný pouze v síti, ve které je publikován. Pokud se například váš subgraf nachází na Arbitrum One, pak jej najdete pouze v Průzkumníku na Arbitrum One a na Ethereum jej nenajdete. Ujistěte se, že máte v přepínači sítí v horní části stránky vybranou možnost Arbitrum One, abyste se ujistili, že jste ve správné síti. Po přenosu se podgraf L1 zobrazí jako zastaralý. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Musí být můj podgraf zveřejněn, abych ho mohl přenést? +### Does my Subgraph need to be published to transfer it? -Abyste mohli využít nástroj pro přenos subgrafů, musí být váš subgraf již zveřejněn v mainnet Ethereum a musí mít nějaký kurátorský signál vlastněný peněženkou, která subgraf vlastní. Pokud váš subgraf není zveřejněn, doporučujeme vám jednoduše publikovat přímo na Arbitrum One - související poplatky za plyn budou podstatně nižší. Pokud chcete přenést publikovaný podgraf, ale účet vlastníka na něm nemá kurátorský signál, můžete z tohoto účtu signalizovat malou částku (např. 1 GRT); nezapomeňte zvolit "auto-migrating" signál. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 
1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Co se stane s verzí mého subgrafu na ethereum mainnet po převodu na Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Po převedení vašeho subgrafu na Arbitrum bude verze mainnet Ethereum zastaralá. Doporučujeme vám aktualizovat adresu URL dotazu do 48 hodin. Je však zavedena ochranná lhůta, která udržuje adresu URL mainnet funkční, aby bylo možné aktualizovat podporu dapp třetích stran. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Musím po převodu také znovu publikovat na Arbitrum? @@ -80,21 +80,21 @@ Po uplynutí 20minutového okna pro převod budete muset převod potvrdit transa ### Dojde při opětovném publikování k výpadku mého koncového bodu? -Je nepravděpodobné, ale je možné, že dojde ke krátkému výpadku v závislosti na tom, které indexátory podporují podgraf na L1 a zda jej indexují, dokud není podgraf plně podporován na L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Je publikování a verzování na L2 stejné jako na mainnet Ethereum Ethereum? -Ano. Při publikování v aplikaci Subgraph Studio vyberte jako publikovanou síť Arbitrum One. Ve Studiu bude k dispozici nejnovější koncový bod, který odkazuje na nejnovější aktualizovanou verzi podgrafu. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Bude se kurátorství mého podgrafu pohybovat spolu s mým podgrafem? 
+### Will my Subgraph's curation move with my Subgraph? -Pokud jste zvolili automatickou migraci signálu, 100 % vaší vlastní kurátorství se přesune spolu s vaším subgrafem do Arbitrum One. Veškerý signál kurátorství podgrafu bude v okamžiku převodu převeden na GRT a GRT odpovídající vašemu signálu kurátorství bude použit k ražbě signálu na podgrafu L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Ostatní kurátoři se mohou rozhodnout, zda stáhnou svou část GRT, nebo ji také převedou na L2, aby vyrazili signál na stejném podgraf. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Mohu svůj subgraf po převodu přesunout zpět do mainnet Ethereum? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Po přenosu bude vaše verze tohoto podgrafu v síti Ethereum mainnet zneplatněna. Pokud se chcete přesunout zpět do mainnetu, musíte provést nové nasazení a publikovat zpět do mainnet. Převod zpět do mainnetu Etherea se však důrazně nedoporučuje, protože odměny za indexování budou nakonec distribuovány výhradně na Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Proč potřebuji k dokončení převodu překlenovací ETH? @@ -206,19 +206,19 @@ Chcete-li přenést své kurátorství, musíte provést následující kroky: \*Pokud je to nutné - tj. používáte smluvní adresu. -### Jak se dozvím, že se mnou kurátorovaný podgraf přesunul do L2? 
+### How will I know if the Subgraph I curated has moved to L2?

-Při zobrazení stránky s podrobnostmi podgrafu se zobrazí banner s upozorněním, že tento podgraf byl přenesen. Můžete následovat výzvu k přenosu kurátorství. Tyto informace najdete také na stránce s podrobnostmi o podgrafu, který se přesunul.
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved.

### Co když si nepřeji přesunout své kurátorství do L2?

-Pokud je podgraf vyřazen, máte možnost stáhnout svůj signál. Stejně tak pokud se podgraf přesunul do L2, můžete si vybrat, zda chcete stáhnout svůj signál v mainnet Ethereum, nebo signál poslat do L2.
+When a Subgraph is deprecated, you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal on Ethereum mainnet or send the signal to L2.

### Jak poznám, že se moje kurátorství úspěšně přeneslo?

Podrobnosti o signálu budou k dispozici prostřednictvím Průzkumníka přibližně 20 minut po spuštění nástroje pro přenos L2.

-### Mohu přenést své kurátorství na více než jeden podgraf najednou?
+### Can I transfer my curation on more than one Subgraph at a time?

V současné době není k dispozici možnost hromadného přenosu.

@@ -266,7 +266,7 @@ Nástroj pro převod L2 dokončí převod vašeho podílu přibližně za 20 min

### Musím před převodem svého podílu indexovat na Arbitrum?

-Před nastavením indexování můžete nejprve efektivně převést svůj podíl, ale nebudete si moci nárokovat žádné odměny na L2, dokud nepřidělíte podgrafy na L2, neindexujete je a nepředložíte POIs.
+You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs.
### Mohou delegáti přesunout svou delegaci dříve, než přesunu svůj indexovací podíl? diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx index 69717e46ed39..94b78981db6b 100644 --- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ Graph usnadnil přechod na úroveň L2 v Arbitrum One. Pro každého účastník Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Jak přenést podgraf do Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Výhody přenosu podgrafů +## Benefits of transferring your Subgraphs Komunita a hlavní vývojáři Graphu se v uplynulém roce [připravovali](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) na přechod na Arbitrum. Arbitrum, blockchain druhé vrstvy neboli "L2", zdědil bezpečnost po Ethereum, ale poskytuje výrazně nižší poplatky za plyn. -Když publikujete nebo aktualizujete svůj subgraf v síti The Graph Network, komunikujete s chytrými smlouvami na protokolu, což vyžaduje platbu za plyn pomocí ETH. Přesunutím subgrafů do Arbitrum budou veškeré budoucí aktualizace subgrafů vyžadovat mnohem nižší poplatky za plyn. Nižší poplatky a skutečnost, že křivky vazby kurátorů na L2 jsou ploché, také usnadňují ostatním kurátorům kurátorství na vašem podgrafu, což zvyšuje odměny pro indexátory na vašem podgrafu. Toto prostředí s nižšími náklady také zlevňuje indexování a obsluhu subgrafu pro indexátory. Odměny za indexování se budou v následujících měsících na Arbitrum zvyšovat a na mainnetu Ethereum snižovat, takže stále více indexerů bude převádět své podíly a zakládat své operace na L2. 
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Porozumění tomu, co se děje se signálem, podgrafem L1 a adresami URL dotazů +## Understanding what happens with signal, your L1 Subgraph and query URLs -Při přenosu podgrafu do Arbitrum se používá můstek Arbitrum GRT, který zase používá nativní můstek Arbitrum k odeslání podgrafu do L2. Při "přenosu" se subgraf v mainnetu znehodnotí a odešlou se informace pro opětovné vytvoření subgrafu v L2 pomocí mostu. Zahrnuje také GRT vlastníka podgrafu, který již byl signalizován a který musí být větší než nula, aby most převod přijal. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Pokud zvolíte převod podgrafu, převede se veškerý signál kurátoru podgrafu na GRT. To je ekvivalentní "znehodnocení" podgrafu v síti mainnet. GRT odpovídající vašemu kurátorství budou spolu s podgrafem odeslány na L2, kde budou vaším jménem použity k ražbě signálu. 
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Ostatní kurátoři se mohou rozhodnout, zda si stáhnou svůj podíl GRT, nebo jej také převedou na L2, aby na stejném podgrafu vyrazili signál. Pokud vlastník podgrafu nepřevede svůj podgraf na L2 a ručně jej znehodnotí prostřednictvím volání smlouvy, pak budou Kurátoři upozorněni a budou moci stáhnout svou kurátorskou funkci. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Jakmile je podgraf převeden, protože veškerá kurátorská činnost je převedena na GRT, indexátoři již nebudou dostávat odměny za indexování podgrafu. Budou však existovat indexátory, které 1) budou obsluhovat převedené podgrafy po dobu 24 hodin a 2) okamžitě začnou indexovat podgraf na L2. Protože tyto Indexery již mají podgraf zaindexovaný, nemělo by být nutné čekat na synchronizaci podgrafu a bude možné se na podgraf na L2 dotazovat téměř okamžitě. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. 
-Dotazy do podgrafu L2 bude nutné zadávat na jinou adresu URL (na `arbitrum-gateway.thegraph.com`), ale adresa URL L1 bude fungovat nejméně 48 hodin. Poté bude brána L1 přeposílat dotazy na bránu L2 (po určitou dobu), což však zvýší latenci, takže se doporučuje co nejdříve přepnout všechny dotazy na novou adresu URL.
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency, so it is recommended to switch all your queries to the new URL as soon as possible.

## Výběr peněženky L2

-Když jste publikovali svůj podgraf na hlavní síti (mainnet), použili jste připojenou peněženku, která vlastní NFT reprezentující tento podgraf a umožňuje vám publikovat aktualizace.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.

-Při přenosu podgrafu do Arbitrum si můžete vybrat jinou peněženku, která bude vlastnit tento podgraf NFT na L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.

Pokud používáte "obyčejnou" peněženku, jako je MetaMask (externě vlastněný účet nebo EOA, tj. peněženka, která není chytrým kontraktem), pak je to volitelné a doporučuje se zachovat stejnou adresu vlastníka jako v L1.

-Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. Trezor), pak je nutné zvolit jinou adresu peněženky L2, protože je pravděpodobné, že tento účet existuje pouze v mainnetu a nebudete moci provádět transakce na Arbitrum pomocí této peněženky. Pokud chcete i nadále používat peněženku s chytrým kontraktem nebo multisig, vytvořte si na Arbitrum novou peněženku a její adresu použijte jako vlastníka L2 svého subgrafu.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Je velmi důležité používat adresu peněženky, kterou máte pod kontrolou a která může provádět transakce na Arbitrum. V opačném případě bude podgraf ztracen a nebude možné jej obnovit.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Příprava na převod: přemostění některých ETH -Přenos podgrafu zahrnuje odeslání transakce přes můstek a následné provedení další transakce na Arbitrum. První transakce využívá ETH na mainnetu a obsahuje nějaké ETH na zaplacení plynu, když je zpráva přijata na L2. Pokud však tento plyn nestačí, je třeba transakci zopakovat a zaplatit za plyn přímo na L2 (to je 'Krok 3: Potvrzení převodu' níže). Tento krok musí být proveden do 7 dnů od zahájení převodu\*\*. Druhá transakce ('Krok 4: Dokončení převodu na L2') bude navíc provedena přímo na Arbitrum. Z těchto důvodů budete potřebovat nějaké ETH na peněžence Arbitrum. Pokud používáte multisig nebo smart contract účet, ETH bude muset být v běžné peněžence (EOA), kterou používáte k provádění transakcí, nikoli na samotné multisig peněžence. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). 
This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. ETH si můžete koupit na některých burzách a vybrat přímo na Arbitrum, nebo můžete použít most Arbitrum a poslat ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Vzhledem k tomu, že poplatky za plyn na Arbitrum jsou nižší, mělo by vám stačit jen malé množství. Doporučujeme začít na nízkém prahu (např. 0.01 ETH), aby byla vaše transakce schválena. -## Hledání nástroje pro přenos podgrafu +## Finding the Subgraph Transfer Tool -Nástroj pro přenos L2 najdete při prohlížení stránky svého podgrafu v aplikaci Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Je k dispozici také v Průzkumníku, pokud jste připojeni k peněžence, která vlastní podgraf, a na stránce tohoto podgrafu v Průzkumníku: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Kliknutím na tlačítko Přenést na L2 otevřete nástroj pro přenos, kde mů ## Krok 1: Zahájení přenosu -Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit podgraf na L2 (viz výše "Výběr peněženky L2"), a důrazně doporučujeme mít na Arbitrum již přemostěné ETH pro plyn (viz výše "Příprava na převod: přemostění některých ETH"). 
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-Vezměte prosím na vědomí, že přenos podgrafu vyžaduje nenulové množství signálu na podgrafu se stejným účtem, který vlastní podgraf; pokud jste na podgrafu nesignalizovali, budete muset přidat trochu kurátorství (stačí přidat malé množství, například 1 GRT).
+Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).

-Po otevření nástroje Transfer Tool budete moci do pole "Receiving wallet address" zadat adresu peněženky L2 - **ujistěte se, že jste zadali správnou adresu**. Kliknutím na Transfer Subgraph budete vyzváni k provedení transakce na vaší peněžence (všimněte si, že je zahrnuta určitá hodnota ETH, abyste zaplatili za plyn L2); tím se zahájí přenos a znehodnotí váš subgraf L1 (více podrobností o tom, co se děje v zákulisí, najdete výše v části "Porozumění tomu, co se děje se signálem, vaším subgrafem L1 a URL dotazů").
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).

-Pokud tento krok provedete, ujistěte se, že jste pokračovali až do dokončení kroku 3 za méně než 7 dní, jinak se podgraf a váš signál GRT ztratí.
To je způsobeno tím, jak funguje zasílání zpráv L1-L2 na Arbitrum: zprávy, které jsou zasílány přes most, jsou "Opakovatelný tiket", které musí být provedeny do 7 dní, a počáteční provedení může vyžadovat opakování, pokud dojde ke skokům v ceně plynu na Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.

![Start the transfer to L2](/img/startTransferL2.png)

-## Krok 2: Čekání, až se podgraf dostane do L2
+## Step 2: Waiting for the Subgraph to get to L2

-Po zahájení přenosu se musí zpráva, která odesílá podgraf L1 do L2, šířit přes můstek Arbitrum. To trvá přibližně 20 minut (můstek čeká, až bude blok mainnetu obsahující transakci "bezpečný" před případnými reorgy řetězce).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).

Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení přenosu na základě smluv L2.

@@ -80,7 +80,7 @@ Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení p

## Krok 3: Potvrzení převodu

-Ve většině případů se tento krok provede automaticky, protože plyn L2 obsažený v kroku 1 by měl stačit k provedení transakce, která přijímá podgraf na smlouvách Arbitrum. V některých případech je však možné, že prudký nárůst cen plynu na Arbitrum způsobí selhání tohoto automatického provedení. V takovém případě bude "ticket", který odešle subgraf na L2, čekat na vyřízení a bude vyžadovat opakování pokusu do 7 dnů.
+In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. V takovém případě se musíte připojit pomocí peněženky L2, která má nějaké ETH na Arbitrum, přepnout síť peněženky na Arbitrum a kliknutím na "Confirm Transfer" zopakovat transakci. @@ -88,33 +88,33 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n ## Krok 4: Dokončení přenosu na L2 -V tuto chvíli byly váš podgraf a GRT přijaty na Arbitrum, ale podgraf ještě není zveřejněn. Budete se muset připojit pomocí peněženky L2, kterou jste si vybrali jako přijímající peněženku, přepnout síť peněženky na Arbitrum a kliknout na "Publikovat subgraf" +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Tím se podgraf zveřejní, aby jej mohly začít obsluhovat indexery pracující na Arbitrum. Rovněž bude zminován kurátorský signál pomocí GRT, které byly přeneseny z L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Krok 5: Aktualizace URL dotazu -Váš podgraf byl úspěšně přenesen do Arbitrum! 
Chcete-li se na podgraf zeptat, nová URL bude:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-Všimněte si, že ID podgrafu v Arbitrum bude jiné než to, které jste měli v mainnetu, ale vždy ho můžete najít v Průzkumníku nebo Studiu. Jak je uvedeno výše (viz "Pochopení toho, co se děje se signálem, vaším subgrafem L1 a URL dotazů"), stará URL adresa L1 bude po krátkou dobu podporována, ale jakmile bude subgraf synchronizován na L2, měli byste své dotazy přepnout na novou adresu.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## Jak přenést kurátorství do služby Arbitrum (L2)

-## Porozumění tomu, co se děje s kurátorstvím při přenosu podgrafů do L2
+## Understanding what happens to curation on Subgraph transfers to L2

-Když vlastník podgrafu převede podgraf do Arbitrum, je veškerý signál podgrafu současně převeden na GRT. To se týká "automaticky migrovaného" signálu, tj. signálu, který není specifický pro verzi podgrafu nebo nasazení, ale který následuje nejnovější verzi podgrafu.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-Tento převod ze signálu na GRT je stejný, jako kdyby vlastník podgrafu zrušil podgraf v L1.
Při depreciaci nebo převodu subgrafu se současně "spálí" veškerý kurátorský signál (pomocí kurátorské vazební křivky) a výsledný GRT je držen inteligentním kontraktem GNS (tedy kontraktem, který se stará o upgrade subgrafu a automatickou migraci signálu). Každý kurátor na tomto subgrafu má tedy nárok na tento GRT úměrný množství podílů, které měl na subgrafu.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-Část těchto GRT odpovídající vlastníkovi podgrafu je odeslána do L2 spolu s podgrafem.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-V tomto okamžiku se za kurátorský GRT již nebudou účtovat žádné poplatky za dotazování, takže kurátoři se mohou rozhodnout, zda svůj GRT stáhnou, nebo jej přenesou do stejného podgrafu na L2, kde může být použit k ražbě nového kurátorského signálu. S tímto úkonem není třeba spěchat, protože GRT lze pomáhat donekonečna a každý dostane částku úměrnou svému podílu bez ohledu na to, kdy tak učiní.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## Výběr peněženky L2

@@ -130,9 +130,9 @@ Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např.
T Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit kurátorství na L2 (viz výše "Výběr peněženky L2"), a doporučujeme mít nějaké ETH pro plyn již přemostěné na Arbitrum pro případ, že byste potřebovali zopakovat provedení zprávy na L2. ETH můžete nakoupit na některých burzách a vybrat si ho přímo na Arbitrum, nebo můžete použít Arbitrum bridge pro odeslání ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - protože poplatky za plyn na Arbitrum jsou tak nízké, mělo by vám stačit jen malé množství, např. 0,01 ETH bude pravděpodobně více než dostačující. -Pokud byl podgraf, do kterého kurátor provádí kurátorství, převeden do L2, zobrazí se v Průzkumníku zpráva, že kurátorství provádíte do převedeného podgrafu. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Při pohledu na stránku podgrafu můžete zvolit stažení nebo přenos kurátorství. Kliknutím na "Přenést signál do Arbitrum" otevřete nástroj pro přenos. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n ## Odstranění vašeho kurátorství na L1 -Pokud nechcete posílat GRT na L2 nebo byste raději překlenuli GRT ručně, můžete si na L1 stáhnout svůj kurátorovaný GRT. Na banneru na stránce podgrafu zvolte "Withdraw Signal" a potvrďte transakci; GRT bude odeslán na vaši adresu kurátora. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
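The transfer guide above says queries must move to the new gateway host (`arbitrum-gateway.thegraph.com`) while keeping the `/api/[api-key]/subgraphs/id/[l2-subgraph-id]` path shape. A minimal sketch of building the post-transfer URL; the API key and Subgraph ID values below are hypothetical placeholders, not values from this guide:

```python
# Sketch only: builds the L2 gateway query URL in the shape shown in the guide.
# "my-api-key" and "QmExampleL2Id" are hypothetical placeholders; substitute
# your real API key and the new L2 Subgraph ID from Explorer or Studio.

L2_GATEWAY = "https://arbitrum-gateway.thegraph.com"

def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    """Return the query URL for a Subgraph after its transfer to Arbitrum."""
    return f"{L2_GATEWAY}/api/{api_key}/subgraphs/id/{l2_subgraph_id}"

print(l2_query_url("my-api-key", "QmExampleL2Id"))
```

Remember that the Subgraph ID on L2 differs from the mainnet one, so the ID segment must be updated as well, not just the host.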
diff --git a/website/src/pages/cs/archived/sunrise.mdx b/website/src/pages/cs/archived/sunrise.mdx index 71b86ac159ff..52e8c90d7708 100644 --- a/website/src/pages/cs/archived/sunrise.mdx +++ b/website/src/pages/cs/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## Jaký byl úsvit decentralizovaných dat? -Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### Co se stalo s hostovanou službou? -Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Měla tato aktualizace vliv na Podgraf Studio? Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba. -### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť? 
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).

## O Upgrade Indexer

> Aktualizace Indexer je v současné době aktivní.

-Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

### Co dělá upgrade Indexer?

-- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.

- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Proč Edge & Node spouští aktualizaci Indexer? -Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### Co znamená upgrade indexeru pro stávající indexery? Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování. -Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### Co to znamená pro delegáti? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dřív The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
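The retirement conditions described in the answers above — the upgrade Indexer serves a Subgraph only until at least three other Indexers serve it reliably, and drops it after 30 days without queries — can be sketched as a small decision function. This is a hypothetical illustration of the stated policy, not the upgrade Indexer's actual implementation; all names and thresholds are taken from the FAQ text.

```python
from datetime import datetime, timedelta

# Thresholds as stated in the FAQ above; constant names are illustrative.
MIN_OTHER_INDEXERS = 3
INACTIVITY_WINDOW = timedelta(days=30)

def should_support(other_healthy_indexers: int, last_queried_at: datetime, now: datetime) -> bool:
    """Sketch of the upgrade Indexer's fallback rule: keep serving a
    Subgraph only while fewer than three other Indexers serve it
    reliably, and stop after 30 days without queries."""
    if now - last_queried_at > INACTIVITY_WINDOW:
        return False  # no query demand in the last 30 days
    return other_healthy_indexers < MIN_OTHER_INDEXERS
```

Under this sketch, query volume to the upgrade Indexer trends toward zero exactly as the FAQ describes: as soon as the network supplies enough healthy Indexers, the function returns `False` and the fallback steps aside.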
diff --git a/website/src/pages/cs/global.json b/website/src/pages/cs/global.json index c431472eb4f5..59211940d133 100644 --- a/website/src/pages/cs/global.json +++ b/website/src/pages/cs/global.json @@ -6,6 +6,7 @@ "subgraphs": "Podgrafy", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Popis", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Popis", + "liveResponse": "Live Response", + "example": "Příklad" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/cs/index.json b/website/src/pages/cs/index.json index 2cea19c4ff1a..545b2b717b56 100644 --- a/website/src/pages/cs/index.json +++ b/website/src/pages/cs/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Podgrafy", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Podporované sítě", + "details": "Network Details", + "services": "Services", + "type": "Typ", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Dokumenty", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Podgrafy", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Fakturace", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/cs/indexing/chain-integration-overview.mdx b/website/src/pages/cs/indexing/chain-integration-overview.mdx index e048421d7ad9..a2f1eed58864 100644 --- a/website/src/pages/cs/indexing/chain-integration-overview.mdx +++ b/website/src/pages/cs/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze ### 2. Co se stane, když podpora Firehose & Substreams přijde až poté, co bude síť podporována v mainnet? -To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. 
Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/cs/indexing/new-chain-integration.mdx b/website/src/pages/cs/indexing/new-chain-integration.mdx index 5eb78fc9efbd..2954c7f0b494 100644 --- a/website/src/pages/cs/indexing/new-chain-integration.mdx +++ b/website/src/pages/cs/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
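One requirement in the RPC method list above is that `eth_getTransactionReceipt` be accepted **in a JSON-RPC batch request**. A JSON-RPC 2.0 batch is simply a JSON array of ordinary request objects, as the sketch below shows; the helper name and placeholder hashes are illustrative, not taken from Graph Node's source.

```python
import json

def receipt_batch(tx_hashes):
    # A JSON-RPC 2.0 batch is an array of standard request objects;
    # the RPC node must accept this shape for eth_getTransactionReceipt.
    return [
        {"jsonrpc": "2.0", "id": i, "method": "eth_getTransactionReceipt", "params": [h]}
        for i, h in enumerate(tx_hashes)
    ]

# Illustrative placeholder hashes; POST `body` to the chain's JSON-RPC endpoint.
body = json.dumps(receipt_batch(["0x11...", "0x22..."]))
```

Batching like this is part of why Firehose's single-stream model is such a large win: the many per-receipt round trips that JSON-RPC indexing needs are replaced by one continuous stream.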
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Config uzlu grafu -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). 
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/cs/indexing/overview.mdx b/website/src/pages/cs/indexing/overview.mdx index 52eda54899f1..47b88923efa8 100644 --- a/website/src/pages/cs/indexing/overview.mdx +++ b/website/src/pages/cs/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexery jsou operátoři uzlů v síti Graf, kteří sázejí graf tokeny (GRT) GRT, který je v protokolu založen, podléhá období rozmrazování a může být zkrácen, pokud jsou indexátory škodlivé a poskytují aplikacím nesprávná data nebo pokud indexují nesprávně. Indexátoři také získávají odměny za delegované sázky od delegátů, aby přispěli do sítě. -Indexátory vybírají podgrafy k indexování na základě signálu kurátorů podgrafů, přičemž kurátoři sázejí na GRT, aby určili, které podgrafy jsou vysoce kvalitní a měly by být upřednostněny. Spotřebitelé (např. aplikace) mohou také nastavit parametry, podle kterých indexátoři zpracovávají dotazy pro jejich podgrafy, a nastavit preference pro stanovení ceny poplatků za dotazy. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g.
applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. 
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Uzel Graf -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Uzel Graf -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes Subgraph queries on to Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.

- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

@@ -525,7 +525,7 @@ graph indexer status

#### Indexer management using Indexer CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent, known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using the **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate the **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to `never`, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
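As a rough sketch of how such threshold rules are set in practice via the Indexer CLI (the values and the deployment ID below are illustrative placeholders, not recommendations):

```bash
# Apply a global default: under `rules` decisionBasis, only deployments
# meeting these thresholds are picked up automatically.
graph indexer rules set global minStake 5 minSignal 1 decisionBasis rules

# Pin one specific deployment regardless of thresholds; this sets its
# decisionBasis to `always` (placeholder deployment ID).
graph indexer rules start QmPlaceholderDeploymentId

# Inspect the resulting rules, using `all` as the deployment ID.
graph indexer rules get all
```
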
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that perform day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters: `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync, or that have some chance of failing non-deterministically.

diff --git a/website/src/pages/cs/indexing/supported-network-requirements.mdx b/website/src/pages/cs/indexing/supported-network-requirements.mdx
index a81118cec231..e3d76e7c7767 100644
--- a/website/src/pages/cs/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/cs/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@ title: Supported Network Requirements
---

-| Síť | Guides | System Requirements | Indexing Rewards |
-| --- | --- | --- | :-: |
-| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Síť | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/cs/indexing/tap.mdx b/website/src/pages/cs/indexing/tap.mdx index f8d028634016..6063720aca9d 100644 --- a/website/src/pages/cs/indexing/tap.mdx +++ b/website/src/pages/cs/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Přehled -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.

For each query, the gateway will send you a `signed receipt` that is stored in your database. Then, these receipts will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts, which will generate a new RAV with an increased value.

@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed

| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |

-### Požadavky
+### Prerequisites

-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`.

-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)

-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/cs/indexing/tooling/graph-node.mdx b/website/src/pages/cs/indexing/tooling/graph-node.mdx index 88ddb88813fb..3b71056d71f9 100644 --- a/website/src/pages/cs/indexing/tooling/graph-node.mdx +++ b/website/src/pages/cs/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Uzel Graf --- -Graf Uzel je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data k dotazování prostřednictvím rozhraní GraphQL API. Jako taková je ústředním prvkem zásobníku indexeru a její správná činnost je pro úspěšný provoz indexeru klíčová. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Uzel Graf -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).

### PostgreSQL database

-Hlavní úložiště pro uzel Graf Uzel, kde jsou uložena data podgrafů, metadata o podgraf a síťová data týkající se podgrafů, jako je bloková cache a cache eth_call.
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache.

### Network clients

In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client, or it could be a more complex setup that load balances across multiple clients.

-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
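These provider capabilities can be declared per provider in `config.toml`, so Graph Node knows which endpoints can serve `eth_call`-heavy or trace-dependent Subgraphs. A rough sketch (labels and URLs are placeholders):

```toml
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
provider = [
  # A full node for ordinary indexing work (placeholder URL).
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
  # An archive node with trace support, for eth_calls and trace_filter needs.
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
]
```
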
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).

### IPFS Nodes

-Metadata nasazení podgrafů jsou uložena v síti IPFS. Uzel Graf přistupuje během nasazení podgrafu především k uzlu IPFS, aby načetl manifest podgrafu a všechny propojené soubory. Síťové indexery nemusí hostit vlastní uzel IPFS. Uzel IPFS pro síť je hostován na adrese https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

### Prometheus metrics server

@@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit

When it is running, Graph Node exposes the following ports:

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.

## Advanced Graph Node configuration

-V nejjednodušším případě lze Graf Uzel provozovat s jednou instancí Graf Uzel, jednou databází PostgreSQL, uzlem IPFS a síťovými klienty podle potřeby indexovaných podgrafů.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.

This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.

@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:

#### Multiple Graph Nodes

-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g.
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules).

> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.

#### Deployment rules

-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.

An example deployment rule configuration:

@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
"index_node_community_0",
@@ -167,11 +167,11 @@ Každý uzel, jehož --node-id odpovídá regulárnímu výrazu, bude nastaven t

For most use cases, a single Postgres database is sufficient to support a graph-node instance.
Pokud instance graf uzlu přeroste rámec jedné databáze Postgres, je možné rozdělit ukládání dat grafového uzlu do více databází Postgres. Všechny databáze dohromady tvoří úložiště instance graf uzlu. Každá jednotlivá databáze se nazývá shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding se stává užitečným, když vaše stávající databáze nedokáže udržet krok se zátěží, kterou na ni Graf Uzel vyvíjí, a když už není možné zvětšit velikost databáze. -> Obecně je lepší vytvořit jednu co největší databázi, než začít s oddíly. Jednou z výjimek jsou případy, kdy je provoz dotazů rozdělen velmi nerovnoměrně mezi dílčí podgrafy; v těchto situacích může výrazně pomoci, pokud jsou dílčí podgrafy s velkým objemem uchovávány v jednom shardu a vše ostatní v jiném, protože toto nastavení zvyšuje pravděpodobnost, že data pro dílčí podgrafu s velkým objemem zůstanou v interní cache db a nebudou nahrazena daty, která nejsou tolik potřebná z dílčích podgrafů s malým objemem. +> It is generally better to make a single database as big as possible before starting with shards.
One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Pokud jde o konfiguraci připojení, začněte s max_connections v souboru postgresql.conf nastaveným na 400 (nebo možná dokonce 200) a podívejte se na metriky store_connection_wait_time_ms a store_connection_checkout_count Prometheus. Výrazné čekací doby (cokoli nad 5 ms) jsou známkou toho, že je k dispozici příliš málo připojení; vysoké čekací doby tam budou také způsobeny tím, že databáze je velmi vytížená (například vysoké zatížení procesoru). Pokud se však databáze jinak jeví jako stabilní, vysoké čekací doby naznačují potřebu zvýšit počet připojení. V konfiguraci je horní hranicí, kolik připojení může každá instance graf uzlu používat, a graf uzel nebude udržovat otevřená připojení, pokud je nepotřebuje. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Podpora více sítí -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Více sítí - Více poskytovatelů na síť (to může umožnit rozdělení zátěže mezi poskytovatele a také konfiguraci plných uzlů i archivních uzlů, přičemž Graph Node může preferovat levnější poskytovatele, pokud to daná pracovní zátěž umožňuje). 
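Putting the sharding and multi-network pieces together, a hedged `config.toml` sketch (database URLs, shard names, and provider endpoints below are placeholders, not recommendations) might look like:

```toml
# Two shards: high-volume Subgraphs go to "vip", everything else to "primary".
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db:5432/graph"
pool_size = 10

[store.vip]
connection = "postgresql://graph:password@vip-db:5432/graph"
pool_size = 50

# One chain, stored in the "vip" shard, with two providers so that load
# can be split and archive requests routed to the archive node.
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "vip"
provider = [
  { label = "archive", url = "http://eth-archive:8545", features = ["archive", "traces"] },
  { label = "full", url = "http://eth-full:8545", features = [] },
]
```

Check each option name against the full `config.toml` documentation referenced above before deploying such a file.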
@@ -225,11 +225,11 @@ Uživatelé, kteří provozují škálované nastavení indexování s pokročil ### Správa uzlu graf -Vzhledem k běžícímu uzlu Graf (nebo uzlům Graf Uzel!) je pak úkolem spravovat rozmístěné podgrafy v těchto uzlech. Graf Uzel nabízí řadu nástrojů, které pomáhají se správou podgrafů. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Protokolování -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Práce s podgrafy +### Working with Subgraphs #### Stav indexování API -API pro stav indexování, které je ve výchozím nastavení dostupné na portu 8030/graphql, nabízí řadu metod pro kontrolu stavu indexování pro různé podgrafy, kontrolu důkazů indexování, kontrolu vlastností podgrafů a další.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ Proces indexování má tři samostatné části: - Zpracování událostí v pořadí pomocí příslušných obslužných (to může zahrnovat volání řetězce pro zjištění stavu a načtení dat z úložiště) - Zápis výsledných dat do úložiště -Tyto fáze jsou spojeny do potrubí (tj. mohou být prováděny paralelně), ale jsou na sobě závislé. Pokud se podgrafy indexují pomalu, bude příčina záviset na konkrétním podgrafu. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Běžné příčiny pomalého indexování: @@ -276,24 +276,24 @@ Běžné příčiny pomalého indexování: - Samotný poskytovatel se dostává za hlavu řetězu - Pomalé načítání nových účtenek od poskytovatele v hlavě řetězce -Metriky indexování podgrafů mohou pomoci diagnostikovat hlavní příčinu pomalého indexování. V některých případech spočívá problém v samotném podgrafu, ale v jiných případech mohou zlepšení síťových poskytovatelů, snížení konfliktů v databázi a další zlepšení konfigurace výrazně zlepšit výkon indexování. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. 
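Since the indexing status API is a regular GraphQL endpoint, status checks can be written as ordinary queries. A sketch against `http://localhost:8030/graphql` (field names should be verified against the linked schema) could be:

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError {
      message
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `latestBlock` to `chainHeadBlock` gives a quick view of how far behind the chain head each Subgraph is.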
-#### Neúspěšné podgrafy +#### Failed Subgraphs -Během indexování mohou dílčí graf selhat, pokud narazí na neočekávaná data, pokud některá komponenta nefunguje podle očekávání nebo pokud je chyba ve zpracovatelích událostí nebo v konfiguraci. Existují dva obecné typy selhání: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministická selhání: jedná se o selhání, která nebudou vyřešena opakovanými pokusy - Nedeterministická selhání: mohou být způsobena problémy se zprostředkovatelem nebo neočekávanou chybou grafického uzlu. Pokud dojde k nedeterministickému selhání, uzel Graf zopakuje selhání obsluhy a postupně se vrátí zpět. -V některých případech může být chyba řešitelná indexátorem (například pokud je chyba důsledkem toho, že není k dispozici správný typ zprostředkovatele, přidání požadovaného zprostředkovatele umožní pokračovat v indexování). V jiných případech je však nutná změna v kódu podgrafu. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in other cases, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Bloková a volací mezipaměť -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Pokud existuje podezření na nekonzistenci blokové mezipaměti, například chybějící událost tx receipt: @@ -304,7 +304,7 @@ Pokud existuje podezření na nekonzistenci blokové mezipaměti, například ch #### Problémy a chyby při dotazování -Jakmile je podgraf indexován, lze očekávat, že indexery budou obsluhovat dotazy prostřednictvím koncového bodu vyhrazeného pro dotazy podgrafu. 
Pokud indexátor doufá, že bude obsluhovat značný objem dotazů, doporučuje se použít vyhrazený uzel pro dotazy a v případě velmi vysokého objemu dotazů mohou indexátory chtít nakonfigurovat oddíly replik tak, aby dotazy neovlivňovaly proces indexování. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. I s vyhrazeným dotazovacím uzlem a replikami však může provádění některých dotazů trvat dlouho a v některých případech může zvýšit využití paměti a negativně ovlivnit dobu dotazování ostatních uživatelů. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analýza dotazů -Problematické dotazy se nejčastěji objevují jedním ze dvou způsobů. V některých případech uživatelé sami hlásí, že daný dotaz je pomalý. V takovém případě je úkolem diagnostikovat příčinu pomalosti - zda se jedná o obecný problém, nebo o specifický problém daného podgrafu či dotazu. A pak ho samozřejmě vyřešit, pokud je to možné. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. V jiných případech může být spouštěcím faktorem vysoké využití paměti v uzlu dotazu a v takovém případě je třeba nejprve identifikovat dotaz, který problém způsobuje. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Odstranění podgrafů +#### Removing Subgraphs > Jedná se o novou funkci, která bude k dispozici v uzlu Graf 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
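The `graphman` workflows discussed above can be sketched as shell invocations. The table name `sgd21902.pair` and the deployment identifier below are hypothetical placeholders; confirm the exact syntax against the linked `graphman` documentation before use:

```shell
# Turn the account-like optimization on, then off, for a hypothetical table
graphman --config config.toml stats account-like sgd21902.pair
graphman --config config.toml stats account-like --clear sgd21902.pair

# Remove a deployment and all of its indexed data
# (accepts a Subgraph name, an IPFS hash Qm.., or a namespace sgdNNN)
graphman --config config.toml drop sgd21902
```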
diff --git a/website/src/pages/cs/indexing/tooling/graphcast.mdx b/website/src/pages/cs/indexing/tooling/graphcast.mdx index aec7d84070c3..5aa86adcc8da 100644 --- a/website/src/pages/cs/indexing/tooling/graphcast.mdx +++ b/website/src/pages/cs/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ V současné době jsou náklady na vysílání informací ostatním účastník Graphcast SDK (Vývoj softwaru Kit) umožňuje vývojářům vytvářet rádia, což jsou aplikace napájené drby, které mohou indexery spouštět k danému účelu. Máme také v úmyslu vytvořit několik Radios (nebo poskytnout podporu jiným vývojářům/týmům, které chtějí Radios vytvořit) pro následující případy použití: -- Křížová kontrola integrity dat subgrafu v reálném čase ([Podgraf Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Provádění aukcí a koordinace pro warp synchronizaci podgrafů, substreamů a dat Firehose z jiných Indexerů. -- Vlastní hlášení o analýze aktivních dotazů, včetně objemů požadavků na dílčí grafy, objemů poplatků atd. -- Vlastní hlášení o analýze indexování, včetně času indexování podgrafů, nákladů na plyn obsluhy, zjištěných chyb indexování atd. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Vlastní hlášení informací o zásobníku včetně verze grafového uzlu, verze Postgres, verze klienta Ethereum atd. 
### Dozvědět se více diff --git a/website/src/pages/cs/resources/benefits.mdx b/website/src/pages/cs/resources/benefits.mdx index e18158242265..c0c0031d3f7b 100644 --- a/website/src/pages/cs/resources/benefits.mdx +++ b/website/src/pages/cs/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $350 měsíčně | $0 | -| Náklady na dotazování | $0+ | $0 per month | -| Inženýrský čas | $400 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | 100,000 (Free Plan) | -| Náklady na jeden dotaz | $0 | $0 | -| Infrastructure | Centralizovaný | Decentralizované | -| Geografická redundancy | $750+ Usd za další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $750+ | $0 | +| Srovnání nákladů | Vlastní hostitel | The Graph Network | +| :-------------------------: | :-------------------------------------: | :-----------------------------------------------------------: | +| Měsíční náklady na server\* | $350 měsíčně | $0 | +| Náklady na dotazování | $0+ | $0 per month | +| Inženýrský čas | $400 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | 100,000 (Free Plan) | +| Náklady na jeden dotaz | $0 | $0 | +| Infrastructure | Centralizovaný | Decentralizované | +| Geografická redundancy | $750+ Usd za další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $350 měsíčně | $0 | -| Náklady na dotazování | $500 měsíčně | $120 per month | -| Inženýrský čas | $800 
měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | ~3,000,000 | -| Náklady na jeden dotaz | $0 | $0.00004 | -| Infrastructure | Centralizovaný | Decentralizované | -| Výdaje inženýrskou | $200 za hodinu | Zahrnuto | -| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $1,650+ | $120 | +| Srovnání nákladů | Vlastní hostitel | The Graph Network | +| :-------------------------: | :----------------------------------------: | :-----------------------------------------------------------: | +| Měsíční náklady na server\* | $350 měsíčně | $0 | +| Náklady na dotazování | $500 měsíčně | $120 per month | +| Inženýrský čas | $800 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | ~3,000,000 | +| Náklady na jeden dotaz | $0 | $0.00004 | +| Infrastructure | Centralizovaný | Decentralizované | +| Výdaje inženýrskou | $200 za hodinu | Zahrnuto | +| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $1100 měsíčně za uzel | $0 | -| Náklady na dotazování | $4000 | $1,200 per month | -| Počet potřebných uzlů | 10 | Nepoužije se | -| Inženýrský čas | 6$, 000 nebo více měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | ~30,000,000 | -| Náklady na jeden dotaz | $0 | $0.00004 | -| Infrastructure | Centralizovaný | Decentralizované | -| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $11,000+ | $1,200 | +| Srovnání 
nákladů | Vlastní hostitel | The Graph Network | +| :-------------------------: | :-----------------------------------------: | :-----------------------------------------------------------: | +| Měsíční náklady na server\* | $1100 měsíčně za uzel | $0 | +| Náklady na dotazování | $4000 | $1,200 per month | +| Počet potřebných uzlů | 10 | Nepoužije se | +| Inženýrský čas | 6$, 000 nebo více měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | ~30,000,000 | +| Náklady na jeden dotaz | $0 | $0.00004 | +| Infrastructure | Centralizovaný | Decentralizované | +| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $11,000+ | $1,200 | \*včetně nákladů na zálohování: $50-$100 měsíčně @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Kurátorování signálu na podgrafu je volitelný jednorázový čistý nulový náklad (např. 
na podgrafu lze kurátorovat signál v hodnotě $1k a později jej stáhnout - s potenciálem získat v tomto procesu výnosy). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/cs/resources/glossary.mdx b/website/src/pages/cs/resources/glossary.mdx index 70161f581585..49fd1f60c539 100644 --- a/website/src/pages/cs/resources/glossary.mdx +++ b/website/src/pages/cs/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glosář - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. 
The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. 
When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. 
This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. 
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glosář - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. 
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. 
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx index 756873dd8fbb..8af6d2817679 100644 --- a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Průvodce migrací AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally, we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -To umožní vývojářům podgrafů používat novější funkce jazyka AS a standardní knihovny. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Funkce @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Jak provést upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Pokud si nejste jisti, kterou verzi zvolit, doporučujeme vždy použít bezpečnou verzi. Pokud hodnota neexistuje, možná budete chtít provést pouze časný příkaz if s návratem v obsluze podgrafu. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.
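To make the safe-version advice above concrete, here is a minimal plain-TypeScript sketch of the early-return pattern. The `Entity`, `load`, and `handleTransfer` names are illustrative stand-ins, not real `graph-ts` APIs:

```typescript
// Hypothetical sketch of the "safe version with an early return" pattern.
// In a real Subgraph handler you would call SomeEntity.load(id), which is
// typed as `Entity | null` under the newer AssemblyScript version.
class Entity {
  constructor(public id: string, public count: number) {}
}

const store = new Map<string, Entity>();

function load(id: string): Entity | null {
  return store.get(id) ?? null; // safe version: may return null
}

function handleTransfer(id: string): number {
  const entity = load(id);
  if (entity == null) {
    // Early return instead of `load(id)!`, which would break at runtime
    // when the value does not exist.
    return 0;
  }
  entity.count += 1;
  return entity.count;
}

store.set("a", new Entity("a", 1));
handleTransfer("a"); // increments the existing entity's count
handleTransfer("missing"); // returns 0 instead of crashing
```

The unsafe `!` version saves a few lines but moves the failure from compile time to runtime, which is why the guide recommends the early return.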
### Proměnlivé stínování @@ -132,7 +132,7 @@ Pokud jste použili stínování proměnných, musíte duplicitní proměnné p ### Nulová srovnání -Při aktualizaci podgrafu může někdy dojít k těmto chybám: +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. ``` @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Otevřeli jsme kvůli tomu problém v kompilátoru jazyka AssemblyScript, ale zatím platí, že pokud provádíte tyto operace v mapování podgrafů, měli byste je změnit tak, aby se před nimi provedla kontrola null. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Zkompiluje se, ale za běhu se přeruší, což se stane, protože hodnota nebyla inicializována, takže se ujistěte, že váš podgraf inicializoval své hodnoty, například takto: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx index 7f273724aff4..4051faab8eef 100644 --- a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Průvodce migrací na GraphQL Validace +title: GraphQL Validations Migration Guide --- Brzy
bude `graph-node` podporovat 100% pokrytí [GraphQL Validations specifikace](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Chcete-li být v souladu s těmito validacemi, postupujte podle průvodce migrac Pomocí migračního nástroje CLI můžete najít případné problémy v operacích GraphQL a opravit je. Případně můžete aktualizovat koncový bod svého klienta GraphQL tak, aby používal koncový bod `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testování dotazů proti tomuto koncovému bodu vám pomůže najít problémy ve vašich dotazech. -> Není nutné migrovat všechny podgrafy, pokud používáte [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) nebo [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ty již zajistí, že vaše dotazy jsou platné. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migrační nástroj CLI diff --git a/website/src/pages/cs/resources/roles/curating.mdx b/website/src/pages/cs/resources/roles/curating.mdx index c8b9caf18e2e..f06866a7c0ee 100644 --- a/website/src/pages/cs/resources/roles/curating.mdx +++ b/website/src/pages/cs/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Kurátorování -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. 
They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. 
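As a rough sketch of how curation shares translate into query-fee income (using the 10% Curator share of query fees stated in the glossary), here is a minimal pro-rata model. The function names, and the assumption that payouts are strictly proportional to shares, are illustrative only — this is not protocol code:

```typescript
// Illustrative sketch: split a Subgraph's query fees per the glossary
// (10% to Curators, pro rata to their curation shares).
const CURATOR_FEE_SHARE = 0.1; // 10% of query fees, subject to governance

function curatorFeePool(totalQueryFeesGRT: number): number {
  // Portion of all query fees on a Subgraph reserved for its Curators.
  return totalQueryFeesGRT * CURATOR_FEE_SHARE;
}

function curatorPayout(
  totalQueryFeesGRT: number,
  curatorShares: number,
  totalShares: number
): number {
  // Each Curator receives the pool pro rata to their curation shares.
  return curatorFeePool(totalQueryFeesGRT) * (curatorShares / totalShares);
}

// A Curator holding 25% of the shares on a Subgraph that earned
// 1,000 GRT in query fees receives a quarter of the 100 GRT pool.
const payout = curatorPayout(1000, 250, 1000);
```

This also illustrates why Curators who signal on low-quality Subgraphs earn less: a small `totalQueryFeesGRT` shrinks the whole pool regardless of how many shares they hold.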
-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Jak signalizovat -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Kurátor si může zvolit, zda bude signalizovat na konkrétní verzi podgrafu, nebo zda se jeho signál automaticky přenese na nejnovější produkční sestavení daného podgrafu. 
Obě strategie jsou platné a mají své výhody i nevýhody. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. 
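The curation taxes described above (a 1% standard tax on an initial signal, plus a 0.5% tax each time shares auto-migrate to a new version) reduce to simple arithmetic. This is a hedged sketch with hypothetical function names, not protocol code:

```typescript
// Illustrative arithmetic for the curation taxes described in the text:
// 1% on an initial signal (burned) and 0.5% per auto-migration.
const INITIAL_CURATION_TAX = 0.01; // 1%
const MIGRATION_TAX = 0.005; // 0.5%

function afterInitialSignal(grt: number): number {
  // GRT of signal remaining after the 1% curation tax is burned.
  return grt * (1 - INITIAL_CURATION_TAX);
}

function afterAutoMigration(signaledGrt: number): number {
  // Signal remaining after the 0.5% tax charged on auto-migration.
  return signaledGrt * (1 - MIGRATION_TAX);
}

// Signaling 1,000 GRT leaves 990 GRT of signal; one auto-migration
// later, 990 * 0.995 = 985.05 GRT of signal remains.
const signaled = afterInitialSignal(1000);
const migrated = afterAutoMigration(signaled);
```

This is also why frequent version publishing is discouraged: each new version applies the 0.5% migration tax again to every auto-migrated share.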
## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Rizika 1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi. 
- - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Nejčastější dotazy ke kurátorství ### 1. Kolik % z poplatků za dotazy kurátoři vydělávají? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 
10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Jak se rozhodnu, které podgrafy jsou kvalitní a na kterých je třeba signalizovat? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. 
Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Jaké jsou náklady na aktualizaci podgrafu? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Jak často mohu svůj podgraf aktualizovat? +### 4. How often can I update my Subgraph? -Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobnosti naleznete v otázce výše. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Mohu prodat své kurátorské podíly? diff --git a/website/src/pages/cs/resources/roles/delegating/undelegating.mdx b/website/src/pages/cs/resources/roles/delegating/undelegating.mdx index 071253821e63..bc98d6aeff17 100644 --- a/website/src/pages/cs/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/cs/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. 
Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Další zdroje diff --git a/website/src/pages/cs/resources/subgraph-studio-faq.mdx b/website/src/pages/cs/resources/subgraph-studio-faq.mdx index a67af0f6505e..1f036fb46484 100644 --- a/website/src/pages/cs/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/cs/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: FAQs Podgraf Studio ## 1. Co je Podgraf Studio? 
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Jak vytvořím klíč API? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th Po vytvoření klíče API můžete v části Zabezpečení definovat domény, které se mohou dotazovat na konkrétní klíč API. -## 5. Mohu svůj podgraf převést na jiného vlastníka? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Všimněte si, že po přenesení podgrafu jej již nebudete moci ve Studio zobrazit ani upravovat. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Jak najdu adresy URL dotazů pro podgrafy, pokud nejsem Vývojář podgrafu, který chci použít? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. 
When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Nezapomeňte, že si můžete vytvořit klíč API a dotazovat se na libovolný podgraf zveřejněný v síti, i když si podgraf vytvoříte sami. Tyto dotazy prostřednictvím nového klíče API jsou placené dotazy jako jakékoli jiné v síti. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/cs/resources/tokenomics.mdx b/website/src/pages/cs/resources/tokenomics.mdx index 92b1514574b4..66eefd5b8b1a 100644 --- a/website/src/pages/cs/resources/tokenomics.mdx +++ b/website/src/pages/cs/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Přehled -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Kurátoři - nalezení nejlepších podgrafů pro indexátory +2. 
Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexery - páteř blockchainových dat @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. 
While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Vytvoření podgrafu +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Dotazování na existující podgraf +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/cs/sps/introduction.mdx b/website/src/pages/cs/sps/introduction.mdx index f0180d6a569b..4938d23102e4 100644 --- a/website/src/pages/cs/sps/introduction.mdx +++ b/website/src/pages/cs/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Úvod --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Přehled -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. 
**Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Další zdroje @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/cs/sps/sps-faq.mdx b/website/src/pages/cs/sps/sps-faq.mdx index 657b027cf5e9..25e77dc3c7f1 100644 --- a/website/src/pages/cs/sps/sps-faq.mdx +++ b/website/src/pages/cs/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). 
Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Co jsou substreamu napájen podgrafy? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Jak se liší substream, které jsou napájeny podgrafy, od podgrafů? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. 
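The sequential handler model just described, where events are applied strictly in the order they happen onchain, can be sketched in plain TypeScript. This is an illustrative model with made-up event fields, not Graph Node's actual implementation:

```typescript
// Illustrative model of sequential event processing (not Graph Node internals):
// handlers run one at a time, in block order, then log-index order within a block.
type OnchainEvent = { block: number; logIndex: number; name: string }

function processSequentially(events: OnchainEvent[], handler: (e: OnchainEvent) => void): void {
  const ordered = [...events].sort((a, b) => a.block - b.block || a.logIndex - b.logIndex)
  for (const e of ordered) handler(e) // strictly one after another
}

// Out-of-order input is still handled in onchain order
const seen: string[] = []
processSequentially(
  [
    { block: 2, logIndex: 0, name: 'Transfer' },
    { block: 1, logIndex: 1, name: 'Approval' },
    { block: 1, logIndex: 0, name: 'Transfer' },
  ],
  (e) => seen.push(`${e.block}:${e.logIndex}`),
)
// seen is now ['1:0', '1:1', '2:0']
```

Deterministic one-at-a-time ordering is what makes sequential handlers simple to reason about, at the cost of throughput.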
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package that is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times. -## Jaké jsou výhody používání substreamu, které jsou založeny na podgraf? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## Jaké jsou výhody Substreams? 
@@ -35,7 +35,7 @@ Používání ubstreams má mnoho výhod, mimo jiné: - Vysoce výkonné indexování: Řádově rychlejší indexování prostřednictvím rozsáhlých klastrů paralelních operací (viz BigQuery). -- Umyvadlo kdekoli: Data můžete ukládat kamkoli chcete: Vložte data do PostgreSQL, MongoDB, Kafka, podgrafy, ploché soubory, tabulky Google. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programovatelné: Pomocí kódu můžete přizpůsobit extrakci, provádět agregace v čase transformace a modelovat výstup pro více zdrojů. @@ -63,17 +63,17 @@ Používání Firehose přináší mnoho výhod, včetně: - Využívá ploché soubory: Blockchain data jsou extrahována do plochých souborů, což je nejlevnější a nejoptimálnější dostupný výpočetní zdroj. -## Kde mohou vývojáři získat více informací o substreamu, které jsou založeny na podgraf a substreamu? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## Jaká je role modulů Rust v Substreamu? -Moduly Rust jsou ekvivalentem mapovačů AssemblyScript v podgraf. Jsou kompilovány do WASM podobným způsobem, ale programovací model umožňuje paralelní provádění. Definují druh transformací a agregací, které chcete aplikovat na surová data blockchainu. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. 
They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Při použití substreamů probíhá kompozice na transformační vrstvě, což umožňuje opakované použití modulů uložených v mezipaměti. -Jako příklad může Alice vytvořit cenový modul DEX, Bob jej může použít k vytvoření agregátoru objemu pro některé tokeny, které ho zajímají, a Lisa může zkombinovat čtyři jednotlivé cenové moduly DEX a vytvořit cenové orákulum. Jediný požadavek Substreams zabalí všechny moduly těchto jednotlivců, propojí je dohromady a nabídne mnohem sofistikovanější tok dat. Tento proud pak může být použit k naplnění podgrafu a může být dotazován spotřebiteli. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens he is interested in, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together, offering a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. ## Jak můžete vytvořit a nasadit Substreams využívající podgraf? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Kde najdu příklady podgrafů Substreams a Substreams-powered? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Příklady podgrafů Substreams a Substreams-powered najdete na [tomto repozitáři Github](https://github.com/pinax-network/awesome-substreams). 
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Co znamenají substreams a podgrafy napájené substreams pro síť grafů? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Integrace slibuje mnoho výhod, včetně extrémně výkonného indexování a větší složitelnosti díky využití komunitních modulů a stavění na nich. diff --git a/website/src/pages/cs/sps/triggers.mdx b/website/src/pages/cs/sps/triggers.mdx index 06a8845e4daf..b0c4bea23f3d 100644 --- a/website/src/pages/cs/sps/triggers.mdx +++ b/website/src/pages/cs/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Přehled -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. 
This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Další zdroje diff --git a/website/src/pages/cs/sps/tutorial.mdx b/website/src/pages/cs/sps/tutorial.mdx index 3f98c57508bd..c1850bab04fa 100644 --- a/website/src/pages/cs/sps/tutorial.mdx +++ b/website/src/pages/cs/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
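The three `mappings.ts` steps listed above (decode the bytes, loop over the transactions, create an entity per item) can be mirrored in plain TypeScript. This is an illustrative analogue only: it uses JSON in place of the generated Protobuf types, and plain objects in place of graph-ts entities; a real handler decodes with `Protobuf.decode` and calls `entity.save()`.

```typescript
// Plain-TypeScript analogue of the trigger handler steps above.
// Assumption: JSON stands in for the Protobuf-encoded bytes a real Substreams
// trigger delivers, and plain objects stand in for graph-ts entities.
interface Transfer { id: string; amount: string }
interface Transactions { transfers: Transfer[] }

// ASCII-only byte helpers so the sketch needs no extra libraries
function bytesToString(bytes: Uint8Array): string {
  let s = ''
  for (let i = 0; i < bytes.length; i++) s += String.fromCharCode(bytes[i])
  return s
}
function stringToBytes(s: string): Uint8Array {
  const out = new Uint8Array(s.length)
  for (let i = 0; i < s.length; i++) out[i] = s.charCodeAt(i)
  return out
}

function handleTransactions(bytes: Uint8Array): Transfer[] {
  // 1. Decode the raw bytes into a Transactions object
  const decoded: Transactions = JSON.parse(bytesToString(bytes))
  const entities: Transfer[] = []
  // 2. Loop over the transactions
  for (const t of decoded.transfers) {
    // 3. Create a new entity for every transfer
    entities.push({ id: t.id, amount: t.amount })
  }
  return entities
}

const payload = stringToBytes(JSON.stringify({ transfers: [{ id: 'tx-1', amount: '100' }] }))
const created = handleTransactions(payload)
// created holds one entity with id 'tx-1'
```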
## Začněte @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Závěr -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. 
You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/cs/subgraphs/_meta-titles.json b/website/src/pages/cs/subgraphs/_meta-titles.json index 0556abfc236c..c2d850dfc35c 100644 --- a/website/src/pages/cs/subgraphs/_meta-titles.json +++ b/website/src/pages/cs/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "guides": "How-to Guides", + "best-practices": "Osvědčené postupy" } diff --git a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx index 3ce9c29a17a0..2783957614bf 100644 --- a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Doporučený postup pro podgraf 4 - Zlepšení rychlosti indexování vyhnutím se eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` jsou volání, která lze provést z podgrafu do uzlu Ethereum. Tato volání zabírají značnou dobu, než vrátí data, což zpomaluje indexování. Pokud je to možné, navrhněte chytré kontrakty tak, aby emitovaly všechna potřebná data, takže nebudete muset používat `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
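To build intuition for how much `eth_calls` can slow indexing, here is a toy cost model in TypeScript. The latency figures are illustrative assumptions, not measurements; real numbers depend on the connectivity and responsiveness of the Ethereum node being queried.

```typescript
// Toy model of indexing time with and without per-event eth_calls.
// The latency figures below are illustrative assumptions, not measurements.
const HANDLER_MS = 0.5 // assumed cost of processing one event locally
const ETH_CALL_MS = 50 // assumed round trip to the Ethereum node

function indexingTimeMs(events: number, ethCallsPerEvent: number): number {
  return events * (HANDLER_MS + ethCallsPerEvent * ETH_CALL_MS)
}

const withoutCalls = indexingTimeMs(100000, 0) // 50,000 ms of pure handler work
const withCalls = indexingTimeMs(100000, 1) // 5,050,000 ms once every event makes one call
const slowdown = withCalls / withoutCalls // 101x slower in this model
```

Even under these modest assumptions, a single `eth_call` per event dominates total indexing time, which is why emitting the needed data in events is preferred.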
## Proč je dobré se vyhnout `eth_calls` -Podgraf jsou optimalizovány pro indexování dat událostí emitovaných z chytré smlouvy. Podgraf může také indexovat data pocházející z `eth_call`, což však může indexování podgrafu výrazně zpomalit, protože `eth_calls` vyžadují externí volání chytrých smluv. Odezva těchto volání nezávisí na podgrafu, ale na konektivitě a odezvě dotazovaného uzlu Ethereum. Minimalizací nebo eliminací eth_calls v našich podgrafech můžeme výrazně zvýšit rychlost indexování. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Jak vypadá eth_call? -`eth_calls` jsou často nutné, pokud data potřebná pro podgraf nejsou dostupná prostřednictvím emitovaných událostí. Uvažujme například scénář, kdy podgraf potřebuje zjistit, zda jsou tokeny ERC20 součástí určitého poolu, ale smlouva emituje pouze základní událost `Transfer` a neemituje událost, která by obsahovala data, která potřebujeme: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -To je funkční, ale není to ideální, protože to zpomaluje indexování našeho podgrafu. 
+This is functional, however is not ideal as it slows down our Subgraph’s indexing. ## Jak odstranit `eth_calls` @@ -54,7 +54,7 @@ V ideálním případě by měl být inteligentní kontrakt aktualizován tak, a event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Díky této aktualizaci může podgraf přímo indexovat požadovaná data bez externích volání: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ calls: Samotná obslužná rutina přistupuje k výsledku tohoto `eth_call` přesně tak, jak je uvedeno v předchozí části, a to navázáním na smlouvu a provedením volání. graph-node cachuje výsledky deklarovaných `eth_call` v paměti a volání obslužné rutiny získá výsledek z této paměťové cache místo skutečného volání RPC. -Poznámka: Deklarované eth_calls lze provádět pouze v podgraf s verzí specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Závěr -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx index f6ec5a660bf2..fc9dce04c8c0 100644 --- a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Podgraf Doporučený postup 2 - Zlepšení indexování a rychlosti dotazů pomocí @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. 
Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Jak používat směrnici `@derivedFrom` @@ -15,7 +15,7 @@ Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto: comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Příklad případu použití pro `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Pouhým přidáním direktivy `@derivedFrom` bude toto schéma ukládat "Komentáře“ pouze na straně "Komentáře“ vztahu a nikoli na straně "Příspěvek“ vztahu. Pole se ukládají napříč jednotlivými řádky, což umožňuje jejich výrazné rozšíření. To může vést k obzvláště velkým velikostem, pokud je jejich růst neomezený. -Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Můžeme se zeptat na `Post` a zobrazit všechny jeho komentáře. 
2. Můžeme provést zpětné vyhledávání a dotazovat se na jakýkoli `Komentář` a zjistit, ze kterého příspěvku pochází. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Závěr -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). diff --git a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx index 7a2dbdda86f6..541cf76d0f7a 100644 --- a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. 
### Přehled -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. 
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
- **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. **Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. 
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Závěr -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Další zdroje - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. 
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 5b058ee9d7cf..e4e191353476 100644 --- a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Osvědčený postup 3 - Zlepšení indexování a výkonu dotazů pomocí neměnných entit a bytů jako ID -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ I když jsou možné i jiné typy ID, například String a Int8, doporučuje se ### Důvody, proč nepoužívat bajty jako IDs 1. Pokud musí být IDs entit čitelné pro člověka, například automaticky doplňované číselné IDs nebo čitelné řetězce, neměly by být použity bajty pro IDs. -2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Zlepšení výkonu indexování a dotazování není žádoucí. ### Konkatenace s byty jako IDs -V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování. 
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Místo toho bychom měli použít metodu `concatI32()` pro spojování vlastností událostí. Výsledkem této strategie je ID `Bytes`, které je mnohem výkonnější. @@ -172,7 +172,7 @@ Odpověď na dotaz: ## Závěr -Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Více informací o používání nezměnitelných entit a bytů jako ID najdete v tomto příspěvku na blogu Davida Lutterkorta, softwarového inženýra ve společnosti Edge & Node: [Dvě jednoduchá vylepšení výkonu podgrafu](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx index e6b23f71c409..6fd068f449d6 100644 --- a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Doporučený postup 1 - Zlepšení rychlosti dotazu pomocí ořezávání podgrafů -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. 
Použití `indexerHints` je snadný způsob, jak podgraf ořezat. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Jak prořezat podgraf pomocí `indexerHints` @@ -13,14 +13,14 @@ Přidejte do manifestu sekci `indexerHints`. `indexerHints` má tři možnosti `prune`: -- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Nastaví vlastní omezení počtu historických bloků, které se mají zachovat. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Závěr -Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. 
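Putting the `prune` options above together, a minimal manifest fragment with pruning enabled might look like the following sketch (the schema path and the retention value of 100000 blocks are illustrative placeholders, not values taken from this guide):

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
indexerHints:
  # retain roughly the last 100,000 blocks of entity history;
  # alternatives: `auto` (minimum history, recommended) or `never` (full history)
  prune: 100000
```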
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx index f35ab0913563..dae73ede9ff3 100644 --- a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Přehled @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: @@ -51,7 +55,7 @@ Příklad: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Příklad: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. 
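With the `Stats` aggregation above, hourly buckets can then be queried directly. The sketch below assumes the query syntax described in the graph-node aggregations documentation, where an `interval` argument selects which declared bucket size to read:

```graphql
{
  # `interval` picks one of the bucket sizes declared in the aggregation ("hour" or "day")
  stats(interval: "hour") {
    id
    timestamp
    sum
  }
}
```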
### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Závěr -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/cs/subgraphs/billing.mdx b/website/src/pages/cs/subgraphs/billing.mdx index 4118bf1d451a..b78c375c4aee 100644 --- a/website/src/pages/cs/subgraphs/billing.mdx +++ b/website/src/pages/cs/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Fakturace ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. 
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/cs/subgraphs/cookbook/arweave.mdx b/website/src/pages/cs/subgraphs/cookbook/arweave.mdx index d59897ad4e03..dff8facf77d4 100644 --- a/website/src/pages/cs/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Vytváření podgrafů na Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! V této příručce se dozvíte, jak vytvořit a nasadit subgrafy pro indexování blockchainu Arweave. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are Abyste mohli sestavit a nasadit Arweave Subgraphs, potřebujete dva balíčky: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. 
[Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.

## Komponenty podgrafu

-Podgraf má tři Komponenty:
+There are three components of a Subgraph:

### 1. Manifest - `subgraph.yaml`

@@ -40,25 +40,25 @@ Definuje zdroje dat, které jsou předmětem zájmu, a způsob jejich zpracován

Zde definujete, na která data se chcete po indexování subgrafu pomocí jazyka GraphQL dotazovat. Je to vlastně podobné modelu pro API, kde model definuje strukturu těla požadavku.

-The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).

### 3. AssemblyScript Mappings - `mapping.ts`

Jedná se o logiku, která určuje, jak mají být data načtena a uložena, když někdo komunikuje se zdroji dat, kterým nasloucháte. Data se přeloží a uloží na základě schématu, které jste uvedli.
-Při vývoji podgrafů existují dva klíčové příkazy: +During Subgraph development there are two key commands: ``` -$ graph codegen # generuje typy ze souboru se schématem identifikovaným v manifestu -$ graph build # vygeneruje webové sestavení ze souborů AssemblyScript a připraví všechny dílčí soubory do složky /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Definice podgrafu Manifest -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. 
In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Zdroje dat Arweave obsahují nepovinné pole source.owner, což je veřejný klíč peněženky Arweave @@ -99,7 +99,7 @@ Datové zdroje Arweave podporují dva typy zpracovatelů: ## Definice schématu -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mapování @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Nasazení podgrafu Arweave v Podgraf Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Dotazování podgrafu Arweave -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
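For instance, given the `Block` entity declared in the example manifest above, a query against the resulting endpoint could look like this sketch (the `height` and `timestamp` fields are hypothetical — the actual fields depend on your schema definition):

```graphql
{
  blocks(first: 5, orderBy: height, orderDirection: desc) {
    id
    height
    timestamp
  }
}
```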
## Příklady podgrafů -Zde je příklad podgrafu pro referenci: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Může podgraf indexovat Arweave a další řetězce? +### Can a Subgraph index Arweave and other chains? -Ne, podgraf může podporovat zdroje dat pouze z jednoho řetězce/sítě. +No, a Subgraph can only support data sources from one chain/network. ### Mohu indexovat uložené soubory v Arweave? V současné době The Graph indexuje pouze Arweave jako blockchain (jeho bloky a transakce). -### Mohu identifikovat svazky Bundlr ve svém podgrafu? +### Can I identify Bundlr bundles in my Subgraph? Toto není aktuálně podporováno. @@ -188,7 +188,7 @@ Source.owner může být veřejný klíč uživatele nebo adresa účtu. ### Jaký je aktuální formát šifrování? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/cs/subgraphs/cookbook/enums.mdx b/website/src/pages/cs/subgraphs/cookbook/enums.mdx index 71f3f784a0eb..7cc0e6c0ed78 100644 --- a/website/src/pages/cs/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. @@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. 
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/cs/subgraphs/cookbook/grafting.mdx b/website/src/pages/cs/subgraphs/cookbook/grafting.mdx index ca0ab0367451..a7bad43c9c1f 100644 --- a/website/src/pages/cs/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Nahrazení smlouvy a zachování její historie pomocí roubování --- -V této příručce se dozvíte, jak vytvářet a nasazovat nové podgrafy roubováním stávajících podgrafů. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## Co je to roubování? -Při roubování se znovu použijí data z existujícího podgrafu a začne se indexovat v pozdějším bloku. To je užitečné během vývoje, abyste se rychle dostali přes jednoduché chyby v mapování nebo abyste dočasně znovu zprovoznili existující podgraf po jeho selhání. Také ji lze použít při přidávání funkce do podgrafu, které trvá dlouho, než se indexuje od začátku. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Přidává nebo odebírá typy entit - Odstraňuje atributy z typů entit @@ -22,38 +22,38 @@ Další informace naleznete na: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Důležité upozornění k roubování při aktualizaci na síť -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Proč je to důležité? -Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Podgraf není možné naroubovat ze Sítě grafů zpět do Podgraf Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Osvědčené postupy -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. 
-**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. Dodržováním těchto pokynů minimalizujete rizika a zajistíte hladší průběh migrace. ## Vytvoření existujícího podgrafu -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Definice podgrafu Manifest -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Definice manifestu roubování -Roubování vyžaduje přidání dvou nových položek do původního manifestu podgrafu: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Nasazení základního podgrafu -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Po dokončení ověřte, zda se podgraf správně indexuje. 
Pokud spustíte následující příkaz v The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ Vrátí něco takového: } ``` -Jakmile ověříte, že se podgraf správně indexuje, můžete jej rychle aktualizovat pomocí roubování. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Nasazení podgrafu roubování Náhradní podgraf.yaml bude mít novou adresu smlouvy. K tomu může dojít při aktualizaci dapp, novém nasazení kontraktu atd. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. 
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ Měla by vrátit následující: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). 
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Gratulujeme! Úspěšně jste naroubovali podgraf na jiný podgraf. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Další zdroje diff --git a/website/src/pages/cs/subgraphs/cookbook/near.mdx b/website/src/pages/cs/subgraphs/cookbook/near.mdx index dc65c11da629..275c2aba0fd4 100644 --- a/website/src/pages/cs/subgraphs/cookbook/near.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Vytváření podgrafů v NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## Co je NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## Co jsou podgrafy NEAR? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Obsluhy bloků: jsou spouštěny při každém novém bloku. - Obsluhy příjmu: spouštějí se pokaždé, když je zpráva provedena na zadaném účtu. @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Sestavení podgrafu NEAR -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Vytváření subgrafu NEAR je velmi podobné vytváření subgrafu, který indexuje Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -Definice podgrafů má tři aspekty: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. 
The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -Při vývoji podgrafů existují dva klíčové příkazy: +During Subgraph development there are two key commands: ```bash -$ graph codegen # generuje typy ze souboru se schématem identifikovaným v manifestu -$ graph build # vygeneruje webové sestavení ze souborů AssemblyScript a připraví všechny dílčí soubory do složky /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Definice podgrafu Manifest -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ Zdroje dat NEAR podporují dva typy zpracovatelů: ### Definice schématu -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mapování @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Nasazení podgrafu NEAR -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -Konfigurace uzlů závisí na tom, kde je podgraf nasazen. +The node configuration will depend on where the Subgraph is being deployed. ### Podgraf Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Jakmile je podgraf nasazen, bude indexován pomocí Graph Node. Jeho průběh můžete zkontrolovat dotazem na samotný podgraf: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ Brzy vám poskytneme další informace o provozu výše uvedených komponent. 
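As a sketch of the progress check described above, indexing status can also be queried through the `_meta` field that Graph Node exposes on every Subgraph:

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

`block.number` is the latest block the Subgraph has indexed, which you can compare against the chain head.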
## Dotazování podgrafu NEAR -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Příklady podgrafů -Zde je několik příkladů podgrafů: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Zde je několik příkladů podgrafů: ### Jak funguje beta verze? -Podpora NEAR je ve fázi beta, což znamená, že v API může dojít ke změnám, protože budeme pokračovat ve zdokonalování integrace. Napište nám prosím na adresu near@thegraph.com, abychom vás mohli podpořit při vytváření podgrafů NEAR a informovat vás o nejnovějším vývoji! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Může podgraf indexovat řetězce NEAR i EVM? +### Can a Subgraph index both NEAR and EVM chains? -Ne, podgraf může podporovat zdroje dat pouze z jednoho řetězce/sítě. +No, a Subgraph can only support data sources from one chain/network. -### Mohou podgrafy reagovat na specifičtější spouštěče? +### Can Subgraphs react to more specific triggers? V současné době jsou podporovány pouze spouštěče Blok a Příjem. Zkoumáme spouštěče pro volání funkcí na zadaném účtu. Máme také zájem o podporu spouštěčů událostí, jakmile bude mít NEAR nativní podporu událostí. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Mohou podgrafy NEAR během mapování volat zobrazení na účty NEAR? 
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? To není podporováno. Vyhodnocujeme, zda je tato funkce pro indexování nutná. -### Mohu v podgrafu NEAR používat šablony zdrojů dat? +### Can I use data source templates in my NEAR Subgraph? Tato funkce není v současné době podporována. Vyhodnocujeme, zda je tato funkce pro indexování nutná. -### Podgrafy Ethereum podporují verze "pending" a "current", jak mohu nasadit verzi "pending" podgrafu NEAR? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pro podgrafy NEAR zatím nejsou podporovány čekající funkce. V mezidobí můžete novou verzi nasadit do jiného "pojmenovaného" podgrafu a po jeho synchronizaci s hlavou řetězce ji můžete znovu nasadit do svého hlavního "pojmenovaného" podgrafu, který bude používat stejné ID nasazení, takže hlavní podgraf bude okamžitě synchronizován. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### Moje otázka nebyla zodpovězena, kde mohu získat další pomoc při vytváření podgrafů NEAR? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## Odkazy: diff --git a/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx b/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
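To make the API-key querying flow above concrete, here is a minimal TypeScript sketch that assembles a gateway request against the Polymarket Subgraph endpoint. The URL pattern and Subgraph ID follow the example above; the `_meta` query body is a placeholder, and the actual send is left to `fetch` or axios:

```typescript
// Deployment ID from the Polymarket example above.
const SUBGRAPH_ID = "Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp";

// Build the Graph Network gateway URL for a given API key and Subgraph ID.
function gatewayUrl(apiKey: string, subgraphId: string): string {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

// Build the POST request options for a GraphQL query.
function buildRequest(query: string): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

// Usage (assumes your key is in an environment variable):
// fetch(gatewayUrl(process.env.GRAPH_API_KEY!, SUBGRAPH_ID), buildRequest("{ _meta { block { number } } }"))
```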
diff --git a/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx index de502a0ed526..d311cfa5117e 100644 --- a/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: Jak zabezpečit klíče API pomocí komponent serveru Next.js ## Přehled -K řádnému zabezpečení našeho klíče API před odhalením ve frontendu naší aplikace můžeme použít [komponenty serveru Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Pro další zvýšení zabezpečení našeho klíče API můžeme také [omezit náš klíč API na určité podgrafy nebo domény v Podgraf Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která se dotazuje na podgraf a zároveň skrývá klíč API před frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Upozornění @@ -18,7 +18,7 @@ V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která Ve standardní aplikaci React mohou být klíče API obsažené v kódu frontendu vystaveny na straně klienta, což představuje bezpečnostní riziko. Soubory `.env` se sice běžně používají, ale plně klíče nechrání, protože kód Reactu se spouští na straně klienta a vystavuje klíč API v hlavičkách. Serverové komponenty Next.js tento problém řeší tím, že citlivé operace zpracovávají na straně serveru. 
-### Použití vykreslování na straně klienta k dotazování podgrafu +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..0b4847244981 --- /dev/null +++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Přehled + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Začněte + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. 
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance. + +## Další zdroje + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
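The merge the composed Subgraph performs can be pictured with a small sketch — plain JavaScript, not mapping code. The entity shapes (`timestamp`, `totalGasCost`, `size`) are hypothetical stand-ins for the three source schemas; the real schemas live in the GitHub repo linked above.

```javascript
// Illustrative sketch of Subgraph composition: one consolidated record per
// block, built from the three independent source entities. Field names are
// assumptions for illustration only.
const blockTimes = { '0xabc': { timestamp: 1700000000 } } // from the block-time Subgraph
const blockCosts = { '0xabc': { totalGasCost: 21000 } }   // from the block-cost Subgraph
const blockSizes = { '0xabc': { size: 1024 } }            // from the block-size Subgraph

function composeBlockStats(id) {
  // The composed Subgraph reacts to entity changes in each source and
  // merges them into a single consolidated block-stats record.
  return {
    id,
    timestamp: blockTimes[id]?.timestamp ?? null,
    totalGasCost: blockCosts[id]?.totalGasCost ?? null,
    size: blockSizes[id]?.size ?? null,
  }
}

const stats = composeBlockStats('0xabc')
```

The point of the sketch is the shape of the result: each source contributes one dimension, and the composed entity is what your GraphQL queries actually read.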
diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..f2b7abeae26a --- /dev/null +++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Úvod + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. 
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Začněte + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Další zdroje + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx index 4673b362c360..60ad21d2fe95 100644 --- a/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Rychlé a snadné ladění podgrafů pomocí vidliček --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! ## Ok, co to je? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_. ## Co?! Jak? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Ukažte mi prosím nějaký kód! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. Obvyklý způsob, jak se pokusit o opravu, je: 1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (zatímco já vím, že ne). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Počkejte na synchronizaci. 4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá! Nyní můžete mít 2 otázky: @@ -69,18 +69,18 @@ Nyní můžete mít 2 otázky: A já odpovídám: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Vidličkování je snadné, není třeba se potit: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! Takže to dělám takhle: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. Zkontroluji protokoly vytvořené místním graf uzlem a hurá, zdá se, že vše funguje. -5. Nasadím svůj nyní již bezchybný podgraf do vzdáleného uzlu Graf a žiji šťastně až do smrti! (bez brambor) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx index 53750dd1cbee..bdc3671399e1 100644 --- a/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Generátor kódu bezpečného podgrafu --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Proč se integrovat s aplikací Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. 
Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions built to the user's specification. - Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje. -Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +Warning logs are recorded, indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy. Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Graph CLI codegen.
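The "load or initialize with safe defaults" behaviour described above can be illustrated with a small sketch. This is not the generated AssemblyScript API — the helper name, the in-memory store, and the `Gravatar` default values are stand-ins; the real helpers are generated from your GraphQL schema by `graph codegen -u`.

```javascript
// Illustrative sketch of the safe-loading pattern Subgraph Uncrashable's
// generated helpers implement. Names and defaults are hypothetical.
const store = new Map()

function getOrInitializeGravatar(id, warnings = []) {
  let entity = store.get(id)
  if (entity == null) {
    // Instead of crashing on an undefined entity, create one with configured
    // defaults and record a warning for later debugging.
    warnings.push(`Gravatar ${id} loaded before creation; initialized with defaults`)
    entity = { id, displayName: '', imageUrl: '' }
    store.set(id, entity)
  }
  return entity
}

const warnings = []
const gravatar = getOrInitializeGravatar('0x1', warnings)
```

A second load of the same `id` returns the stored entity without logging again, which is what keeps the handlers both crash-free and observable.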
@@ -26,4 +26,4 @@ Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Grap graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx index 3e4f8eee8ccf..510b0ea317f6 100644 --- a/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Použitím [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Příklad -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
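The query URL pattern shown above can be assembled from an API key and a Subgraph ID; a small sketch (the key is a placeholder, and the ID is the CryptoPunks one from this guide):

```javascript
// Sketch: building a gateway query URL from an API key and a Subgraph ID,
// matching the pattern shown in this guide. 'your-own-api-key' is a
// placeholder — create a real key in Subgraph Studio.
function gatewayQueryUrl(apiKey, subgraphId) {
  return `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

const url = gatewayQueryUrl(
  'your-own-api-key',
  'HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK' // CryptoPunks Ethereum Subgraph
)
```

POST your GraphQL query to this URL to receive results from the decentralized network.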
### Další zdroje -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx index 4fbf2b573c14..0ae33c1efe69 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Přehled -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Nefatální -Chyby indexování v již synchronizovaných podgrafech ve výchozím nastavení způsobí selhání podgrafy a zastavení synchronizace. Podgrafy lze alternativně nakonfigurovat tak, aby pokračovaly v synchronizaci i při přítomnosti chyb, a to ignorováním změn provedených obslužnou rutinou, která chybu vyvolala. To dává autorům podgrafů čas na opravu jejich podgrafů, zatímco dotazy jsou nadále obsluhovány proti poslednímu bloku, ačkoli výsledky mohou být nekonzistentní kvůli chybě, která chybu způsobila. Všimněte si, že některé chyby jsou stále fatální. Aby chyba nebyla fatální, musí být známo, že je deterministická. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Povolení nefatálních chyb vyžaduje nastavení následujícího příznaku funkce v manifestu podgraf: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -Zdroje dat souborů jsou novou funkcí podgrafu pro přístup k datům mimo řetězec během indexování robustním a rozšiřitelným způsobem. Zdroje souborových dat podporují načítání souborů ze systému IPFS a z Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> To také vytváří základ pro deterministické indexování dat mimo řetězec a potenciální zavedení libovolných dat ze zdrojů HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -246,7 +246,7 @@ The CID of the file as a readable string can be accessed via the `dataSource` as const cid = dataSource.stringParam() ``` -Příklad +Příklad ```typescript import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' @@ -290,7 +290,7 @@ Příklad: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ Tím se vytvoří nový zdroj dat souborů, který bude dotazovat nakonfigurovan This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Gratulujeme, používáte souborové zdroje dat! -#### Nasazení podgrafů +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. 
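+As a sketch, the two steps typically look like the following; the node and IPFS endpoints and the Subgraph name are placeholders for your own deployment:

```sh
# Compile the mappings and validate the manifest
graph build

# Deploy to a self-hosted Graph Node; replace the URLs and the name with your own
graph deploy my-org/token-metadata \
  --node http://localhost:8020/ \
  --ipfs http://localhost:5001
```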
#### Omezení -Zpracovatelé a entity zdrojů dat souborů jsou izolovány od ostatních entit podgrafů, což zajišťuje, že jsou při provádění deterministické a nedochází ke kontaminaci zdrojů dat založených na řetězci. Přesněji řečeno: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entity vytvořené souborovými zdroji dat jsou neměnné a nelze je aktualizovat - Obsluhy zdrojů dat souborů nemohou přistupovat k entita z jiných zdrojů dat souborů - K entita přidruženým k datovým zdrojům souborů nelze přistupovat pomocí zpracovatelů založených na řetězci -> Ačkoli by toto omezení nemělo být pro většinu případů použití problematické, pro některé může představovat složitost. Pokud máte problémy s modelováním dat založených na souborech v podgrafu, kontaktujte nás prosím prostřednictvím služby Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Kromě toho není možné vytvářet zdroje dat ze zdroje dat souborů, ať už se jedná o zdroj dat v řetězci nebo jiný zdroj dat souborů. Toto omezení může být v budoucnu zrušeno. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. 
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. 
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Protože se při roubování základní data spíše kopírují než indexují, je mnohem rychlejší dostat podgraf do požadovaného bloku než při indexování od nuly, i když počáteční kopírování dat může u velmi velkých podgrafů trvat i několik hodin. Během inicializace roubovaného podgrafu bude uzel Graf Uzel zaznamenávat informace o typů entit, které již byly zkopírovány. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. 
Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Přidává nebo odebírá typy entit - Odstraňuje atributy z typů entit @@ -560,4 +560,4 @@ Protože se při roubování základní data spíše kopírují než indexují, - Přidává nebo odebírá rozhraní - Mění se, pro které typy entit je rozhraní implementováno -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx index fad0d6ebaa1a..00fb7cbcf275 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ Pokud není pro pole v nové entitě se stejným ID nastavena žádná hodnota, ## Generování kódu -Aby byla práce s inteligentními smlouvami, událostmi a entitami snadná a typově bezpečná, může Graf CLI generovat typy AssemblyScript ze schématu GraphQL podgrafu a ABI smluv obsažených ve zdrojích dat. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. To se provádí pomocí @@ -80,7 +80,7 @@ To se provádí pomocí graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx index 3c3dbdc7671f..e794c1caa32c 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API: ### Verze -`apiVersion` v manifestu podgrafu určuje verzi mapovacího API, kterou pro daný podgraf používá uzel Graf. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Verze | Poznámky vydání | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. | -| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum
Přidání pole `receipt` do objektu Ethereum událost | -| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction
Přidáno `baseFeePerGas` do objektu Ethereum bloku | +| Verze | Poznámky vydání | +| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. | +| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum
Přidání pole `receipt` do objektu Ethereum událost | +| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction
Přidáno `baseFeePerGas` do objektu Ethereum bloku | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce | +| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce | ### Vestavěné typy @@ -147,7 +147,7 @@ _Math_ - `x.notEqual(y: BigInt): bool` –lze zapsat jako `x != y`. - `x.lt(y: BigInt): bool` – lze zapsat jako `x < y`. - `x.le(y: BigInt): bool` – lze zapsat jako `x <= y`. -- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`. +- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`. - `x.ge(y: BigInt): bool` – lze zapsat jako `x >= y`. - `x.neg(): BigInt` – lze zapsat jako `-x`. - `x.divDecimal(y: BigDecimal): BigDecimal` – dělí desetinným číslem, čímž získá desetinný výsledek. @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' API `store` umožňuje načítat, ukládat a odebírat entity z a do úložiště Graf uzel. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Vytváření entity @@ -282,8 +282,8 @@ Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphproto The store API facilitates the retrieval of entities that were created or updated in the current block. 
A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ Ethereum API poskytuje přístup k inteligentním smlouvám, veřejným stavový #### Podpora typů Ethereum -Stejně jako u entit generuje `graph codegen` třídy pro všechny inteligentní smlouvy a události používané v podgrafu. Za tímto účelem musí být ABI kontraktu součástí zdroje dat v manifestu podgrafu. Obvykle jsou soubory ABI uloženy ve složce `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Ve vygenerovaných třídách probíhají konverze mezi typy Ethereum [built-in-types](#built-in-types) v pozadí, takže se o ně autoři podgraf nemusí starat. 
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -To ilustruje následující příklad. Je dáno schéma podgrafu, jako je +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Přístup ke stavu inteligentní smlouvy -Kód vygenerovaný nástrojem `graph codegen` obsahuje také třídy pro inteligentní smlouvy používané v podgrafu. Ty lze použít k přístupu k veřejným stavovým proměnným a k volání funkcí kontraktu v aktuálním bloku. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Běžným vzorem je přístup ke smlouvě, ze které událost pochází. Toho lze dosáhnout pomocí následujícího kódu: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { Pokud má smlouva `ERC20Contract` na platformě Ethereum veřejnou funkci pouze pro čtení s názvem `symbol`, lze ji volat pomocí `.symbol()`. Pro veřejné stavové proměnné se automaticky vytvoří metoda se stejným názvem. -Jakákoli jiná smlouva, která je součástí podgrafu, může být importována z vygenerovaného kódu a může být svázána s platnou adresou. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Zpracování vrácených volání @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. 
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from the arguments. `log` API obsahuje následující funkce: @@ -590,7 +590,7 @@ The `log` API allows subgraphs to log information to the Graph Node standard out - `log.info(fmt: string, args: Array): void` - zaznamená informační zprávu. - `log.warning(fmt: string, args: Array): void` - zaznamená varování. - `log.error(fmt: string, args: Array): void` - zaznamená chybovou zprávu. -- `log.critical(fmt: string, args: Array): void` - zaznamená kritickou zprávu _a_ ukončí podgraf. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. `log` API přebírá formátovací řetězec a pole řetězcových hodnot. Poté nahradí zástupné symboly řetězcovými hodnotami z pole. První zástupný symbol „{}“ bude nahrazen první hodnotou v poli, druhý zástupný symbol „{}“ bude nahrazen druhou hodnotou a tak dále. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) V současné době je podporován pouze příznak `json`, který musí být předán souboru `ipfs.map`. S příznakem `json` se soubor IPFS musí skládat z řady hodnot JSON, jedna hodnota na řádek. Volání příkazu `ipfs.map` přečte každý řádek souboru, deserializuje jej do hodnoty `JSONValue` a pro každou z nich zavolá zpětné volání. Zpětné volání pak může použít operace entit k uložení dat z `JSONValue`. Změny entit se uloží až po úspěšném ukončení obsluhy, která volala `ipfs.map`; do té doby se uchovávají v paměti, a velikost souboru, který může `ipfs.map` zpracovat, je proto omezená. -Při úspěchu vrátí `ipfs.map` hodnotu `void`. Pokud vyvolání zpětného volání způsobí chybu, obslužná rutina, která vyvolala `ipfs.map`, se přeruší a podgraf se označí jako neúspěšný. +On success, `ipfs.map` returns `void`.
If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ Základní třída `Entity` a podřízená třída `DataSourceContext` mají pom ### DataSourceContext v manifestu -Sekce `context` v rámci `dataSources` umožňuje definovat páry klíč-hodnota, které jsou přístupné v rámci mapování podgrafů. Dostupné typy jsou `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` a `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Zde je příklad YAML ilustrující použití různých typů v sekci `context`: @@ -887,4 +887,4 @@ dataSources: - `Seznam`: Určuje seznam položek. U každé položky je třeba zadat její typ a data. - `BigInt`: Určuje velkou celočíselnou hodnotu. Kvůli velké velikosti musí být uvedena v uvozovkách. -Tento kontext je pak přístupný v souborech mapování podgrafů, což umožňuje vytvářet dynamičtější a konfigurovatelnější podgrafy. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx index 79ec3df1a827..419f698e68e4 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Běžné problémy se AssemblyScript --- -Při vývoji podgrafů se často vyskytují určité problémy [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Jejich obtížnost při ladění je různá, nicméně jejich znalost může pomoci. 
Následuje neúplný seznam těchto problémů: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Rozsah se nedědí do [uzavíracích funkcí](https://www.assemblyscript.org/status.html#on-closures), tj. proměnné deklarované mimo uzavírací funkce nelze použít. Vysvětlení v [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx index dbeac0c137a5..536b416c9465 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Instalace Graf CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers.
To learn more about signaling, check out [curating](/resources/roles/curating/). ## Přehled -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Začínáme @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. 
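Once an API key exists, queries against a published Subgraph go over plain GraphQL-over-HTTP through the gateway. The following TypeScript sketch shows how such a request is composed; the gateway URL shape is assumed from The Graph's gateway convention, and the key and Subgraph ID are placeholders, not values from this guide:

```typescript
// Hedged sketch: composing a GraphQL query against a published Subgraph.
// The endpoint shape and the placeholder key/ID are assumptions.
const GATEWAY = "https://gateway.thegraph.com/api";

// Build the per-key query endpoint for a Subgraph published to the network.
function gatewayUrl(apiKey: string, subgraphId: string): string {
  return `${GATEWAY}/${apiKey}/subgraphs/id/${subgraphId}`;
}

// Build the JSON body expected by a GraphQL-over-HTTP endpoint.
function graphqlBody(query: string): string {
  return JSON.stringify({ query });
}

// Example (not executed here): POST graphqlBody("{ gravatars(first: 5) { id } }")
// to gatewayUrl("<api-key>", "<subgraph-id>") with Content-Type: application/json.
```

The same body shape works against a Subgraph Studio development endpoint; only the URL changes.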
## Vytvoření podgrafu ### Ze stávající smlouvy -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### Z příkladu podgrafu -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. 
They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Soubor(y) ABI se musí shodovat s vaší smlouvou. Soubory ABI lze získat několika způsoby: - Pokud vytváříte vlastní projekt, budete mít pravděpodobně přístup k nejaktuálnějším ABI. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Verze | Poznámky vydání | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx index c0a99bb516eb..dcc831244293 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Přehled -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Typ | Popis | -| --- | --- | -| `Bytes` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Typ | Popis | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -U vztahů typu "jeden k mnoha" by měl být vztah vždy uložen na straně "jeden" a strana "mnoho" by měla být vždy odvozena. 
Uložení vztahu tímto způsobem namísto uložení pole entit na straně "mnoho" povede k výrazně lepšímu výkonu jak při indexování, tak při dotazování na podgraf. Obecně platí, že ukládání polí entit je třeba se vyhnout, pokud je to praktické. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Příklad @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Tento propracovanější způsob ukládání vztahů mnoho-více vede k menšímu množství dat uložených pro podgraf, a tedy k podgrafu, který je často výrazně rychlejší při indexování a dotazování. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Přidání komentářů do schématu @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## Podporované jazyky diff --git a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx index 436b407a19ba..04f1eee28246 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Přehled -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Verze | Poznámky vydání | +| :---: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx index a434110b4282..d86f86f9381c 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Přehled -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). Důležité položky, které je třeba v manifestu aktualizovat, jsou: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Zpracovatelé hovorů -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Obsluhy volání se spustí pouze v jednom ze dvou případů: když je zadaná funkce volána jiným účtem než samotnou smlouvou nebo když je v Solidity označena jako externí a volána jako součást jiné funkce ve stejné smlouvě. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API.
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Definice obsluhy volání @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Funkce mapování -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Obsluha bloků -Kromě přihlášení k událostem smlouvy nebo volání funkcí může podgraf chtít aktualizovat svá data, když jsou do řetězce přidány nové bloky. Za tímto účelem může podgraf spustit funkci po každém bloku nebo po blocích, které odpovídají předem definovanému filtru. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
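The block-handler paragraph above can be made concrete with a minimal manifest fragment. This is a sketch rather than part of the example project on this page: it assumes a mapping function named `handleBlock` and omits the surrounding `dataSources.mapping` fields shown in the other examples:

```yaml
blockHandlers:
  # No filter: handleBlock runs once for every new block
  - handler: handleBlock
```

The filter variants described in the next section are added as a `filter` field on such an entry.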
### Podporované filtry @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. Protože pro obsluhu bloku neexistuje žádný filtr, zajistí, že obsluha bude volána každý blok. Zdroj dat může obsahovat pouze jednu blokovou obsluhu pro každý typ filtru. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Jednou Filtr @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Definovaný obslužná rutina s filtrem once bude zavolána pouze jednou před spuštěním všech ostatních rutin. Tato konfigurace umožňuje, aby podgraf používal obslužný program jako inicializační obslužný, který provádí specifické úlohy na začátku indexování. +The defined handler with the once filter will be called only once before all other handlers run.
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Funkce mapování -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Výchozí bloky -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Tipy indexátor -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prořezávat -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
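To make the "history" notion concrete, a time travel query pins entity state to a past block. Below is a hedged sketch against the Gravatar schema used earlier on this page; the `block` argument is part of The Graph's GraphQL query API, and the block number is an arbitrary example:

```graphql
{
  gravatars(block: { number: 8000000 }) {
    id
    displayName
  }
}
```

If history as of block 8000000 has been pruned, a query like this fails rather than returning the old entity states.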
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: Uchování určitého množství historických dat: @@ -532,3 +532,18 @@ Zachování kompletní historie entitních států: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Verze | Poznámky vydání | +| :---: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx index fd0130dd672a..691624b81344 100644 --- a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Rámec pro testování jednotek --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
## Začínáme @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project, and run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Možnosti CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Ukázkový podgraf +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Videonávody -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im A je to tady - vytvořili jsme první test! 👏 -Pro spuštění našich testů nyní stačí v kořenové složce podgrafu spustit následující příkaz: +Now, in order to run our tests, you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Pokrytí test -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Další zdroje -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme). ## Zpětná vazba diff --git a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx index 77f05e1ad499..e9848601ebc7 100644 --- a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph, you first need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Nasazení podgrafu do více sítí +## Deploying the Subgraph to multiple networks -V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code.
The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a standard `json` file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this: ```yaml # ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` informs whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Zásady archivace subgrafů Subgraph Studio +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Kontrola stavu podgrafů +## Checking Subgraph health -Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph.
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
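For reference, a status query of the kind described above can be sketched as follows. This is an illustration, not an exact copy of the example in the docs: the field names come from the linked graph-node index-node schema, and `org/example-subgraph` is a placeholder name:

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "org/example-subgraph") {
    synced
    health
    fatalError {
      message
      block {
        number
        hash
      }
      handler
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```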
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx index 7c53f174237a..14be0175123c 100644 --- a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Vytváření a správa klíčů API pro konkrétní podgrafy +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs

-### Jak vytvořit podgraf v Podgraf Studio
+### How to Create a Subgraph in Subgraph Studio

@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli

-### Kompatibilita podgrafů se sítí grafů
+### Subgraph Compatibility with The Graph Network

-Aby mohly být podgrafy podporovány indexátory v síti grafů, musí:
-
-- Index a [supported network](/supported-networks/)
-- Nesmí používat žádnou z následujících funkcí:
-  - ipfs.cat & ipfs.map
-  - Nefatální
-  - Roubování
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.

## Initialize Your Subgraph

-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:

```bash
graph init
```

-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio, see image below:

![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
-## Autorizace grafu
+## Graph Auth

-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.

Then, use the following command to authenticate from the CLI:

@@ -91,11 +85,11 @@ graph auth

## Deploying a Subgraph

-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.

-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.

-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:

```bash
graph deploy

@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.

## Testing Your Subgraph

-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.

-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).

-## Automatická archivace verzí podgrafů
+## Automatic Archiving of Subgraph Versions

-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.

-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

diff --git a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
index e07a7f06fb48..2c5d8903c4d9 100644
--- a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o

## Subgraph Related

-### 1. Co je to podgraf?
+### 1. What is a Subgraph?

-A subgraph is a custom API built on blockchain data.
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Mohu změnit účet GitHub přidružený k mému podgrafu? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?

-Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName" nahraďte názvem organizace, pod kterou je publikován, a názvem vašeho podgrafu:
+Yes! Try the following command, substituting "organization/subgraphName" with the organization it is published under and the name of your Subgraph:

@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }

### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?

-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.

## Miscellaneous

diff --git a/website/src/pages/cs/subgraphs/developing/introduction.mdx b/website/src/pages/cs/subgraphs/developing/introduction.mdx
index 110d7639aded..b040c749c6ca 100644
--- a/website/src/pages/cs/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/cs/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin

On The Graph, you can:

-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
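When querying Subgraphs with GraphQL as described above, large collections are typically paginated with `first`/`skip` (the `someCollection(first: 1000, skip: ...)` pattern shown in the FAQ). A minimal sketch in Python that builds the query documents for successive pages — `someCollection` and the `id` field are illustrative placeholders, not a real schema:

```python
# Build GraphQL documents that page through a collection 1000 items
# at a time. Field names are hypothetical; adapt them to your schema.
PAGE_SIZE = 1000

def page_queries(total_pages: int) -> list[str]:
    """Return the query documents for the first `total_pages` pages."""
    return [
        f"{{ someCollection(first: {PAGE_SIZE}, skip: {i * PAGE_SIZE}) {{ id }} }}"
        for i in range(total_pages)
    ]

for q in page_queries(3):
    print(q)  # send each document with your preferred HTTP client
```

In practice you would stop once a page comes back with fewer than `PAGE_SIZE` items rather than fixing the page count up front.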
diff --git a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx index 77896e36a45d..b8c2330ca49d 100644 --- a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Kurátoři již nebudou moci signalizovat na podgrafu. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx index ed8846e26498..29c75273aa17 100644 --- a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Zveřejnění podgrafu v decentralizované síti +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aktualizace metadata publikovaného podgrafu +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Přidání signálu do podgrafu +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Přidání signálu do podgrafu, který nemá nárok na odměny, nepřiláká další indexátory. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
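The 3,000 GRT recommendation above interacts with the curation tax mentioned earlier: the tax is deducted from the deposit, so slightly more GRT must be deposited to end up with a given amount of signal. A rough sketch in Python — the tax rate here is a hypothetical placeholder, so check the current protocol parameter before relying on it:

```python
# Hypothetical curation tax rate — verify the live protocol parameter.
CURATION_TAX = 0.01
RECOMMENDED_SIGNAL = 3_000  # GRT suggested in the docs above

def grt_to_deposit(target_signal: float, tax: float = CURATION_TAX) -> float:
    """GRT to deposit so that `target_signal` remains after the tax."""
    return target_signal / (1 - tax)

print(round(grt_to_deposit(RECOMMENDED_SIGNAL), 2))
```

This is only an estimating aid; the actual tax is applied onchain at signal time.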
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Případně můžete přidat signál GRT do publikovaného podgrafu z Průzkumníka grafů. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx index f197aabdc49c..a998db9c316d 100644 --- a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Podgrafy ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Životní cyklus podgrafů -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
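As a quick illustration of the querying model mentioned above, a query to a published Subgraph is an ordinary GraphQL request sent as a JSON POST body. The sketch below is hypothetical: the entity and field names depend entirely on your Subgraph's schema, and the endpoint shown in the comment is a placeholder, not taken from this document.

```typescript
// Hypothetical query -- entity and field names depend on your Subgraph's schema.
const query = `{ tokens(first: 5) { id owner } }`;

// The Graph's query endpoints accept a standard GraphQL POST body:
// {"query": "...", "variables": {...}}
const payload = JSON.stringify({ query, variables: {} });

// This body would be POSTed (e.g. with fetch) to your Subgraph's query URL,
// for example a gateway URL of the placeholder form:
//   https://gateway.thegraph.com/api/<API-KEY>/subgraphs/id/<SUBGRAPH-ID>
console.log(JSON.parse(payload).query.includes("tokens")); // true
```

Any GraphQL client (fetch, Apollo, urql) can send this body; nothing about the request is specific to The Graph beyond the endpoint URL.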
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/cs/subgraphs/explorer.mdx b/website/src/pages/cs/subgraphs/explorer.mdx index b679cdbb8c43..2d918567ee9d 100644 --- a/website/src/pages/cs/subgraphs/explorer.mdx +++ b/website/src/pages/cs/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Průzkumník grafů --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Přehled -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signál/nesignál na podgraf +- Signal/Un-signal on Subgraphs - Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata -- Přepínání verzí pro zkoumání minulých iterací podgrafu -- Dotazování na podgrafy prostřednictvím GraphQL -- Testování podgrafů na hřišti -- Zobrazení indexátorů, které indexují na určitém podgrafu +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Statistiky podgrafů (alokace, kurátoři atd.) -- Zobrazení subjektu, který podgraf zveřejnil +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Kurátoři -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Tab Podgrafy -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Tab Indexování -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy. 
Zobrazí se následující metriky: @@ -223,13 +223,13 @@ Nezapomeňte, že tento graf lze horizontálně posouvat, takže pokud se posune ### Tab Kurátorství -Na kartě Kurátorství najdete všechny dílčí grafy, na které signalizujete (a které vám tak umožňují přijímat poplatky za dotazy). Signalizace umožňuje kurátorům upozornit indexátory na to, které podgrafy jsou hodnotné a důvěryhodné, a tím signalizovat, že je třeba je indexovat. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Na této tab najdete přehled: -- Všechny dílčí podgrafy, na kterých kurátor pracuje, s podrobnostmi o signálu -- Celkové podíly na podgraf -- Odměny za dotaz na podgraf +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Aktualizováno v detailu data ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/cs/subgraphs/guides/arweave.mdx b/website/src/pages/cs/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..dff8facf77d4 --- /dev/null +++ b/website/src/pages/cs/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: Vytváření podgrafů na Arweave +--- + +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! + +V této příručce se dozvíte, jak vytvořit a nasadit subgrafy pro indexování blockchainu Arweave. + +## Co je Arweave? + +Protokol Arweave umožňuje vývojářům ukládat data trvale a to je hlavní rozdíl mezi Arweave a IPFS, kde IPFS tuto funkci postrádá; trvalé uložení a soubory uložené na Arweave nelze měnit ani mazat. + +Společnost Arweave již vytvořila řadu knihoven pro integraci protokolu do řady různých programovacích jazyků. 
Další informace naleznete zde: + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Arweave Resources](https://www.arweave.org/build) + +## Co jsou podgrafy Arweave? + +The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). + +[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. + +## Vytvoření podgrafu Arweave + +Abyste mohli sestavit a nasadit Arweave Subgraphs, potřebujete dva balíčky: + +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. + +## Komponenty podgrafu + +There are three components of a Subgraph: + +### 1. Manifest - `subgraph.yaml` + +Definuje zdroje dat, které jsou předmětem zájmu, a způsob jejich zpracování. Arweave je nový druh datového zdroje. + +### 2. Schema - `schema.graphql` + +Zde definujete, na která data se chcete po indexování subgrafu pomocí jazyka GraphQL dotazovat. Je to vlastně podobné modelu pro API, kde model definuje strukturu těla požadavku. + +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +### 3. AssemblyScript Mappings - `mapping.ts` + +Jedná se o logiku, která určuje, jak mají být data načtena a uložena, když někdo komunikuje se zdroji dat, kterým nasloucháte. 
Data se přeloží a uloží na základě schématu, které jste uvedli. + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## Definice podgrafu Manifest + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Zdroje dat Arweave obsahují nepovinné pole source.owner, což je veřejný klíč peněženky Arweave + +Datové zdroje Arweave podporují dva typy zpracovatelů: + +- `blockHandlers` - Run on every new Arweave block. No source.owner is required. +- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. 
Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` + +> Source.owner může být adresa vlastníka nebo jeho veřejný klíč. +> +> Transakce jsou stavebními kameny permaweb Arweave a jsou to objekty vytvořené koncovými uživateli. +> +> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. + +## Definice schématu + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +## AssemblyScript Mapování + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). + +```tsx +class Block { + timestamp: u64 + lastRetarget: u64 + height: u64 + indepHash: Bytes + nonce: Bytes + previousBlock: Bytes + diff: Bytes + hash: Bytes + txRoot: Bytes + txs: Bytes[] + walletList: Bytes + rewardAddr: Bytes + tags: Tag[] + rewardPool: Bytes + weaveSize: Bytes + blockSize: Bytes + cumulativeDiff: Bytes + hashListMerkle: Bytes + poa: ProofOfAccess +} + +class Transaction { + format: u32 + id: Bytes + lastTx: Bytes + owner: Bytes + tags: Tag[] + target: Bytes + quantity: Bytes + data: Bytes + dataSize: Bytes + dataRoot: Bytes + signature: Bytes + reward: Bytes +} +``` + +Block handlers receive a `Block`, while transactions receive a `Transaction`. + +Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). 
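To make the handler flow concrete, here is a minimal runnable TypeScript sketch of what a block handler's logic amounts to. It uses stub types standing in for the real `graph-ts` entity store (the `BlockEntity` class, the `store` map, and the field subset chosen are illustrative assumptions, not the actual graph-ts API): receive a block, map the fields your schema cares about onto an entity keyed by its ID, and save it.

```typescript
// Stub entity class standing in for a schema-generated graph-ts entity -- illustration only.
class BlockEntity {
  constructor(
    public id: string,        // entity ID, here the block's independent hash
    public height: number,
    public timestamp: number,
  ) {}
}

// Stub standing in for the Graph Node entity store.
const store = new Map<string, BlockEntity>();

// Mirrors the shape of an Arweave block handler: map incoming block
// fields onto an entity and persist it.
function handleBlock(block: { indepHash: string; height: number; timestamp: number }): void {
  const entity = new BlockEntity(block.indepHash, block.height, block.timestamp);
  store.set(entity.id, entity);
}

handleBlock({ indepHash: "abc123", height: 42, timestamp: 1700000000 });
console.log(store.get("abc123")?.height); // 42
```

In a real mapping, `BlockEntity` would be generated by `graph codegen` from your `schema.graphql`, and `entity.save()` would persist it through Graph Node rather than a local map.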
+ +## Nasazení podgrafu Arweave v Podgraf Studio + +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. + +```bash +graph deploy --access-token +``` + +## Dotazování podgrafu Arweave + +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Příklady podgrafů + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Mohu indexovat uložené soubory v Arweave? + +V současné době The Graph indexuje pouze Arweave jako blockchain (jeho bloky a transakce). + +### Can I identify Bundlr bundles in my Subgraph? + +Toto není aktuálně podporováno. + +### Jak mohu filtrovat transakce na určitý účet? + +Source.owner může být veřejný klíč uživatele nebo adresa účtu. + +### Jaký je aktuální formát šifrování? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
+ +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (!urlSafe && i === l) { // 2 octets yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; +} +``` diff --git a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..9f53796b8066 --- /dev/null +++ 
b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@ +--- +title: Smart Contract Analysis with Cana CLI +--- + +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. + +## Overview + +**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for Subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of added chains, run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. 
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +Or: + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for Subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support Subgraph development with ease. diff --git a/website/src/pages/cs/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..7cc0e6c0ed78 --- /dev/null +++ b/website/src/pages/cs/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. 
+ +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. 
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. + +### With Enums + +Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. + +Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. + +## Defining Enums for NFT Marketplaces + +> Note: The following guide uses the CryptoCoven NFT smart contract. + +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: + +```gql +# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) +enum Marketplace { + OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace + OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace + SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace + LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace + # ...and other marketplaces +} +``` + +## Using Enums for NFT Marketplaces + +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. + +For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. 
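The typo problem above can be made concrete with a small runnable sketch (plain TypeScript, not Subgraph code; the status names are the hypothetical ones from this example). A schema-level enum gives you this kind of guard for free:

```typescript
// The allowed statuses, mirroring the TokenStatus enum from the schema.
const TOKEN_STATUSES = ["OriginalOwner", "SecondOwner", "ThirdOwner"] as const;
type TokenStatus = (typeof TOKEN_STATUSES)[number];

// With a plain string field, nothing stops a typo from being stored.
// This predicate is what the enum enforces at the schema level.
function isTokenStatus(value: string): value is TokenStatus {
  return (TOKEN_STATUSES as readonly string[]).includes(value);
}

console.log(isTokenStatus("OriginalOwner")); // true
console.log(isTokenStatus("Orgnalowner")); // false — the typo is caught
```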
+ +### Implementing a Function for NFT Marketplaces + +Here's how you can implement a function to retrieve the marketplace name from the enum as a string: + +```ts +export function getMarketplaceName(marketplace: Marketplace): string { + // Using if-else statements to map the enum value to a string + if (marketplace === Marketplace.OpenSeaV1) { + return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation + } else if (marketplace === Marketplace.OpenSeaV2) { + return 'OpenSeaV2' + } else if (marketplace === Marketplace.SeaPort) { + return 'SeaPort' // If the marketplace is SeaPort, return its string representation + } else if (marketplace === Marketplace.LooksRare) { + return 'LooksRare' // If the marketplace is LooksRare, return its string representation + // ... and other marketplaces + } + return 'Unknown' // Fallback so the function always returns a string +} +``` + +## Best Practices for Using Enums + +- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. +- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. +- **Documentation:** Add comments to enums to clarify their purpose and usage. + +## Using Enums in Queries + +Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. + +**Specifics** + +- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. +- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. + +### Sample Queries + +#### Query 1: Account With The Highest NFT Marketplace Interactions + +This query does the following: + +- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. 
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
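As a footnote to the centralized-management best practice above: the if/else chain shown earlier in this guide can also be written as a single lookup table, keeping the enum-to-string mapping in one place. This is a plain TypeScript sketch using the same hypothetical marketplace names, not the guide's actual mapping code:

```typescript
// Hypothetical marketplace values mirroring this guide's schema enum.
enum Marketplace {
  OpenSeaV1,
  OpenSeaV2,
  SeaPort,
  LooksRare,
}

// One central table instead of an if/else branch per value.
const MARKETPLACE_NAMES: Record<Marketplace, string> = {
  [Marketplace.OpenSeaV1]: "OpenSeaV1",
  [Marketplace.OpenSeaV2]: "OpenSeaV2",
  [Marketplace.SeaPort]: "SeaPort",
  [Marketplace.LooksRare]: "LooksRare",
};

function getMarketplaceName(marketplace: Marketplace): string {
  // Fall back to "Unknown" so the function always returns a string.
  return MARKETPLACE_NAMES[marketplace] ?? "Unknown";
}

console.log(getMarketplaceName(Marketplace.SeaPort)); // "SeaPort"
```

Adding a new marketplace then means touching only the enum and the table, which the compiler cross-checks against each other.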
diff --git a/website/src/pages/cs/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..a7bad43c9c1f --- /dev/null +++ b/website/src/pages/cs/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@ +--- +title: Replace a Contract and Keep its History With Grafting +--- + +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. + +## What is Grafting? + +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +For more information, you can check: + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. + +## Important Note on Grafting When Upgrading to the Network + +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network + +### Why Is This Important? 
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By following these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting. + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It returns something like this: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + } + ] + } +} +``` + +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. + +## Deploying the Grafting Subgraph + +The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. 
If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It should return the following: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + }, + { + "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000", + "amount": "0", + "when": "1716429732" + } + ] + } +} +``` + +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. + +Congrats! You have successfully grafted a Subgraph onto another Subgraph. 
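Before deploying, it can help to sanity-check the two graft fields programmatically. Here is a small illustrative sketch in plain TypeScript (not an official graph-cli feature) that validates the shape of the `graft` section described earlier:

```typescript
// Minimal check of the graft section: the base must be an IPFS-style
// deployment ID ("Qm...") and the block a positive integer.
// Illustrative sketch only, not part of graph-cli.
interface GraftConfig {
  base: string;
  block: number;
}

function validateGraft(graft: GraftConfig): string[] {
  const errors: string[] = [];
  if (!graft.base.startsWith("Qm")) {
    errors.push("graft.base should be the Deployment ID of the base Subgraph (starts with 'Qm')");
  }
  if (!Number.isInteger(graft.block) || graft.block <= 0) {
    errors.push("graft.block should be a positive block number");
  }
  return errors;
}

console.log(validateGraft({ base: "QmabcExampleDeploymentId", block: 5956000 })); // []
console.log(validateGraft({ base: "0xdeadbeef", block: -1 }).length); // 2
```

A check like this catches the common mistake of pasting a contract address instead of the base Subgraph's Deployment ID.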
+ +## Additional Resources + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml) + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results. + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/cs/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..275c2aba0fd4 --- /dev/null +++ b/website/src/pages/cs/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Building Subgraphs on NEAR +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. 
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying Receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
+ +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### Subgraph Manifest Definition + +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings +``` + +- NEAR Subgraphs introduce a new `kind` of data source (`near`) +- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least a prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. 
If only a list of prefixes or suffixes is necessary, the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+ +```typescript +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array<string>, + receiptIds: Array<Bytes>, + tokensBurnt: BigInt, + executorId: string, +} + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array<DataReceiver>, + inputDataIds: Array<CryptoHash>, + actions: Array<ActionValue>, +} + +class BlockHeader { + height: u64, + prevHeight: u64, // Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, +} + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, +} + +class Block { + author: string, + header: BlockHeader, + chunks: Array<ChunkHeader>, +} + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, +} +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
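The prefix/suffix account matching described in the manifest section earlier can be sketched as a plain TypeScript predicate. This is an illustration of the documented rule (an omitted list places no constraint on its side), not Graph Node's actual implementation:

```typescript
// An account matches if it starts with one of the prefixes AND ends with
// one of the suffixes; an empty list means "no constraint" on that side.
// Illustrative sketch of the documented matching rule.
function matchesAccount(account: string, prefixes: string[], suffixes: string[]): boolean {
  const prefixOk = prefixes.length === 0 || prefixes.some((p) => account.startsWith(p));
  const suffixOk = suffixes.length === 0 || suffixes.some((s) => account.endsWith(s));
  return prefixOk && suffixOk;
}

const prefixes = ["app", "good"];
const suffixes = ["morning.near", "morning.testnet"];

console.log(matchesAccount("app.good-morning.near", prefixes, suffixes)); // true
console.log(matchesAccount("good-morning.testnet", prefixes, suffixes)); // true
console.log(matchesAccount("evening.near", prefixes, suffixes)); // false
```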
+ +Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). + +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". + +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: + +```sh +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend on where the Subgraph is being deployed. + +### Subgraph Studio + +```sh +graph auth +graph deploy +``` + +### Local Graph Node (based on default configuration) + +```sh +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +``` + +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured + +We will soon provide more information on running the above components. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph. Once that version is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise, please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel, or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/cs/subgraphs/guides/polymarket.mdx b/website/src/pages/cs/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+![Polymarket Playground](/img/Polymarket-playground.png)
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example shows how to send a query to this endpoint from Node.js.
+
+### Sample Code from Node.js
+
+```js
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..d311cfa5117e
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained via denial-of-service attacks.
+- The Graph Network gateways have denial-of-service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks, since the server can go down.
+
+### Why It Matters
+
+In a standard React application, API keys included in the frontend code can be exposed on the client side, posing a security risk. While `.env` files are commonly used, they do not fully protect the keys, since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this problem by handling sensitive operations server-side.
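To make the exposure concrete, here is a minimal sketch in plain JavaScript (the key and Subgraph ID are made-up placeholders): any gateway URL bundled into client-side code can be read back out of the shipped JavaScript or the browser's network tab.

```javascript
// A gateway URL as it would appear in client-side code (placeholder values).
const clientSideUrl = 'https://gateway.thegraph.com/api/MY_SECRET_KEY/subgraphs/id/SomeSubgraphId';

// Anyone inspecting the bundle can recover the embedded key:
const leakedKey = clientSideUrl.match(/\/api\/([^/]+)\/subgraphs\//)[1];
console.log(leakedKey); // → "MY_SECRET_KEY"
```

A server component avoids this entirely: `process.env.API_KEY` is read on the server, so no key-bearing string is ever shipped to the browser.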
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In the root of our Next.js project, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api_key_here>`.
+
+### Step 2: Create a Server Component
+
+1. In the `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement a Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Run our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..f5480ab15a48
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
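Concretely, a composed Subgraph declares a source Subgraph in its manifest with a `kind: subgraph` data source, triggered by the entities the source stores. The fragment below is a hedged sketch: the names, deployment ID, and handler are illustrative placeholders, and the exact fields should be checked against the specVersion 1.3.0 release notes.

```yaml
# Illustrative manifest fragment for a composed Subgraph (placeholder values).
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a source Subgraph instead of an onchain contract
    name: SourceBlocks
    network: mainnet
    source:
      address: 'Qm...' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Block
      handlers:
        - handler: handleBlock # runs when the source stores a Block entity
          entity: Block
      file: ./src/mapping.ts
```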
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn't allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use additional aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can't use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Deploy Block Size Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, improving both development and maintenance efficiency.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..60ad21d2fe95
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to 1, otherwise: Hooray!
+
+It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3.
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is how I do it:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:

```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```

4. I inspect the logs produced by the local Graph Node and, hooray, everything seems to be working.
5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..bdc3671399e1
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity, and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..510b0ea317f6
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+
+## Benefits of Switching to The Graph
+
+- Use the same Subgraph that your apps already use with zero-downtime migration.
+- Increase reliability from a global network supported by 100+ Indexers.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
+
+## Upgrade Your Subgraph to The Graph in 3 Easy Steps
+
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1. Set Up Your Studio Environment
+
+### Create a Subgraph in Subgraph Studio
+
+- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
+
+### Install the Graph CLI
+
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+On your local machine, run the following command:
+
+Using [npm](https://www.npmjs.com/):
+
+```sh
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+Use the following command to create a Subgraph in Studio using the CLI:
+
+```sh
+graph init --product subgraph-studio
+```
+
+### Authenticate Your Subgraph
+
+In The Graph CLI, use the auth command seen in Subgraph Studio:
+
+```sh
+graph auth
+```
+
+## 2. Deploy Your Subgraph to Studio
+
+If you have your source code, you can easily deploy it to Studio.
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy <subgraph-name> --ipfs-hash <your-subgraph-ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+ +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/cs/subgraphs/querying/best-practices.mdx b/website/src/pages/cs/subgraphs/querying/best-practices.mdx index a28d505b9b46..038319488eda 100644 --- a/website/src/pages/cs/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/cs/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Osvědčené postupy dotazování The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Plně zadaný výsledekv @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx index b5e719983167..ef667e6b74c2 100644 --- a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Dotazování z aplikace +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
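The single-query pattern described above (a plural entity plus an `id_in` filter) can be sketched as a small query builder; the entity name `tokens` and its fields are placeholders for whatever your Subgraph's schema defines:

```python
def batch_query(entity: str, ids: list[str], fields: list[str]) -> str:
    """Build one GraphQL query fetching several records by id,
    instead of issuing one singular query per id."""
    id_list = ", ".join(f'"{i}"' for i in ids)
    field_block = "\n    ".join(fields)
    return (
        f"{{\n  {entity}(where: {{id_in: [{id_list}]}}) {{\n"
        f"    {field_block}\n  }}\n}}"
    )

print(batch_query("tokens", ["0x1", "0x2", "0x3"], ["id", "owner"]))
```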
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Plně zadaný výsledekv @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Krok 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Krok 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Krok 1 diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/README.md b/website/src/pages/cs/subgraphs/querying/graph-client/README.md index 416cadc13c6f..5dc2cfc408de 100644 --- a/website/src/pages/cs/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/cs/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Začínáme You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Příklady You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/live.md b/website/src/pages/cs/subgraphs/querying/graph-client/live.md index e6f726cb4352..0e3b535bd5d6 100644 --- a/website/src/pages/cs/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/cs/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Začínáme Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx index f0cc9b78b338..1a5e672ccbd5 100644 --- a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltextové Vyhledávání dotazy -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. 
Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Operátory fulltextového vyhledávání: -| Symbol | Operátor | Popis | -| --- | --- | --- | -| `&` | `And` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy | -| | | `Or` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů | -| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. | -| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) | +| Symbol | Operátor | Popis | +| ------ | ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy | +| | | `Or` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů | +| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. | +| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) | #### Příklady @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. 
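The fulltext operators above (`&`, `:*`, etc.) are combined inside the single `text` argument. A small helper can assemble such a search string; the field name `blogSearch` is hypothetical, since the actual field comes from your schema's `@fulltext` definition:

```python
def fulltext_query(field: str, *terms: str, prefix: bool = False) -> str:
    """Build a fulltext search query combining terms with the `&` (And)
    operator, optionally matching the last term as a `:*` prefix."""
    text = " & ".join(terms)
    if prefix:
        text += ":*"  # prefix match requires at least 2 characters
    return f'{{\n  {field}(text: "{text}") {{\n    id\n  }}\n}}'

print(fulltext_query("blogSearch", "graph", "index", prefix=True))
```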
The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadata podgrafů -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije se poslední indexovaný blok. Pokud je blok uveden, musí se nacházet za počátečním blokem podgrafu a musí být menší nebo roven poslednímu Indevovaný bloku. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
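One common use of the `_meta` object described above is checking data freshness before trusting a response. The response below is mocked for illustration:

```python
META_QUERY = """
{
  _meta {
    block { number hash timestamp }
    deployment
    hasIndexingErrors
  }
}
"""

# Mocked example of what a `_meta` response might look like.
sample_response = {
    "data": {
        "_meta": {
            "block": {"number": 21000000, "hash": "0xabc", "timestamp": 1730000000},
            "deployment": "QmExampleDeploymentCid",  # IPFS CID of subgraph.yaml
            "hasIndexingErrors": False,
        }
    }
}

def is_fresh(response: dict, min_block: int) -> bool:
    """True if the Subgraph has no indexing errors and has indexed
    at least up to `min_block`."""
    meta = response["data"]["_meta"]
    return not meta["hasIndexingErrors"] and meta["block"]["number"] >= min_block
```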
@@ -427,6 +427,6 @@ Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije s - hash: hash bloku - číslo: číslo bloku -- timestamp: časové razítko bloku, pokud je k dispozici (v současné době je k dispozici pouze pro podgrafy indexující sítě EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/cs/subgraphs/querying/introduction.mdx b/website/src/pages/cs/subgraphs/querying/introduction.mdx index 19ecde83f4a8..6169df767051 100644 --- a/website/src/pages/cs/subgraphs/querying/introduction.mdx +++ b/website/src/pages/cs/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Přehled -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx index 0f5721e5cbcb..f2954c5593c0 100644 --- a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Správa klíčů API +title: Managing API keys --- ## Přehled -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Výše vynaložených GRT 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Zobrazení a správa názvů domén oprávněných používat váš klíč API - - Přiřazení podgrafů, na které se lze dotazovat pomocí klíče API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/cs/subgraphs/querying/python.mdx b/website/src/pages/cs/subgraphs/querying/python.mdx index 669e95c19183..51e3b966a2b5 100644 --- a/website/src/pages/cs/subgraphs/querying/python.mdx +++ b/website/src/pages/cs/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds je intuitivní knihovna Pythonu pro dotazování na podgrafy, vytvořená [Playgrounds](https://playgrounds.network/). Umožňuje přímo připojit data subgrafů k datovému prostředí Pythonu, což vám umožní používat knihovny jako [pandas](https://pandas.pydata.org/) k provádění analýzy dat! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds nabízí jednoduché Pythonic API pro vytváření dotazů GraphQL, automatizuje zdlouhavé pracovní postupy, jako je stránkování, a umožňuje pokročilým uživatelům řízené transformace schémat. 
@@ -17,24 +17,24 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Po instalaci můžete vyzkoušet podklady pomocí následujícího dotazu. Následující příklad uchopí podgraf pro protokol Aave v2 a dotazuje se na 5 největších trhů seřazených podle TVL (Total Value Locked), vybere jejich název a jejich TVL (v USD) a vrátí data jako pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Načtení podgrafu +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Sestavte dotaz +# Construct the query latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, - orderDirection="desc", + orderDirection='desc', first=5, ) -# Vrátit dotaz do datového rámce +# Return query to a dataframe sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, diff --git a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 7bef9e129e33..7792cb56d855 100644 --- a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: ID podgrafu vs. ID nasazení --- -Podgraf je identifikován ID podgrafu a každá verze podgrafu je identifikována ID nasazení. 
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## ID nasazení -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. 
However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Příklad koncového bodu, který používá ID nasazení: @@ -20,8 +20,8 @@ Příklad koncového bodu, který používá ID nasazení: ## ID podgrafu -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. 
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/cs/subgraphs/quick-start.mdx b/website/src/pages/cs/subgraphs/quick-start.mdx index 130f699763ce..7c52d4745a83 100644 --- a/website/src/pages/cs/subgraphs/quick-start.mdx +++ b/website/src/pages/cs/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Rychlé spuštění --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Nainstalujte Graph CLI @@ -37,13 +37,13 @@ Použitím [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). 
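The two endpoint shapes can be captured in a small URL builder. This is a sketch under the assumption that Deployment ID endpoints use a `deployments/id/` path segment, mirroring the `subgraphs/id/` pattern shown above; the IDs and key are placeholders:

```python
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def endpoint(api_key: str, some_id: str, by_deployment: bool = False) -> str:
    """Build a gateway query URL. A Deployment ID pins one specific
    version; a Subgraph ID follows the latest published version."""
    kind = "deployments" if by_deployment else "subgraphs"
    return f"{GATEWAY}/{api_key}/{kind}/id/{some_id}"

print(endpoint("your-api-key", "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"))
```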
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. 
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). 
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Jakmile je podgraf napsán, spusťte následující příkazy: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. 
Review your Subgraph

-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:

- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

![Subgraph logs](/img/subgraph-logs-image.png)

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Přidání signálu do podgrafu +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. 
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
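The querying step above can be sketched with a minimal GraphQL request. Note this is a hypothetical example — the `tokens` entity and its fields are placeholders, so substitute an entity defined in your own `schema.graphql` before sending the query to your Subgraph's Query URL:

```graphql
# Hypothetical query — POST this (as {"query": "..."}) to your Subgraph's Query URL.
{
  tokens(first: 5) {
    id
  }
}
```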
diff --git a/website/src/pages/cs/substreams/developing/dev-container.mdx b/website/src/pages/cs/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/cs/substreams/developing/dev-container.mdx +++ b/website/src/pages/cs/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. 
For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/cs/substreams/developing/sinks.mdx b/website/src/pages/cs/substreams/developing/sinks.mdx index f87e46464532..821ded42c0d0 100644 --- a/website/src/pages/cs/substreams/developing/sinks.mdx +++ b/website/src/pages/cs/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | 
[substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | 
----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx index 8c309bbcce31..98da6949aef4 100644 --- a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. 
Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/cs/substreams/developing/solana/transactions.mdx b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
index a50984178cd8..a5415dcfd8e4 100644
--- a/website/src/pages/cs/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi

## Step 3: Load the Data

-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.

### Podgrafy

1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
+2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
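Step 2 above pairs entity definitions in `schema.graphql` with handlers in `mappings.ts`. As a hedged sketch of what such an entity might look like — `MyTransfer` and its fields are placeholders, not part of the generated scaffold, so adjust them to the data your Substreams module actually extracts:

```graphql
# Hypothetical entity — rename the type and fields to match your module's output.
type MyTransfer @entity {
  id: ID!
  amount: BigInt!
  source: String!
  destination: String!
}
```

Each handler in `mappings.ts` then creates or updates these entities from the decoded Substreams output.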
### SQL diff --git a/website/src/pages/cs/substreams/introduction.mdx b/website/src/pages/cs/substreams/introduction.mdx index 57d215576f60..d68760ad1432 100644 --- a/website/src/pages/cs/substreams/introduction.mdx +++ b/website/src/pages/cs/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/cs/substreams/publishing.mdx b/website/src/pages/cs/substreams/publishing.mdx index 8e71c65c2eed..19415c7860d8 100644 --- a/website/src/pages/cs/substreams/publishing.mdx +++ b/website/src/pages/cs/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! 
You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/cs/supported-networks.mdx b/website/src/pages/cs/supported-networks.mdx index 733c5de18c69..863814948ba7 100644 --- a/website/src/pages/cs/supported-networks.mdx +++ b/website/src/pages/cs/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Podporované sítě hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
## Running Graph Node locally

If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.

-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
diff --git a/website/src/pages/cs/token-api/_meta-titles.json b/website/src/pages/cs/token-api/_meta-titles.json
new file mode 100644
index 000000000000..7ed31e0af95d
--- /dev/null
+++ b/website/src/pages/cs/token-api/_meta-titles.json
@@ -0,0 +1,6 @@
+{
+ "mcp": "MCP",
+ "evm": "EVM Endpoints",
+ "monitoring": "Monitoring Endpoints",
+ "faq": "FAQ"
+}
diff --git a/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..3386fd078059
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Balances by Wallet Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getBalancesEvmByAddress
+---
+
+The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
diff --git a/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV Prices by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
diff --git a/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/cs/token-api/faq.mdx b/website/src/pages/cs/token-api/faq.mdx new file mode 100644 index 000000000000..83196959be14 --- /dev/null +++ b/website/src/pages/cs/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Obecný + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
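The 403/401 checklist above can be sketched as a shell snippet. The token value is a placeholder — generate a real JWT from your API key on The Graph Market, and note the example endpoint path below assumes the `/balances/evm/{address}` shape described later in this FAQ:

```shell
# Placeholder — paste the JWT generated from your API key, not the API key itself.
ACCESS_TOKEN="<your-jwt>"

# The "Bearer " prefix is required; omitting it is a common cause of 401/403 errors.
AUTH_HEADER="Authorization: Bearer ${ACCESS_TOKEN}"
echo "$AUTH_HEADER"

# Example call (uncomment once ACCESS_TOKEN holds a real JWT):
# curl -s -H "$AUTH_HEADER" -H "Accept: application/json" \
#   "https://token-api.thegraph.com/balances/evm/<wallet-address>"
```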
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/cs/token-api/mcp/claude.mdx b/website/src/pages/cs/token-api/mcp/claude.mdx
new file mode 100644
index 000000000000..aabd9c69d69a
--- /dev/null
+++ b/website/src/pages/cs/token-api/mcp/claude.mdx
@@ -0,0 +1,58 @@
+---
+title: Using Claude Desktop to Access the Token API via MCP
+sidebarTitle: Claude Desktop
+---
+
+## Prerequisites
+
+- [Claude Desktop](https://claude.ai/download) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png)
+
+## Konfigurace
+
+Create or edit your `claude_desktop_config.json` file.
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `~/.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/cs/token-api/mcp/cline.mdx b/website/src/pages/cs/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..2e8f478f68c1 --- /dev/null +++ b/website/src/pages/cs/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Konfigurace + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/cs/token-api/mcp/cursor.mdx b/website/src/pages/cs/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..fac3a1a1af73 --- /dev/null +++ b/website/src/pages/cs/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Konfigurace + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
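The Token API FAQ earlier notes that responses wrap results in a top-level `data` array and return token amounts as strings to avoid precision loss. Here is a minimal JavaScript sketch of consuming such a response; the response shape and the `symbol`, `amount`, and `decimals` field names are illustrative assumptions based on that description, not the exact API schema:

```javascript
// Hypothetical response shaped per the FAQ: results wrapped in a
// top-level `data` array, amounts as strings to avoid precision loss.
const response = {
  data: [
    { symbol: 'GRT', amount: '1234500000000000000000', decimals: 18 },
    { symbol: 'USDC', amount: '250000000', decimals: 6 },
  ],
}

// Do the scaling in BigInt and only format at the end, so values beyond
// Number.MAX_SAFE_INTEGER are never coerced to floating point.
function formatAmount(amount, decimals) {
  const base = 10n ** BigInt(decimals)
  const value = BigInt(amount)
  const whole = value / base
  const frac = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '')
  return frac ? `${whole}.${frac}` : whole.toString()
}

// Always index into `data`, even when expecting a single item.
const balances = response.data.map((b) => `${b.symbol}: ${formatAmount(b.amount, b.decimals)}`)
console.log(balances) // [ 'GRT: 1234.5', 'USDC: 250' ]
```

For arithmetic (summing balances, comparing amounts), keep working in `BigInt` and only format for display.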
diff --git a/website/src/pages/cs/token-api/monitoring/get-health.mdx b/website/src/pages/cs/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/cs/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/cs/token-api/monitoring/get-networks.mdx b/website/src/pages/cs/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/cs/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/cs/token-api/monitoring/get-version.mdx b/website/src/pages/cs/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/cs/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/cs/token-api/quick-start.mdx b/website/src/pages/cs/token-api/quick-start.mdx new file mode 100644 index 000000000000..4083154b5a8b --- /dev/null +++ b/website/src/pages/cs/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Rychlé spuštění +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer <jwt-token>`. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer <jwt-token>', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `<jwt-token>` with the JWT token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command.
+ +```bash +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer <jwt-token>' +``` + +Make sure to replace `<jwt-token>` with the JWT token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/de/about.mdx b/website/src/pages/de/about.mdx index 61dbccdd5c84..a5358b063119 100644 --- a/website/src/pages/de/about.mdx +++ b/website/src/pages/de/about.mdx @@ -30,25 +30,25 @@ Blockchain-Eigenschaften wie Endgültigkeit, Umstrukturierung der Kette und nich ## The Graph bietet eine Lösung -The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden. +The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das die Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden. Heute gibt es ein dezentralisiertes Protokoll, das durch die Open-Source-Implementierung von [Graph Node](https://github.com/graphprotocol/graph-node) unterstützt wird und diesen Prozess ermöglicht. ### Die Funktionsweise von The Graph -Die Indizierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach.
The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indiziert. Subgraphs sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können. +Die Indexierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indizieren kann. Subgraphen sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können. #### Besonderheiten -- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph Manifest innerhalb des Subgraphen bekannt sind. +- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph-Manifest innerhalb des Subgraphen bekannt sind. -- Die Beschreibung des Subgraphs beschreibt die Smart Contracts, die für einen Subgraph von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren sollte, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird. +- Die Subgraph-Beschreibung beschreibt die Smart Contracts, die für einen Subgraphen von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren soll, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird. -- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraph Manifest schreiben. +- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraphenmanifest schreiben. -- Nachdem Sie das `Subgraph Manifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung der Daten für diesen Subgraphen zu beginnen. 
+- Nachdem Sie das `Subgraphenmanifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung von Daten für diesen Subgraphen zu beginnen. -Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph Manifest mit Ethereum-Transaktionen bereitgestellt worden ist. +Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph-Manifest mit Ethereum-Transaktionen bereitgestellt wurde. ![Eine graphische Darstellung, die erklärt, wie The Graph Graph Node verwendet, um Abfragen an Datenkonsumenten zu stellen](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Der Ablauf ist wie folgt: 1. Eine Dapp fügt Ethereum durch eine Transaktion auf einem Smart Contract Daten hinzu. 2. Der Smart Contract gibt während der Verarbeitung der Transaktion ein oder mehrere Ereignisse aus. -3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraphen. -4. Graph Node findet Ethereum-Ereignisse für Ihren Subgraphen in diesen Blöcken und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert. +3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraph. +4. Graph Node findet in diesen Blöcken Ethereum-Ereignisse für Ihren Subgraph und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert. 5. Die Dapp fragt den Graph Node über den [GraphQL-Endpunkt](https://graphql.org/learn/) des Knotens nach Daten ab, die von der Blockchain indiziert wurden. 
Der Graph Node wiederum übersetzt die GraphQL-Abfragen in Abfragen für seinen zugrundeliegenden Datenspeicher, um diese Daten abzurufen, wobei er die Indexierungsfunktionen des Speichers nutzt. Die Dapp zeigt diese Daten in einer reichhaltigen Benutzeroberfläche für die Endnutzer an, mit der diese dann neue Transaktionen auf Ethereum durchführen können. Der Zyklus wiederholt sich. ## Nächste Schritte -In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage eingehender behandelt. +In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage näher erläutert. -Bevor Sie Ihren eigenen Subgraphen schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits vorhandenen Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL- Playground, mit der Sie seine Daten abfragen können. +Bevor Sie Ihren eigenen Subgraph schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits eingesetzten Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL-Spielwiese, mit der Sie seine Daten abfragen können.
diff --git a/website/src/pages/de/archived/_meta-titles.json b/website/src/pages/de/archived/_meta-titles.json index 9501304a4305..68385040140c 100644 --- a/website/src/pages/de/archived/_meta-titles.json +++ b/website/src/pages/de/archived/_meta-titles.json @@ -1,3 +1,3 @@ { - "arbitrum": "Scaling with Arbitrum" + "arbitrum": "Skalierung mit Arbitrum" } diff --git a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx index 54809f94fd9c..6fa6fbe5faaf 100644 --- a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer nun von - Von Ethereum übernommene Sicherheit -Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter bereitstellen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Kosten zu kostspielig waren, um sie häufig durchzuführen. +Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter einsetzen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Gaskosten zu kostspielig waren, um sie häufig durchzuführen.
Die The Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. @@ -39,7 +39,7 @@ Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Sc ![Dropdown-Schalter zum Aktivieren von Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Was muss ich als Entwickler von Subgraphen, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun? +## Was muss ich als Subgraph-Entwickler, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun? Netzwerk-Teilnehmer müssen zu Arbitrum wechseln, um weiterhin am The Graph Network teilnehmen zu können. Weitere Unterstützung finden Sie im [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/). @@ -51,9 +51,9 @@ Alle Smart Contracts wurden gründlich [audited] (https://github.com/graphprotoc Alles wurde gründlich getestet, und es gibt einen Notfallplan, um einen sicheren und nahtlosen Übergang zu gewährleisten. Einzelheiten finden Sie [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Funktionieren die vorhandenen Subgraphen auf Ethereum? +## Funktionieren die bestehenden Subgraphen auf Ethereum? -Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren. +Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren. ## Verfügt GRT über einen neuen Smart Contract, der auf Arbitrum eingesetzt wird? 
@@ -77,4 +77,4 @@ Die Brücke wurde [umfangreich geprüft] (https://code4rena.com/contests/2022-10 Das Hinzufügen von GRT zu Ihrem Arbitrum-Abrechnungssaldo kann mit nur einem Klick in [Subgraph Studio] (https://thegraph.com/studio/) erfolgen. Sie können Ihr GRT ganz einfach mit Arbitrum verbinden und Ihre API-Schlüssel in einer einzigen Transaktion füllen. -Visit the [Billing page](/subgraphs/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. +Besuchen Sie die [Abrechnungsseite](/subgraphs/billing/) für genauere Anweisungen zum Hinzufügen, Abheben oder Erwerben von GRT. diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx index 8abcda305f8a..c430e56dd829 100644 --- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,19 +24,19 @@ Die Ausnahme sind Smart-Contract-Wallets wie Multisigs: Das sind Smart Contracts Die L2-Transfer-Tools verwenden den nativen Mechanismus von Arbitrum, um Nachrichten von L1 nach L2 zu senden. Dieser Mechanismus wird "retryable ticket" genannt und wird von allen nativen Token-Bridges verwendet, einschließlich der Arbitrum GRT-Bridge. Sie können mehr über wiederholbare Tickets in den [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) lesen. -Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. Da jedoch die Gaspreise in der Zeit, bis das Zertifikat zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. 
Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Zertifikat für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut "einzulösen" (was eine Geldbörse mit etwas ETH erfordert, die mit Arbitrum verbunden ist). +Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. Da jedoch die Gaspreise in der Zeit, bis das Ticket zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Ticket für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut „einzulösen“ (was eine Wallet mit etwas ETH erfordert, die mit Arbitrum verbunden ist). -Dies ist der so genannte "Bestätigungsschritt" in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meist erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Pfahl, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Entwickler des Graph-Kerns haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. 
Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen. +Dies ist der so genannte „Bestätigungsschritt“ in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meistens erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Anteil, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Core-Entwickler von The Graph haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen. ### Ich habe mit der Übertragung meiner Delegation/des Einsatzes/der Kuration begonnen und bin mir nicht sicher, ob sie an L2 weitergeleitet wurde. Wie kann ich bestätigen, dass sie korrekt übertragen wurde? -If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One.
+Wenn Sie in Ihrem Profil kein Banner sehen, das Sie auffordert, den Transfer abzuschließen, dann ist die Transaktion wahrscheinlich sicher auf L2 angekommen und es sind keine weiteren Maßnahmen erforderlich. Im Zweifelsfall können Sie überprüfen, ob der Explorer Ihre Delegation, Ihren Einsatz oder Ihre Kuration auf Arbitrum One anzeigt. -If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it. Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire. +Wenn Sie den L1-Transaktionshash haben (den Sie durch einen Blick auf die letzten Transaktionen in Ihrer Wallet finden können), können Sie auch überprüfen, ob das „retryable ticket“, das die Nachricht nach L2 transportiert hat, hier eingelöst wurde: https://retryable-dashboard.arbitrum.io/ - wenn die automatische Einlösung fehlgeschlagen ist, können Sie Ihre Wallet auch dort verbinden und es einlösen. Seien Sie versichert, dass die Kernentwickler auch Nachrichten überwachen, die stecken bleiben, und versuchen werden, sie einzulösen, bevor sie ablaufen. ## Subgraph-Transfer -### Wie übertrage ich meinen Subgraphen +### Wie übertrage ich meinen Subgraphen? @@ -48,15 +48,15 @@ Um Ihren Subgraphen zu übertragen, müssen Sie die folgenden Schritte ausführe 3. Bestätigung der Übertragung von Subgraphen auf Arbitrum\* -4. Veröffentlichung des Subgraphen auf Arbitrum beenden +4. Veröffentlichung von Subgraph auf Arbitrum beenden 5. Abfrage-URL aktualisieren (empfohlen) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost.
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\* Beachten Sie, dass Sie die Übertragung innerhalb von 7 Tagen bestätigen müssen, da sonst Ihr Subgraph verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol). ### Von wo aus soll ich meine Übertragung veranlassen? -Sie können die Übertragung vom [Subgraph Studio] (https://thegraph.com/studio/), vom [Explorer] (https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche "Subgraph übertragen" auf der Detailseite des Subgraphen, um die Übertragung zu starten. +Sie können die Übertragung vom [Subgraph Studio](https://thegraph.com/studio/), vom [Explorer](https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche „Subgraph übertragen“ auf der Detailseite des Subgraphen, um die Übertragung zu starten. ### Wie lange muss ich warten, bis mein Subgraph übertragen wird? @@ -64,37 +64,37 @@ Die Übertragungszeit beträgt etwa 20 Minuten. Die Arbitrum-Brücke arbeitet im ### Wird mein Subgraph noch auffindbar sein, nachdem ich ihn auf L2 übertragen habe? -Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum.
Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt.
+Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum. Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt.

-### Muss mein Subgraph ( Teilgraph ) veröffentlicht werden, um ihn zu übertragen?
+### Muss mein Subgraph veröffentlicht werden, um ihn zu übertragen?

-Um das Subgraph-Transfer-Tool nutzen zu können, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger. Wenn Sie einen veröffentlichten Subgraphen übertragen wollen, aber das Konto des Eigentümers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie ein "auto-migrating" Signal wählen.
+Um die Vorteile des Subgraph-Transfer-Tools zu nutzen, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph noch nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger. Wenn Sie einen veröffentlichten Subgraph transferieren wollen, aber das Konto des Besitzers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie das „auto-migrating“ Signal wählen.

-### Was passiert mit der Ethereum-Mainnet-Version meines Subgraphen, nachdem ich zu Arbitrum übergehe?
+### Was passiert mit der Ethereum-Hauptnetz-Version meines Subgraphen, nachdem ich zu Arbitrum gewechselt bin?

-Nach der Übertragung Ihres Subgraphen auf Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann.
+Nach dem Transfer Ihres Subgraphen zu Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann.

### Muss ich nach der Übertragung auch auf Arbitrum neu veröffentlichen?

Nach Ablauf des 20-minütigen Übertragungsfensters müssen Sie die Übertragung mit einer Transaktion in der Benutzeroberfläche bestätigen, um die Übertragung abzuschließen. Ihr L1-Endpunkt wird während des Übertragungsfensters und einer Schonfrist danach weiterhin unterstützt. Es wird empfohlen, dass Sie Ihren Endpunkt aktualisieren, wenn es Ihnen passt.

-### Will my endpoint experience downtime while re-publishing?
+### Kommt es während der Neuveröffentlichung zu Ausfallzeiten an meinem Endpunkt?

-It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2.
+Es ist unwahrscheinlich, aber möglich, dass es zu einer kurzen Ausfallzeit kommt, je nachdem, welche Indexer den Subgraphen auf L1 unterstützen und ob sie ihn weiter indizieren, bis der Subgraph auf L2 vollständig unterstützt wird.

### Ist die Veröffentlichung und Versionierung auf L2 die gleiche wie im Ethereum-Mainnet?

-Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+Ja. Wählen Sie Arbitrum One als Ihr veröffentlichtes Netzwerk, wenn Sie in Subgraph Studio veröffentlichen. Im Studio wird der neueste Endpunkt verfügbar sein, der auf die letzte aktualisierte Version des Subgraphen verweist.

-### Bewegt sich die Kuration meines Untergraphen ( Subgraphen ) mit meinem Untergraphen?
+### Wird die Kuration meines Subgraphen mit meinem Subgraphen umziehen?

Wenn Sie die automatische Signalmigration gewählt haben, werden 100 % Ihrer eigenen Kuration mit Ihrem Subgraphen zu Arbitrum One übertragen. Alle Kurationssignale des Subgraphen werden zum Zeitpunkt des Transfers in GRT umgewandelt, und die GRT, die Ihrem Kurationssignal entsprechen, werden zum Prägen von Signalen auf dem L2-Subgraphen verwendet.

-Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Untergraphen zu prägen.
+Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Subgraphen zu prägen.

### Kann ich meinen Subgraph nach dem Transfer zurück ins Ethereum Mainnet verschieben?

-Nach der Übertragung wird Ihre Ethereum-Mainnet-Version dieses Untergraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie Ihre Version neu bereitstellen und zurück zum Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück ins Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden.
+Nach der Übertragung wird Ihre Ethereum Mainnet-Version dieses Subgraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie den Subgraph erneut bereitstellen und im Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück zum Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden.

### Warum brauche ich überbrückte ETH, um meine Überweisung abzuschließen?

@@ -112,11 +112,11 @@ Um Ihre Delegation zu übertragen, müssen Sie die folgenden Schritte ausführen

2. 20 Minuten auf Bestätigung warten
3. Bestätigung der Delegationsübertragung auf Arbitrum

-\*\*\*\*You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\*Sie müssen die Transaktion bestätigen, um die Übertragung der Delegation auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da die Delegation sonst verloren gehen kann. In den meisten Fällen läuft dieser Schritt automatisch ab, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum zu einer Gaspreiserhöhung kommt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).

### Was passiert mit meinen Rewards, wenn ich einen Transfer mit einer offenen Zuteilung im Ethereum Mainnet initiiere?
-If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer.
+Wenn der Indexer, an den Sie delegieren, noch auf L1 arbeitet, verlieren Sie beim Wechsel zu Arbitrum alle Delegationsbelohnungen aus offenen Zuteilungen im Ethereum Mainnet. Das bedeutet, dass Sie höchstens die Rewards aus dem letzten 28-Tage-Zeitraum verlieren. Wenn Sie den Transfer direkt nach der Schließung der Zuteilungen durch den Indexer durchführen, können Sie sicherstellen, dass der Betrag so gering wie möglich ist. Wenn Sie einen Kommunikationskanal mit Ihrem Indexer haben, sollten Sie mit ihm über den besten Zeitpunkt für den Transfer sprechen.

### Was passiert, wenn der Indexer, an den ich derzeit delegiere, nicht auf Arbitrum One ist?

@@ -124,7 +124,7 @@ Das L2-Transfer-Tool wird nur aktiviert, wenn der Indexer, den Sie delegiert hab

### Haben Delegatoren die Möglichkeit, an einen anderen Indexierer zu delegieren?

-If you wish to delegate to another Indexer, you can transfer to the same Indexer on Arbitrum, then undelegate and wait for the thawing period. After this, you can select another active Indexer to delegate to.
+Wenn Sie an einen anderen Indexer delegieren möchten, können Sie auf denselben Indexer auf Arbitrum übertragen, dann die Delegation aufheben und die Auftau-Phase abwarten. Danach können Sie einen anderen aktiven Indexer auswählen, an den Sie delegieren möchten.

### Was ist, wenn ich den Indexer, an den ich delegiere, auf L2 nicht finden kann?

@@ -144,53 +144,53 @@ Es wird davon ausgegangen, dass die gesamte Netzbeteiligung in Zukunft zu Arbitr

### Wie lange dauert es, bis die Übertragung meiner Delegation auf L2 abgeschlossen ist?

-A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+Für die Übertragung von Delegationen ist eine 20-minütige Bestätigung erforderlich. Bitte beachten Sie, dass Sie nach Ablauf der 20-Minuten-Frist innerhalb von 7 Tagen zurückkommen und Schritt 3 des Übertragungsverfahrens abschließen müssen. Wenn Sie dies versäumen, kann Ihre Delegation verloren gehen. Beachten Sie bitte, dass das Übertragungstool diesen Schritt in den meisten Fällen automatisch für Sie ausführt. Falls der automatische Versuch fehlschlägt, müssen Sie ihn manuell ausführen. Sollten während dieses Vorgangs Probleme auftreten, sind wir für Sie da: Kontaktieren Sie uns unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).

### Kann ich meine Delegation übertragen, wenn ich eine GRT Vesting Contract/Token Lock Wallet verwende?

Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, also müssen Sie sie vorher einzahlen. Wenn Ihr Berechtigungsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können die Delegation dann nur auf diesen L2-Berechtigungsvertrag übertragen.
Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess leiten, wenn Sie sich über die Vesting Lock Wallet mit dem Explorer verbunden haben.

-### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet?
+### Erlaubt mein Arbitrum-Vesting-Vertrag die Freigabe von GRT genau wie im Mainnet?

-No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers.
+Nein, der Vesting-Vertrag, der auf Arbitrum erstellt wird, erlaubt keine Freigabe von GRT bis zum Ende des Vesting-Zeitraums, d. h. bis Ihr Vesting vollständig abgeschlossen ist. Damit sollen Doppelausgaben verhindert werden, da es sonst möglich wäre, die gleichen Beträge auf beiden Ebenen freizugeben.

-If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge.
+Wenn Sie GRT aus dem Vesting-Vertrag freigeben möchten, können Sie sie mit dem Explorer zurück in den L1-Vesting-Vertrag übertragen: In Ihrem Arbitrum One-Profil wird ein Banner angezeigt, das besagt, dass Sie GRT zurück in den Mainnet-Vesting-Vertrag übertragen können. Dies erfordert eine Transaktion auf Arbitrum One, eine Wartezeit von 7 Tagen und eine abschließende Transaktion auf dem Mainnet, da es denselben nativen Überbrückungsmechanismus der GRT-Bridge verwendet.

### Fällt eine Delegationssteuer an?

-Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexierer delegiert, ohne dass eine Delegationssteuer erhoben wird.
+Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexer delegiert, ohne dass eine Delegationssteuer erhoben wird.

-### Will my unrealized rewards be transferred when I transfer my delegation?
+### Werden meine nicht realisierten Rewards übertragen, wenn ich meine Delegation übertrage?

-​Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). If you've been delegating for a while, this is likely only a small fraction of rewards.
+Ja! Die einzigen Rewards, die nicht übertragen werden können, sind die für offene Zuteilungen, da diese erst existieren, sobald der Indexer die Zuteilungen schließt (normalerweise alle 28 Tage). Wenn Sie schon eine Weile delegieren, ist dies wahrscheinlich nur ein kleiner Teil der Rewards.

-At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2. ​
+Auf der Smart-Contract-Ebene sind nicht realisierte Rewards bereits Teil Ihres Delegationsguthabens, so dass sie übertragen werden, wenn Sie Ihre Delegation auf L2 übertragen.

-### Is moving delegations to L2 mandatory? Is there a deadline?
+### Ist die Verlegung von Delegationen nach L2 obligatorisch? Gibt es eine Frist?

-​Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​
+Die Verlagerung der Delegation nach L2 ist nicht zwingend erforderlich, aber die Rewards für die Indexierung steigen auf L2 entsprechend dem in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193) beschriebenen Zeitplan. Wenn das Council die Erhöhungen weiterhin genehmigt, werden schließlich alle Rewards auf L2 verteilt und es wird keine Indexierungs-Rewards für Indexer und Delegatoren auf L1 geben.

-### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1?
+### Wenn ich an einen Indexer delegiere, der bereits Anteile auf L2 übertragen hat, erhalte ich dann keine Rewards mehr auf L1?

-​Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators. Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2.
+Viele Indexer übertragen ihre Anteile nach und nach, so dass Indexer auf L1 immer noch Rewards und Gebühren auf L1 verdienen, die dann mit den Delegatoren geteilt werden. Sobald ein Indexer seinen gesamten Anteil übertragen hat, wird er seine Tätigkeit auf L1 einstellen, so dass die Delegatoren keine Rewards mehr erhalten, es sei denn, sie wechseln zu L2.

-Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​
+Wenn das Council die Erhöhungen der Indexierungs-Rewards in L2 weiterhin genehmigt, werden schließlich alle Rewards auf L2 verteilt und es wird keine Indexierungs-Rewards für Indexer und Delegatoren auf L1 geben.

-### I don't see a button to transfer my delegation. Why is that?
+### Ich sehe keine Schaltfläche zum Übertragen meiner Delegation. Woran liegt das?

-​Your Indexer has probably not used the L2 transfer tools to transfer stake yet.
+Ihr Indexer hat wahrscheinlich noch nicht die L2-Transfer-Tools zur Übertragung von Anteilen verwendet.

-If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address. ​
+Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, die L2-Transfer-Tools zu verwenden, damit die Delegatoren Delegationen an ihre L2-Indexer-Adresse übertragen können.

-### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that?
+### Mein Indexer ist auch auf Arbitrum, aber ich sehe in meinem Profil keine Schaltfläche zum Übertragen der Delegation. Warum ist das so?

-​It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake. The L1 smart contracts will therefore not know about the Indexer's L2 address. If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address. ​
+Es ist möglich, dass der Indexer Operationen auf L2 eingerichtet hat, aber nicht die L2-Transfer-Tools zur Übertragung von Anteilen verwendet hat. Die L1-Smart Contracts kennen daher die L2-Adresse des Indexers nicht. Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, das Transfer-Tool zu verwenden, damit Delegatoren Delegationen an seine L2-Indexer-Adresse übertragen können.

-### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet?
+### Kann ich meine Delegation auf L2 übertragen, wenn ich den Prozess der Undelegation eingeleitet, sie aber noch nicht abgehoben habe?

-​No. If your delegation is thawing, you have to wait the 28 days and withdraw it.
+Nein. Wenn Ihre Delegation auftaut, müssen Sie die 28 Tage abwarten und sie zurückziehen.

-The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2.
+Die Token, deren Delegation gerade aufgehoben wird, sind „gesperrt“ und können daher nicht auf L2 übertragen werden.

## Kurationssignal

@@ -206,9 +206,9 @@ Um Ihre Kuration zu übertragen, müssen Sie die folgenden Schritte ausführen:

\* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden.

-### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 umgezogen ist?
+### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 verschoben wurde?

-Auf der Seite mit den Details der Subgraphen werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Diese Information finden Sie auch auf der Seite mit den Details zu jedem verschobenen Subgraphen.
+Wenn Sie die Detailseite des Subgraphen aufrufen, werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Sie finden diese Information auch auf der Seite mit den Details zu jedem verschobenen Subgraphen.

### Was ist, wenn ich meine Kuration nicht auf L2 verschieben möchte?

@@ -226,7 +226,7 @@ Zurzeit gibt es keine Option für Massenübertragungen.

### Wie übertrage ich meine Anteile auf Arbitrum?

-> Disclaimer: If you are currently unstaking any portion of your GRT on your Indexer, you will not be able to use L2 Transfer Tools.
+> Haftungsausschluss: Wenn Sie derzeit den Einsatz eines Teils Ihres GRT bei Ihrem Indexer aufheben (Unstaking), können Sie die L2 Transfer Tools nicht verwenden.

@@ -238,7 +238,7 @@ Um Ihren Einsatz zu übertragen, müssen Sie die folgenden Schritte ausführen:

3. Bestätigen Sie die Übertragung von Anteilen auf Arbitrum

-\*Note that you must confirm the transfer within 7 days otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*Beachten Sie, dass Sie den Transfer innerhalb von 7 Tagen bestätigen müssen, sonst kann Ihr Einsatz verloren gehen. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die Ihnen helfen: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).

### Wird mein gesamter Einsatz übertragen?

@@ -276,13 +276,13 @@ Nein, damit Delegatoren ihre delegierten GRT an Arbitrum übertragen können, mu

Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, so dass Sie sie vorher einzahlen müssen. Wenn Ihr Freizügigkeitsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können den Anteil nur auf diesen L2-Freizügigkeitsvertrag übertragen. Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess führen, wenn Sie sich mit dem Explorer über die Vesting Lock Wallet verbunden haben.

-### I already have stake on L2. Do I still need to send 100k GRT when I use the transfer tools the first time?
+### Ich habe bereits einen Einsatz auf L2. Muss ich immer noch 100k GRT senden, wenn ich die Transfer-Tools zum ersten Mal benutze?

-​Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time. ​
+Ja. Die L1-Smart-Contracts kennen Ihren L2-Einsatz nicht und verlangen daher, dass Sie beim ersten Transfer mindestens 100k GRT übertragen.

-### Can I transfer my stake to L2 if I am in the process of unstaking GRT?
+### Kann ich meinen Anteil auf L2 übertragen, wenn ich gerade dabei bin, GRT zu entstaken?

-​No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being staked are "locked" and will prevent any transfers or stake to L2.
+Nein. Wenn ein Teil Ihres Einsatzes auftaut, müssen Sie die 28 Tage warten und ihn abheben, bevor Sie den Einsatz übertragen können. Die eingesetzten Token sind „gesperrt“ und verhindern jede Übertragung und jeden Einsatz auf L2.

## Unverfallbare Vertragsübertragung

@@ -377,25 +377,25 @@ Um Ihren Vesting-Vertrag auf L2 zu übertragen, senden Sie ein eventuelles GRT-G

\* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden.

-\*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\*Sie müssen Ihre Transaktion bestätigen, um die Übertragung des Guthabens auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da sonst das Guthaben verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum eine Gaspreisspitze gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).

-### My vesting contract shows 0 GRT so I cannot transfer it, why is this and how do I fix it?
+### Mein Vesting-Vertrag zeigt 0 GRT an, so dass ich ihn nicht übertragen kann. Warum ist das so und wie kann ich das ändern?

-​To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT.
+Um Ihren L2-Vesting-Vertrag zu initialisieren, müssen Sie einen GRT-Betrag, der nicht Null ist, auf L2 übertragen. Dies ist für die Arbitrum GRT-Brücke erforderlich, die von den L2-Transfer-Tools verwendet wird. Die GRT müssen aus dem Guthaben des Vesting-Vertrags stammen, d. h. eingesetzte (gestakte) oder delegierte GRT zählen nicht dazu.

-If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange). ​
+Wenn Sie alle Ihre GRT aus dem Vesting-Vertrag eingesetzt oder delegiert haben, können Sie manuell einen kleinen Betrag wie 1 GRT an die Adresse des Vesting-Vertrags von einem anderen Ort aus senden (z. B. von einer anderen Wallet oder einer Börse).

-### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2, what do I do?
+### Ich verwende einen Vesting-Vertrag zum Staken oder Delegieren, aber ich sehe keine Schaltfläche, um meinen Anteil oder meine Delegation auf L2 zu übertragen. Was soll ich tun?

-​If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there.
+Wenn Ihr Vesting-Vertrag noch nicht abgeschlossen ist, müssen Sie zunächst einen L2-Vesting-Vertrag erstellen, der Ihren Anteil oder Ihre Delegation auf L2 erhält. Dieser Vesting-Vertrag erlaubt keine Freigabe von Token in L2 bis zum Ende des Vesting-Zeitraums, aber er erlaubt Ihnen, GRT zurück zum L1-Vesting-Vertrag zu übertragen, um dort freigegeben zu werden.

-When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile. ​
+Wenn Sie mit dem Vesting-Vertrag im Explorer verbunden sind, sollten Sie eine Schaltfläche zur Initialisierung Ihres L2-Vesting-Vertrags sehen. Befolgen Sie zunächst diesen Prozess; danach sehen Sie in Ihrem Profil die Schaltflächen zur Übertragung Ihres Anteils oder Ihrer Delegation.

-### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically?
+### Wenn ich meinen L2-Vesting-Vertrag initialisiere, wird dann auch meine Delegation automatisch auf L2 übertragen?

-​No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately.
+Nein, die Initialisierung Ihres L2-Vesting-Vertrags ist eine Voraussetzung für die Übertragung von Anteilen oder Delegationen aus dem Vesting-Vertrag, aber Sie müssen diese trotzdem separat übertragen.

-You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract.
+Nachdem Sie Ihren L2-Vesting-Vertrag initialisiert haben, erscheint in Ihrem Profil ein Banner, das Sie auffordert, Ihren Anteil oder Ihre Delegation zu übertragen.
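Die oben beschriebene Reihenfolge (erst den L2-Vesting-Vertrag initialisieren, dann Anteil und Delegation jeweils separat übertragen) lässt sich als kleine Zustandsprüfung skizzieren. Rein illustrativ: Die Feldnamen sind frei gewählt und entsprechen keinem tatsächlichen Vertrags- oder API-Layout.

```typescript
// Illustrative Skizze der Transfer-Reihenfolge für Vesting-Verträge:
// Die Initialisierung des L2-Vesting-Vertrags ist Voraussetzung; Anteil
// (Stake) und Delegation müssen danach separat übertragen werden.
interface VestingTransferState {
  l2VestingInitialized: boolean;  // L2-Vesting-Vertrag bereits initialisiert?
  stakeTransferred: boolean;      // Anteil bereits auf L2 übertragen?
  delegationTransferred: boolean; // Delegation bereits auf L2 übertragen?
}

function nextStep(s: VestingTransferState): string {
  if (!s.l2VestingInitialized) return "L2-Vesting-Vertrag initialisieren";
  if (!s.stakeTransferred) return "Anteil separat übertragen";
  if (!s.delegationTransferred) return "Delegation separat übertragen";
  return "fertig";
}
```

Die eigentlichen Transaktionen führen Sie weiterhin über die Explorer-Oberfläche aus; die Skizze zeigt nur, dass kein Schritt übersprungen werden kann.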
### Kann ich meinen Vertrag mit unverfallbarer Anwartschaft zurück nach L1 verschieben? diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx index 6a5b13da53d7..1be2386aedba 100644 --- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -1,60 +1,60 @@ --- -title: L2 Transfer Tools Guide +title: L2 Transfer Tools Anleitung --- The Graph hat den Wechsel zu L2 auf Arbitrum One leicht gemacht. Für jeden Protokollteilnehmer gibt es eine Reihe von L2-Transfer-Tools, um den Transfer zu L2 für alle Netzwerkteilnehmer nahtlos zu gestalten. Je nachdem, was Sie übertragen möchten, müssen Sie eine bestimmte Anzahl von Schritten befolgen. Einige häufig gestellte Fragen zu diesen Tools werden in den [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/) beantwortet. Die FAQs enthalten ausführliche Erklärungen zur Verwendung der Tools, zu ihrer Funktionsweise und zu den Dingen, die bei ihrer Verwendung zu beachten sind. -## So übertragen Sie Ihren Subgraphen auf Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Vorteile der Übertragung Ihrer Untergraphen +## Benefits of transferring your Subgraphs The Graph's Community und die Kernentwickler haben im letzten Jahr den Wechsel zu Arbitrum [vorbereitet] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). Arbitrum, eine Layer-2- oder "L2"-Blockchain, erbt die Sicherheit von Ethereum, bietet aber drastisch niedrigere Gasgebühren. -Wenn Sie Ihren Subgraphen auf The Graph Network veröffentlichen oder aktualisieren, interagieren Sie mit intelligenten Verträgen auf dem Protokoll, und dies erfordert die Bezahlung von Gas mit ETH. Indem Sie Ihre Subgraphen zu Arbitrum verschieben, werden alle zukünftigen Aktualisierungen Ihres Subgraphen viel niedrigere Gasgebühren erfordern. 
Die niedrigeren Gebühren und die Tatsache, dass die Kurationsbindungskurven auf L2 flach sind, machen es auch für andere Kuratoren einfacher, auf Ihrem Subgraphen zu kuratieren, was die Belohnungen für Indexer auf Ihrem Subgraphen erhöht. Diese kostengünstigere Umgebung macht es auch für Indexer preiswerter, Ihren Subgraphen zu indizieren und zu bedienen. Die Belohnungen für die Indexierung werden in den kommenden Monaten auf Arbitrum steigen und auf dem Ethereum-Mainnet sinken, so dass immer mehr Indexer ihren Einsatz transferieren und ihre Operationen auf L2 einrichten werden. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Verstehen, was mit dem Signal, Ihrem L1-Subgraphen und den Abfrage-URLs geschieht +## Understanding what happens with signal, your L1 Subgraph and query URLs -Die Übertragung eines Subgraphen nach Arbitrum verwendet die Arbitrum GRT-Brücke, die wiederum die native Arbitrum-Brücke verwendet, um den Subgraphen nach L2 zu senden. Der "Transfer" löscht den Subgraphen im Mainnet und sendet die Informationen, um den Subgraphen auf L2 mit Hilfe der Brücke neu zu erstellen. Sie enthält auch die vom Eigentümer des Subgraphen signalisierte GRT, die größer als Null sein muss, damit die Brücke die Übertragung akzeptiert. 
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Wenn Sie sich für die Übertragung des Untergraphen entscheiden, wird das gesamte Kurationssignal des Untergraphen in GRT umgewandelt. Dies ist gleichbedeutend mit dem "Verwerfen" des Subgraphen im Mainnet. Die GRT, die Ihrer Kuration entsprechen, werden zusammen mit dem Subgraphen an L2 gesendet, wo sie für die Prägung von Signalen in Ihrem Namen verwendet werden. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls an L2 übertragen, um das Signal auf demselben Untergraphen zu prägen. Wenn ein Subgraph-Eigentümer seinen Subgraph nicht an L2 überträgt und ihn manuell über einen Vertragsaufruf abmeldet, werden die Kuratoren benachrichtigt und können ihre Kuration zurückziehen. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Sobald der Subgraph übertragen wurde, erhalten die Indexer keine Belohnungen mehr für die Indizierung des Subgraphen, da die gesamte Kuration in GRT umgewandelt wird. 
Es wird jedoch Indexer geben, die 1) übertragene Untergraphen für 24 Stunden weiter bedienen und 2) sofort mit der Indizierung des Untergraphen auf L2 beginnen. Da diese Indexer den Untergraphen bereits indiziert haben, sollte es nicht nötig sein, auf die Synchronisierung des Untergraphen zu warten, und es wird möglich sein, den L2-Untergraphen fast sofort abzufragen. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Anfragen an den L2-Subgraphen müssen an eine andere URL gerichtet werden (an `arbitrum-gateway.thegraph.com`), aber die L1-URL wird noch mindestens 48 Stunden lang funktionieren. Danach wird das L1-Gateway (für eine gewisse Zeit) Anfragen an das L2-Gateway weiterleiten, was jedoch zu zusätzlichen Latenzzeiten führt. Es wird daher empfohlen, alle Anfragen so bald wie möglich auf die neue URL umzustellen. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Ein Teil dieser GRT, der dem Inhaber des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. -Als Sie Ihren Subgraphen im Mainnet veröffentlicht haben, haben Sie eine angeschlossene Wallet benutzt, um den Subgraphen zu erstellen, und diese Wallet besitzt die NFT, die diesen Subgraphen repräsentiert und Ihnen erlaubt, Updates zu veröffentlichen. 
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Wenn man den Subgraphen zu Arbitrum überträgt, kann man eine andere Wallet wählen, die diesen Subgraphen NFT auf L2 besitzen wird. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Eigentümeradresse wie in L1 beizubehalten. -Wenn Sie eine Smart-Contract-Wallet, wie z.B. eine Multisig (z.B. Safe), verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Besitzer Ihres Subgraphen. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Es ist sehr wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und die Transaktionen auf Arbitrum durchführen kann. Andernfalls geht der Subgraph verloren und kann nicht wiederhergestellt werden.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. 
Otherwise, the Subgraph will be lost and cannot be recovered.** ## Vorbereitung der Übertragung: Überbrückung einiger ETH -Die Übertragung des Subgraphen beinhaltet das Senden einer Transaktion über die Brücke und das Ausführen einer weiteren Transaktion auf Arbitrum. Die erste Transaktion verwendet ETH im Mainnet und enthält einige ETH, um das Gas zu bezahlen, wenn die Nachricht auf L2 empfangen wird. Wenn dieses Gas jedoch nicht ausreicht, müssen Sie die Transaktion wiederholen und das Gas direkt auf L2 bezahlen (dies ist "Schritt 3: Bestätigen des Transfers" unten). Dieser Schritt **muss innerhalb von 7 Tagen nach Beginn der Überweisung** ausgeführt werden. Außerdem wird die zweite Transaktion ("Schritt 4: Beenden der Übertragung auf L2") direkt auf Arbitrum durchgeführt. Aus diesen Gründen benötigen Sie etwas ETH auf einer Arbitrum-Wallet. Wenn Sie ein Multisig- oder Smart-Contract-Konto verwenden, muss sich die ETH in der regulären (EOA-) Wallet befinden, die Sie zum Ausführen der Transaktionen verwenden, nicht in der Multisig-Wallet selbst. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
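The hunk above says the transfer requires ETH in a regular (EOA) wallet on Arbitrum. As an illustrative sketch only — the public RPC URL is an assumption, and the 0.01 ETH threshold merely mirrors the amount suggested in the surrounding docs — a plain `eth_getBalance` JSON-RPC call is enough to check this before starting:

```typescript
// Illustrative sketch: check an EOA's ETH balance on Arbitrum One before
// starting the transfer. The RPC URL is an assumed public endpoint;
// substitute your own provider if needed. Requires Node 18+ (global fetch).
const ARBITRUM_RPC = "https://arb1.arbitrum.io/rpc";

// eth_getBalance returns the balance as a hex-encoded wei string.
function weiHexToEth(weiHex: string): number {
  return Number(BigInt(weiHex)) / 1e18;
}

async function getArbitrumBalance(address: string): Promise<number> {
  const res = await fetch(ARBITRUM_RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",
      params: [address, "latest"],
    }),
  });
  const { result } = await res.json();
  return weiHexToEth(result);
}

// Hypothetical usage ("0xYourEOA..." is a placeholder, not a real address):
// getArbitrumBalance("0xYourEOA...").then((eth) =>
//   console.log(eth >= 0.01 ? "enough for gas" : "bridge some ETH first"));
```

Note the balance is checked on the EOA that will sign the transactions, which per the docs above is where the ETH must sit even when a multisig owns the Subgraph.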
Sie können ETH auf einigen Börsen kaufen und direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Brücke verwenden, um ETH von einer Mainnet-Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io). Da die Gasgebühren auf Arbitrum niedriger sind, sollten Sie nur eine kleine Menge benötigen. Es wird empfohlen, mit einem niedrigen Schwellenwert (z.B. 0,01 ETH) zu beginnen, damit Ihre Transaktion genehmigt wird. -## Suche nach dem Untergraphen Transfer Tool +## Finding the Subgraph Transfer Tool -Sie finden das L2 Transfer Tool, wenn Sie die Seite Ihres Subgraphen in Subgraph Studio ansehen: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Sie ist auch im Explorer verfügbar, wenn Sie mit der Wallet verbunden sind, die einen Untergraphen besitzt, und auf der Seite dieses Untergraphen im Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: -![Transferring to L2](/img/transferToL2.png) +![Übertragung auf L2](/img/transferToL2.png) Wenn Sie auf die Schaltfläche auf L2 übertragen klicken, wird das Übertragungstool geöffnet, mit dem Sie den Übertragungsvorgang starten können. @@ -64,15 +64,15 @@ Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse den Bitte beachten Sie auch, dass die Übertragung des Untergraphen ein Signal ungleich Null auf dem Untergraphen mit demselben Konto erfordert, das den Untergraphen besitzt; wenn Sie kein Signal auf dem Untergraphen haben, müssen Sie ein wenig Kuration hinzufügen (das Hinzufügen eines kleinen Betrags wie 1 GRT würde ausreichen). -Nachdem Sie das Transfer-Tool geöffnet haben, können Sie die L2-Wallet-Adresse in das Feld "Empfänger-Wallet-Adresse" eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. 
Wenn Sie auf "Transfer Subgraph" klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet und Ihr L1-Subgraph außer Kraft gesetzt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraph und Abfrage-URLs passiert" weiter oben für weitere Details darüber, was hinter den Kulissen passiert). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). Wenn Sie diesen Schritt ausführen, **vergewissern Sie sich, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst gehen der Subgraph und Ihr Signal GRT verloren.** Dies liegt daran, wie L1-L2-Nachrichten auf Arbitrum funktionieren: Nachrichten, die über die Brücke gesendet werden, sind "wiederholbare Tickets", die innerhalb von 7 Tagen ausgeführt werden müssen, und die erste Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt. -![Start the transfer to L2](/img/startTransferL2.png) +![Start der Übertragung auf L2](/img/startTransferL2.png) -## Schritt 2: Warten, bis der Untergraph L2 erreicht hat +## Step 2: Waiting for the Subgraph to get to L2 -Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihren L1-Subgraphen an L2 sendet, die Arbitrum-Brücke durchlaufen. Dies dauert etwa 20 Minuten (die Brücke wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Reorgs der Kette "sicher" ist). 
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen. @@ -92,74 +92,74 @@ Zu diesem Zeitpunkt wurden Ihr Subgraph und GRT auf Arbitrum empfangen, aber der ![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Dadurch wird der Untergraph veröffentlicht, so dass Indexer, die auf Arbitrum arbeiten, damit beginnen können, ihn zu bedienen. Es wird auch ein Kurationssignal unter Verwendung der GRT, die von L1 übertragen wurden, eingeleitet. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Schritt 5: Aktualisierung der Abfrage-URL -Ihr Subgraph wurde erfolgreich zu Arbitrum übertragen! Um den Subgraphen abzufragen, wird die neue URL lauten: +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Beachten Sie, dass die ID des Subgraphen auf Arbitrum eine andere sein wird als die, die Sie im Mainnet hatten, aber Sie können sie immer im Explorer oder Studio finden. Wie oben erwähnt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraphen und Abfrage-URLs passiert"), wird die alte L1-URL noch eine kurze Zeit lang unterstützt, aber Sie sollten Ihre Abfragen auf die neue Adresse umstellen, sobald der Subgraph auf L2 synchronisiert worden ist.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Wie Sie Ihre Kuration auf Arbitrum übertragen (L2) -## Verstehen, was mit der Kuration bei der Übertragung von Untergraphen auf L2 geschieht +## Understanding what happens to curation on Subgraph transfers to L2 -Wenn der Eigentümer eines Untergraphen einen Untergraphen an Arbitrum überträgt, werden alle Signale des Untergraphen gleichzeitig in GRT konvertiert. Dies gilt für "automatisch migrierte" Signale, d.h. Signale, die nicht spezifisch für eine Subgraphenversion oder einen Einsatz sind, sondern der neuesten Version eines Subgraphen folgen. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Diese Umwandlung von Signal in GRT entspricht dem, was passieren würde, wenn der Eigentümer des Subgraphen den Subgraphen in L1 verwerfen würde. Wenn der Subgraph veraltet oder übertragen wird, werden alle Kurationssignale gleichzeitig "verbrannt" (unter Verwendung der Kurationsbindungskurve) und das resultierende GRT wird vom GNS-Smart-Contract gehalten (das ist der Vertrag, der Subgraph-Upgrades und automatisch migrierte Signale handhabt). Jeder Kurator auf diesem Subgraphen hat daher einen Anspruch auf dieses GRT proportional zu der Menge an Anteilen, die er für den Subgraphen hatte. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1.
When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Ein Teil dieser GRT, der dem Inhaber des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet. -If you're using a "regular" wallet like Metamask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same Curator address as in L1. +Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Kurator-Adresse wie in L1 beizubehalten. -If you're using a smart contract wallet, like a multisig (e.g.
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 receiving wallet address. +Wenn Sie eine Smart-Contract-Wallet wie eine Multisig (z.B. einen Safe) verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Empfangs-Wallet-Adresse. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum, as otherwise the curation will be lost and cannot be recovered.** +**Es ist äußerst wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und mit der Sie Transaktionen auf Arbitrum durchführen können, da sonst die Kuration verloren geht und nicht wiederhergestellt werden kann.** -## Sending curation to L2: Step 1 +## Senden der Kuration an L2: Schritt 1 -Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. 
+Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse die Kuration auf L2 besitzen wird (siehe „Auswahl Ihrer L2 Wallet“ oben), und es wird empfohlen, einige ETH für Gas bereits auf Arbitrum überbrückt zu haben, falls Sie die Ausführung der Nachricht auf L2 wiederholen müssen. Sie können ETH auf einigen Börsen kaufen und sie direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Bridge benutzen, um ETH von einer Mainnet Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io) - da die Gasgebühren auf Arbitrum so niedrig sind, sollten Sie nur eine kleine Menge benötigen, z.B. 0,01 ETH wird wahrscheinlich mehr als genug sein. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +Wenn ein Subgraph, den Sie kuratieren, auf L2 übertragen wurde, wird im Explorer eine Meldung angezeigt, dass Sie einen übertragenen Subgraph kuratieren. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +Auf der Subgraph-Seite können Sie wählen, ob Sie die Kuration zurückziehen oder übertragen wollen. Ein Klick auf „Signal nach Arbitrum übertragen“ öffnet das Übertragungstool. ![Transfer signal](/img/transferSignalL2TransferTools.png) -After opening the Transfer Tool, you may be prompted to add some ETH to your wallet if you don't have any. Then you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Signal will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer. +Nachdem Sie das Transfer-Tool geöffnet haben, werden Sie möglicherweise aufgefordert, Ihrer Wallet ETH hinzuzufügen, falls Sie keine haben.
Dann können Sie die Adresse der L2-Wallet in das Feld „Receiving wallet address“ (Adresse der empfangenden Wallet) eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. Wenn Sie auf „Transfer Signal“ klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet. -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +Wenn Sie diesen Schritt ausführen, **stellen Sie sicher, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst geht Ihr Signal GRT verloren.** Das liegt daran, wie der L1-L2-Nachrichtenaustausch auf Arbitrum funktioniert: Nachrichten, die über die Bridge gesendet werden, sind „wiederholbare Tickets“, die innerhalb von 7 Tagen ausgeführt werden müssen, und die anfängliche Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt. -## Sending curation to L2: step 2 +## Senden der Kuration an L2: Schritt 2 -Starting the transfer: +Starten Sie den Transfer: ![Send signal to L2](/img/sendingCurationToL2Step2First.png) -After you start the transfer, the message that sends your L1 curation to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihre L1-Kuration an L2 sendet, die Arbitrum-Bridge durchlaufen.
Dies dauert etwa 20 Minuten (die Bridge wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Chain Reorgs „sicher“ ist). Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen. ![Sending curation signal to L2](/img/sendingCurationToL2Step2Second.png) -## Sending curation to L2: step 3 +## Senden der Kuration an L2: Schritt 3 -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the curation on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your curation to L2 will be pending and require a retry within 7 days. +In den meisten Fällen wird dieser Schritt automatisch ausgeführt, da das in Schritt 1 enthaltene L2-Gas ausreichen sollte, um die Transaktion auszuführen, die die Kuration auf den Arbitrum-Verträgen erhält. In einigen Fällen ist es jedoch möglich, dass ein Anstieg der Gaspreise auf Arbitrum dazu führt, dass diese automatische Ausführung fehlschlägt. In diesem Fall wird das „Ticket“, das Ihre Kuration an L2 sendet, ausstehend sein und einen erneuten Versuch innerhalb von 7 Tagen erfordern. Wenn dies der Fall ist, müssen Sie sich mit einer L2-Wallet verbinden, die etwas ETH auf Arbitrum hat, Ihr Wallet-Netzwerk auf Arbitrum umstellen und auf "Confirm Transfer" klicken, um die Transaktion zu wiederholen. ![Send signal to L2](/img/L2TransferToolsFinalCurationImage.png) -## Withdrawing your curation on L1 +## Zurückziehen Ihrer Kuration auf L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
+Wenn Sie es vorziehen, Ihre GRT nicht an L2 zu senden, oder wenn Sie die GRT lieber manuell überbrücken möchten, können Sie Ihre kuratierten GRT auf L1 abheben. Wählen Sie auf dem Banner auf der Subgraph-Seite „Signal zurückziehen“ und bestätigen Sie die Transaktion; die GRT werden an Ihre Kurator-Adresse gesendet. diff --git a/website/src/pages/de/archived/sunrise.mdx b/website/src/pages/de/archived/sunrise.mdx index 398fe1ca72f7..5b521b176ffc 100644 --- a/website/src/pages/de/archived/sunrise.mdx +++ b/website/src/pages/de/archived/sunrise.mdx @@ -1,13 +1,13 @@ --- title: Post-Sunrise + Upgrade auf The Graph Network FAQ -sidebarTitle: Post-Sunrise Upgrade FAQ +sidebarTitle: FAQ zum Post-Sunrise-Upgrade --- > Hinweis: Die Sunrise der dezentralisierten Daten endete am 12. Juni 2024. ## Was war die Sunrise der dezentralisierten Daten? -Die Sunrise of Decentralized Data war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln. +Die Sunrise der dezentralisierten Daten war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln. Dieser Plan stützt sich auf frühere Entwicklungen des Graph-Ökosystems, einschließlich eines aktualisierten Indexers, der Abfragen auf neu veröffentlichte Subgraphen ermöglicht.
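The transfer-tool hunks earlier in this diff move query traffic to the `arbitrum-gateway.thegraph.com` URL pattern. As a minimal client-side sketch (assuming Node 18+ `fetch`; the API key and subgraph ID are placeholders, and the URL pattern is taken verbatim from the docs above), switching to the L2 endpoint only means rebuilding the URL:

```typescript
// Sketch, not an official client: build the L2 gateway URL documented above
// and POST a GraphQL query to it. apiKey and subgraphId are placeholders.
function l2QueryUrl(apiKey: string, subgraphId: string): string {
  return `https://arbitrum-gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

async function querySubgraph(apiKey: string, subgraphId: string, query: string) {
  const res = await fetch(l2QueryUrl(apiKey, subgraphId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  // Standard GraphQL response shape: { data: ... } or { errors: [...] }.
  return res.json();
}

// Hypothetical usage (IDs are made up):
// querySubgraph("my-api-key", "QmNewL2Id", "{ _meta { block { number } } }")
//   .then(console.log);
```

Per the docs above, the old L1 URL keeps working for roughly 48 hours, so this switch can be rolled out as soon as the L2 Subgraph is synced rather than atomically.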
diff --git a/website/src/pages/de/contracts.json b/website/src/pages/de/contracts.json index b33760446ae8..6b94c57a82a5 100644 --- a/website/src/pages/de/contracts.json +++ b/website/src/pages/de/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Vertrag", "address": "Adress" } diff --git a/website/src/pages/de/global.json b/website/src/pages/de/global.json index 424bff2965bc..99f5545ec43c 100644 --- a/website/src/pages/de/global.json +++ b/website/src/pages/de/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Hauptmenü", - "show": "Show navigation", - "hide": "Hide navigation", - "subgraphs": "Subgraphs", + "show": "Navigation anzeigen", + "hide": "Navigation ausblenden", + "subgraphs": "Subgraphen", "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", + "sps": "Substreams-getriebene Subgraphen", + "tokenApi": "Token API", + "indexing": "Indizierung", "resources": "Ressourcen", - "archived": "Archived" + "archived": "Archiviert" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Zuletzt aktualisiert", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Lesedauer", + "minutes": "Minuten" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Vorherige Seite", + "next": "Nächste Seite", + "edit": "Auf GitHub bearbeiten", + "onThisPage": "Auf dieser Seite", + "tableOfContents": "Inhaltsübersicht", + "linkToThisSection": "Link zu diesem Abschnitt" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Abfrage-Parameter", + "headerParameters": "Header Parameters", + 
"cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Beschreibung", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Beschreibung", + "liveResponse": "Live Response", + "example": "Beispiel" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ups! 
Diese Seite ist im Space verloren gegangen...", + "subtitle": "Überprüfen Sie, ob Sie die richtige Adresse verwenden, oder besuchen Sie unsere Website, indem Sie auf den unten stehenden Link klicken.", + "back": "Zurück zur Startseite" } } diff --git a/website/src/pages/de/index.json b/website/src/pages/de/index.json index fd28f4bd87af..b56ea56c5897 100644 --- a/website/src/pages/de/index.json +++ b/website/src/pages/de/index.json @@ -2,98 +2,174 @@ "title": "Home", "hero": { "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", + "description": "Starten Sie Ihr Web3-Projekt mit den Tools zum Extrahieren, Transformieren und Laden von Blockchain-Daten.", + "cta1": "Funktionsweise von The Graph", "cta2": "Erstellen Sie Ihren ersten Subgraphen" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Wählen Sie eine Lösung, die Ihren Anforderungen entspricht, und interagieren Sie auf Ihre Weise mit Blockchain-Daten.", "subgraphs": { "title": "Subgraphs", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extrahieren, Verarbeiten und Abfragen von Blockchain-Daten mit offenen APIs.", + "cta": "Entwickeln Sie einen Subgraphen" }, "substreams": { "title": "Substreams", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Abrufen und Konsumieren von Blockchain-Daten mit paralleler Ausführung.", + "cta": "Entwickeln mit Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Substreams-getriebene 
Subgraphen", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Einrichten eines Substreams-powered Subgraphen" }, "graphNode": { - "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "title": "Graph-Knoten", + "description": "Indexieren Sie Blockchain-Daten und stellen Sie sie über GraphQL-Abfragen bereit.", + "cta": "Lokalen Graph-Knoten einrichten" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extrahieren Sie Blockchain-Daten in flache Dateien, um die Synchronisierungszeiten und Streaming-Funktionen zu verbessern.", + "cta": "Erste Schritte mit Firehose" } }, "supportedNetworks": { - "title": "Supported Networks", + "title": "Unterstützte Netzwerke", + "details": "Network Details", + "services": "Services", + "type": "Type", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Dokumente", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph unterstützt {0}. 
Um ein neues Netzwerk hinzuzufügen, {1}", + "networks": "Netzwerke", + "completeThisForm": "füllen Sie dieses Formular aus" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Abrechnung", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } } }, "guides": { "title": "Guides", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Daten im Graph Explorer finden", + "description": "Nutzen Sie Hunderte von öffentlichen Subgraphen für bestehende Blockchain-Daten." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Veröffentlichen eines Subgraphen", + "description": "Fügen Sie Ihren Subgraphen dem dezentralen Netzwerk hinzu." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Substreams veröffentlichen", + "description": "Starten Sie Ihr Substreams-Paket in der Substreams-Registry." }, "queryingBestPractices": { - "title": "Querying Best Practices", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Best Practices für Abfragen", + "description": "Optimieren Sie Ihre Subgraphenabfragen für schnellere und bessere Ergebnisse." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Optimierte Zeitreihen & Aggregationen", + "description": "Optimieren Sie Ihren Subgraphen für mehr Effizienz." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API-Schlüssel-Management", + "description": "Einfaches Erstellen, Verwalten und Sichern von API-Schlüsseln für Ihre Subgraphen." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Übertragung auf The Graph", + "description": "Aktualisieren Sie Ihren Subgraph nahtlos von jeder Plattform aus."
} }, "videos": { "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "watchOnYouTube": "Auf YouTube ansehen", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "Was ist Delegieren?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Indizierung von Solana mit einem Substreams-powered Subgraph", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Lesedauer", + "duration": "Laufzeit", "minutes": "min" } } diff --git a/website/src/pages/de/indexing/_meta-titles.json b/website/src/pages/de/indexing/_meta-titles.json index 42f4de188fd4..ccfae2db5e84 100644 --- a/website/src/pages/de/indexing/_meta-titles.json +++ b/website/src/pages/de/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Indexierer-Tools" } diff --git a/website/src/pages/de/indexing/new-chain-integration.mdx b/website/src/pages/de/indexing/new-chain-integration.mdx index 54d9b95d5a24..0403c54ce447 100644 --- a/website/src/pages/de/indexing/new-chain-integration.mdx +++ b/website/src/pages/de/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Integration neuer Ketten --- -Ketten können die Unterstützung von Subgraphen in ihr Ökosystem einbringen, indem sie eine neue `graph-node` Integration starten. Subgraphen sind ein leistungsfähiges Indizierungswerkzeug, das Entwicklern eine Welt voller Möglichkeiten eröffnet. Graph Node indiziert bereits Daten von den hier aufgeführten Ketten. Wenn Sie an einer neuen Integration interessiert sind, gibt es 2 Integrationsstrategien: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: Alle Firehose-Integrationslösungen umfassen Substreams, eine groß angelegte Streaming-Engine auf der Grundlage von Firehose mit nativer `graph-node`-Unterstützung, die parallelisierte Transformationen ermöglicht.
@@ -25,7 +25,7 @@ Damit Graph Node Daten aus einer EVM-Kette aufnehmen kann, muss der RPC-Knoten d - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in einem JSON-RPC-Batch-Antrag -- `trace_filter`  *(begrenztes Tracing und optional erforderlich für Graph Node)* +- `trace_filter`  _(begrenztes Tracing und optional erforderlich für Graph Node)_ ### 2. Firehose Integration @@ -51,7 +51,7 @@ Während JSON-RPC und Firehose beide für Subgraphen geeignet sind, ist für Ent - All diese `getLogs`-Aufrufe und Roundtrips werden durch einen einzigen Stream ersetzt, der im Herzen von `graph-node` ankommt; ein einziges Blockmodell für alle Subgraphen, die es verarbeitet. -> HINWEIS: Bei einer Firehose-basierten Integration für EVM-Ketten müssen Indexer weiterhin den Archiv-RPC-Knoten der Kette ausführen, um Subgraphen ordnungsgemäß zu indizieren. Dies liegt daran, dass der Firehose nicht in der Lage ist, den Smart-Contract-Status bereitzustellen, der normalerweise über die RPC-Methode „eth_call“ zugänglich ist. (Es ist erwähnenswert, dass `eth_calls` keine gute Praxis für Entwickler sind) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph-Node Konfiguration diff --git a/website/src/pages/de/indexing/overview.mdx b/website/src/pages/de/indexing/overview.mdx index 05530cbff93a..f6128f144663 100644 --- a/website/src/pages/de/indexing/overview.mdx +++ b/website/src/pages/de/indexing/overview.mdx @@ -5,43 +5,43 @@ sidebarTitle: Überblick Indexer sind Knotenbetreiber im Graph Network, die Graph Tokens (GRT) einsetzen, um Indizierungs- und Abfrageverarbeitungsdienste anzubieten. Indexer verdienen Abfragegebühren und Indexing Rewards für ihre Dienste. 
Sie verdienen auch Abfragegebühren, die gemäß einer exponentiellen Rabattfunktion zurückerstattet werden. -GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. +Die im Protokoll eingesetzte GRT unterliegt einer Nachfrist und kann reduziert werden, wenn Indexierer böswillig sind und Anwendungen falsche Daten präsentieren oder wenn sie falsch indizieren. Indexer erhalten auch Belohnungen für den Einsatz, den Delegatoren für ihren Beitrag zum Netzwerk geben. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Die Indexierer wählen die zu indexierenden Subgraphen auf der Grundlage des Kurationssignals des Subgraphen aus, wobei die Kuratoren GRT einsetzen, um anzugeben, welche Subgraphen von hoher Qualität sind und priorisiert werden sollten. Verbraucher (z. B. Anwendungen) können auch Parameter dafür festlegen, welche Indexierer Abfragen für ihre Teilgraphen verarbeiten, und Präferenzen für die Preisgestaltung für Abfragen festlegen. ## FAQ -### What is the minimum stake required to be an Indexer on the network? +### Wie hoch ist der Mindesteinsatz, der erforderlich ist, um ein Indexierer im Netzwerk zu sein? -The minimum stake for an Indexer is currently set to 100K GRT. +Der Mindesteinsatz für einen Indexer ist derzeit auf 100.000 GRT festgelegt. -### What are the revenue streams for an Indexer? +### Welche Einnahmequellen gibt es für einen Indexierer? -**Query fee rebates** - Payments for serving queries on the network. 
These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Query fee rebates** - Zahlungen für die Bedienung von Abfragen im Netz. Diese Zahlungen werden über Statuskanäle zwischen einem Indexer und einem Gateway vermittelt. Jede Abfrageanfrage eines Gateways enthält eine Zahlung und die entsprechende Antwort einen Nachweis für die Gültigkeit des Abfrageergebnisses. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexierungsbelohnungen** - Die Indexierungsbelohnungen werden über eine jährliche protokollweite Inflation von 3% an Indexer verteilt, die Subgraph-Einsätze für das Netzwerk indexieren. -### How are indexing rewards distributed? +### Wie werden die Indexierungsprämien verteilt? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexierungsbelohnungen stammen aus der Protokollinflation, die auf 3 % pro Jahr festgelegt ist. Sie werden auf der Grundlage des Anteils aller Kurationssignale auf jedem Subgraphen verteilt und dann anteilig an die Indexierer auf der Grundlage ihres zugewiesenen Anteils an diesem Subgraphen verteilt. **Eine Zuteilung muss mit einem gültigen Indizierungsnachweis (POI) abgeschlossen werden, der die in der Schlichtungscharta festgelegten Standards erfüllt, um für Belohnungen in Frage zu kommen.**
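The two-stage split described above (issuance divided across Subgraphs by curation signal, then across Indexers by allocated stake) can be sketched as follows. This is an illustrative sketch with made-up numbers and names, not protocol code; the real computation happens onchain.

```python
# Illustrative sketch of the two-stage indexing-reward distribution.
# Issuance is split across subgraphs in proportion to curation signal,
# then each subgraph's share is split across indexers by allocated stake.

def distribute_rewards(issuance, signal_per_subgraph, stake_per_indexer):
    total_signal = sum(signal_per_subgraph.values())
    rewards = {}
    for subgraph, signal in signal_per_subgraph.items():
        subgraph_share = issuance * signal / total_signal
        stakes = stake_per_indexer[subgraph]
        total_stake = sum(stakes.values())
        for indexer, stake in stakes.items():
            rewards[indexer] = rewards.get(indexer, 0.0) + subgraph_share * stake / total_stake
    return rewards

rewards = distribute_rewards(
    issuance=1_000_000,  # hypothetical GRT issued this period
    signal_per_subgraph={"sg-a": 75_000, "sg-b": 25_000},
    stake_per_indexer={
        "sg-a": {"indexer-1": 300_000, "indexer-2": 100_000},
        "sg-b": {"indexer-1": 50_000, "indexer-3": 150_000},
    },
)
```

Note that an indexer allocated on several Subgraphs (here `indexer-1`) accumulates a share from each; the bolded POI requirement above is a separate eligibility condition not modeled in this sketch.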
-Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +Die Community hat zahlreiche Tools zur Berechnung von Rewards erstellt, die in der [Community-Guides-Sammlung](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c) zusammengefasst sind. Eine aktuelle Liste von Tools finden Sie auch in den Channels #Delegators und #Indexers auf dem [Discord-Server](https://discord.gg/graphprotocol). Hier verlinken wir einen [empfohlenen Allokationsoptimierer](https://github.com/graphprotocol/allocation-optimizer), der in den Indexer-Software-Stack integriert ist. -### What is a proof of indexing (POI)? +### Was ist ein Indizierungsnachweis (proof of indexing - POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs werden im Netzwerk verwendet, um zu überprüfen, ob ein Indexierer die von ihm zugewiesenen Subgraphen indexiert. Ein POI für den ersten Block der aktuellen Epoche muss beim Schließen einer Zuweisung eingereicht werden, damit diese Zuweisung für Indexierungsbelohnungen in Frage kommt. Ein POI für einen Block ist eine Zusammenfassung aller Entity-Store-Transaktionen für einen bestimmten Subgraph-Einsatz bis einschließlich dieses Blocks.
-### When are indexing rewards distributed? +### Wann werden Indizierungsprämien verteilt? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Zuteilungen sammeln kontinuierlich Belohnungen an, solange sie aktiv sind und innerhalb von 28 Epochen zugeteilt wurden. Belohnungen werden von den Indexierern gesammelt und verteilt, sobald ihre Zuteilungen geschlossen sind. Das geschieht entweder manuell, wenn der Indexierer das Schließen erzwingen möchte, oder nach 28 Epochen kann ein Delegator die Zuordnung für den Indexer schließen, aber dies führt zu keinen Belohnungen. 28 Epochen ist die maximale Zuweisungslebensdauer (im Moment dauert eine Epoche etwa 24 Stunden). -### Can pending indexing rewards be monitored? +### Können ausstehende Indizierungsprämien überwacht werden? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Der RewardsManager-Vertrag verfügt über eine schreibgeschützte Funktion [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316), mit der die ausstehenden Rewards für eine bestimmte Zuweisung überprüft werden können.
-Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Viele der von der Community erstellten Dashboards enthalten ausstehende Prämienwerte und können einfach manuell überprüft werden, indem Sie diesen Schritten folgen: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Abfrage des [Mainnet-Subgraphen](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one), um die IDs für alle aktiven Zuweisungen zu erhalten: ```graphql query indexerAllocations {
When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Sowohl die Abfragen als auch die Zuordnungen des Indexierers können während des Streitzeitraums auf The Graph angefochten werden. Die Streitdauer variiert je nach Streitfall. Abfragen/Bescheinigungen haben ein 7-Epochen-Streitfenster, während Zuweisungen 56 Epochen haben. Nach Ablauf dieser Fristen können weder Zuweisungen noch Rückfragen angefochten werden. Wenn eine Streitigkeit eröffnet wird, wird von den Fischern eine Kaution von mindestens 10.000 GRT verlangt, die gesperrt wird, bis die Streitigkeit abgeschlossen ist und eine Lösung gefunden wurde. Fischer sind alle Netzwerkteilnehmer, die Streitigkeiten eröffnen. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Bei Streitigkeiten gibt es **drei** mögliche Ergebnisse, so auch bei der Kaution der Fischer. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Wird die Anfechtung zurückgewiesen, werden die von den Fischern hinterlegten GRT verbrannt, und der angefochtene Indexierer wird nicht gekürzt. +- Wird der Streitfall durch ein Unentschieden entschieden, wird die Kaution des Fischers zurückerstattet und der strittige Indexierer wird nicht gekürzt. +- Wird dem Einspruch stattgegeben, werden die von den Fischern eingezahlten GRT zurückerstattet, der strittige Indexer wird gekürzt und die Fischer erhalten 50 % der gekürzten GRT. 
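The three dispute outcomes listed above, and what each means for the Fisherman's deposit, can be summarized in a small sketch. The amounts are hypothetical; only the 10,000 GRT minimum deposit and the 50% share of slashed GRT come from the text.

```python
# Sketch of the three dispute outcomes described above (illustrative only).
MIN_DEPOSIT = 10_000  # minimum Fisherman deposit in GRT

def settle_dispute(outcome, deposit, slashed_amount):
    """Return (GRT returned to the Fisherman, Fisherman reward)."""
    assert deposit >= MIN_DEPOSIT, "deposit below the 10,000 GRT minimum"
    if outcome == "rejected":   # deposit burned, Indexer not slashed
        return 0, 0
    if outcome == "draw":       # deposit returned, Indexer not slashed
        return deposit, 0
    if outcome == "accepted":   # deposit returned plus 50% of the slashed GRT
        return deposit, slashed_amount // 2
    raise ValueError(f"unknown outcome: {outcome}")
```

For example, an accepted dispute that slashes 50,000 GRT returns the 10,000 GRT deposit and awards 25,000 GRT to the Fisherman.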
-Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Streitfälle können in der Benutzeroberfläche auf der Profilseite eines Indexierers unter der Registerkarte `Disputes` angezeigt werden. -### What are query fee rebates and when are they distributed? +### Was sind Rückerstattungen von Abfragegebühren und wann werden sie ausgeschüttet? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +Die Abfragegebühren werden vom Gateway eingezogen und gemäß der exponentiellen Rabattfunktion an die Indexierer verteilt (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Die exponentielle Rabattfunktion wird vorgeschlagen, um sicherzustellen, dass die Indexierer das beste Ergebnis erzielen, indem sie die Abfragen treu bedienen. Sie bietet den Indexierern einen Anreiz, einen hohen Einsatz (der bei Fehlern bei der Bedienung einer Anfrage gekürzt werden kann) im Verhältnis zur Höhe der Abfragegebühren, die sie einnehmen können, zu leisten. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Sobald eine Zuteilung abgeschlossen ist, können die Rabatte vom Indexierer beansprucht werden. 
Nach der Beantragung werden die Abfragegebührenrabatte auf der Grundlage der Abfragegebührenkürzung und der exponentiellen Rabattfunktion an den Indexer und seine Delegatoren verteilt. -### What is query fee cut and indexing reward cut? +### Was ist die Kürzung der Abfragegebühr und die Kürzung der Indizierungsprämie? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +Die Werte `queryFeeCut` und `indexingRewardCut` sind Delegationsparameter, die der Indexer zusammen mit cooldownBlocks setzen kann, um die Verteilung von GRT zwischen dem Indexer und seinen Delegatoren zu kontrollieren. Siehe die letzten Schritte in [Staking im Protokoll](/indexing/overview/#stake-in-the-protocol) für Anweisungen zur Einstellung der Delegationsparameter. -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - der Prozentsatz der Rückerstattungen von Abfragegebühren, der an den Indexer verteilt wird. Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexer 95 % der Abfragegebühren, die beim Abschluss einer Zuteilung anfallen, während die restlichen 5 % an die Delegatoren gehen. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** - der Prozentsatz der Indizierungs-Rewards, der an den Indexer verteilt wird.
Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexierer 95 % der Rewards für die Indizierung, wenn eine Zuweisung abgeschlossen wird, und die Delegatoren teilen sich die restlichen 5 %. -### How do Indexers know which subgraphs to index? +### Woher wissen die Indexierer, welche Subgraphen indexiert werden sollen? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexierer können sich durch die Anwendung fortgeschrittener Techniken für die Indizierung von Subgraphen unterscheiden, aber um eine allgemeine Vorstellung zu vermitteln, werden wir einige Schlüsselmetriken diskutieren, die zur Bewertung von Subgraphen im Netzwerk verwendet werden: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Kurationssignal** - Der Anteil des Netzwerkkurationssignals, der auf einen bestimmten Subgraphen angewandt wird, ist ein guter Indikator für das Interesse an diesem Subgraphen, insbesondere während der Bootstrap-Phase, wenn das Abfragevolumen ansteigt. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Eingezogene Abfragegebühren** - Die historischen Daten zum Volumen der für einen bestimmten Subgraphen eingezogenen Abfragegebühren sind ein guter Indikator für die zukünftige Nachfrage. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Einsatzhöhe** - Die Beobachtung des Verhaltens anderer Indexierer oder die Betrachtung des Anteils am Gesamteinsatz, der bestimmten Subgraphen zugewiesen wird, kann es einem Indexierer ermöglichen, die Angebotsseite für Subgraphenabfragen zu überwachen, um Subgraphen zu identifizieren, in die das Netzwerk Vertrauen zeigt, oder Subgraphen, die möglicherweise einen Bedarf an mehr Angebot aufweisen. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphen ohne Indizierungsbelohnungen** - Einige Subgraphen erzeugen keine Indizierungsbelohnungen, hauptsächlich weil sie nicht unterstützte Funktionen wie IPFS verwenden oder weil sie ein anderes Netzwerk außerhalb des Hauptnetzes abfragen. Wenn ein Subgraph keine Indizierungsbelohnungen erzeugt, wird eine entsprechende Meldung angezeigt. -### What are the hardware requirements? +### Welche Hardware-Anforderungen gibt es? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - Ausreichend, um mit der Indizierung mehrerer Subgraphen zu beginnen, wird wahrscheinlich erweitert werden müssen. +- **Standard** - Standardeinstellung, wie sie in den k8s/terraform-Beispielmanifesten verwendet wird. +- **Medium** - Produktionsindexer, der 100 Subgraphen und 200-500 Anfragen pro Sekunde unterstützt. 
+- **Large** - Vorbereitet, um alle derzeit verwendeten Subgraphen zu indizieren und Anfragen für den entsprechenden Verkehr zu bedienen. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Konfiguration | Postgres
(CPUs) | Postgres
(Speicher in GB) | Postgres
(Festplatte in TB) | VMs
(CPUs) | VMs
(Speicher in GB) | +| ------------- | :------------------: | :----------------------------: | :------------------------------: | :-------------: | :-----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### Was sind einige grundlegende Sicherheitsvorkehrungen, die ein Indexierer treffen sollte? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **Operator Wallet** - Die Einrichtung einer Operator Wallet ist eine wichtige Vorsichtsmaßnahme, da sie es einem Indexierer ermöglicht, eine Trennung zwischen seinen Schlüsseln, die den Einsatz kontrollieren, und den Schlüsseln, die für den täglichen Betrieb zuständig sind, aufrechtzuerhalten. Siehe [Stake im Protocol](/indexing/overview/#stake-in-the-protocol) für Anweisungen. -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. +- **Firewall** - Nur der Indexierer-Dienst muss öffentlich zugänglich gemacht werden, und es sollte besonders darauf geachtet werden, dass die Admin-Ports und der Datenbankzugriff gesperrt werden: der Graph Node JSON-RPC-Endpunkt (Standard-Port: 8030), der Indexer-Management-API-Endpunkt (Standard-Port: 18000) und der Postgres-Datenbank-Endpunkt (Standard-Port: 5432) sollten nicht öffentlich zugänglich sein. 
-## Infrastructure +## Infrastruktur -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +Im Zentrum der Infrastruktur eines Indexierers steht der Graph Node, der die indizierten Netzwerke überwacht, Daten gemäß einer Subgraph-Definition extrahiert und lädt und sie als [GraphQL API](/about/#how-the-graph-works) bereitstellt. Der Graph Node muss mit einem Endpunkt verbunden sein, der Daten aus jedem indizierten Netzwerk ausgibt; ein IPFS-Knoten für die Datenbeschaffung; eine PostgreSQL-Datenbank für die Speicherung; und Indexer-Komponenten, die seine Interaktionen mit dem Netzwerk erleichtern. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL-Datenbank** - Der Hauptspeicher für den Graphenknoten, in dem die Subgraphen-Daten gespeichert werden. Der Indexer-Dienst und der Agent verwenden die Datenbank auch zum Speichern von Statuskanaldaten, Kostenmodellen, Indizierungsregeln und Zuordnungsaktionen. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Datenendpunkt** - Bei EVM-kompatiblen Netzwerken muss der Graph Node mit einem Endpunkt verbunden sein, der eine EVM-kompatible JSON-RPC-API bereitstellt. Dabei kann es sich um einen einzelnen Client handeln oder um ein komplexeres Setup, das die Last auf mehrere Clients verteilt. Es ist wichtig, sich darüber im Klaren zu sein, dass bestimmte Subgraphen besondere Client-Fähigkeiten erfordern, wie z. B. den Archivmodus und/oder die Parity-Tracing-API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS-Knoten (Version kleiner als 5)** - Die Metadaten für die Subgraph-Bereitstellung werden im IPFS-Netzwerk gespeichert. Der Graph Node greift in erster Linie auf den IPFS-Knoten während der Bereitstellung des Subgraphen zu, um das Subgraphen-Manifest und alle verknüpften Dateien zu holen. Netzwerk-Indizierer müssen keinen eigenen IPFS-Knoten hosten, ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Indexierer-Dienst** - Erledigt alle erforderlichen externen Kommunikationen mit dem Netz. Teilt Kostenmodelle und Indizierungsstatus, leitet Abfrageanfragen von Gateways an einen Graph Node weiter und verwaltet die Abfragezahlungen über Statuskanäle mit dem Gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexierer-Agent** - Erleichtert die Interaktionen des Indexierers in der Kette, einschließlich der Registrierung im Netzwerk, der Verwaltung von Subgraph-Einsätzen in seine(n) Graph-Knoten und der Verwaltung von Zuweisungen. -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Prometheus Metrics Server** - Die Komponenten Graph Node und Indexierer protokollieren ihre Metriken auf dem Metrics Server. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Hinweis: Um eine flexible Skalierung zu unterstützen, wird empfohlen, Abfrage- und Indizierungsbelange auf verschiedene Knotengruppen zu verteilen: Abfrageknoten und Indexknoten. -### Ports overview +### Übersicht über Ports -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den Graph Node JSON-RPC und die Indexierer-Verwaltungsendpunkte, die im Folgenden beschrieben werden. -#### Graph Node +#### Graph-Knoten -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ------------------------------------------------ | ---------------------------------------------- | ------------------ | ----------------- | +| 8000 | GraphQL HTTP Server
(für Subgraph-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(für Subgraphen-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | Subgraph-Indizierungsstatus-API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |

-#### Indexer Service
+#### Indexierer-Service

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | --------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP Server
(für bezahlte Subgraph-Abfragen) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus-Metriken | /metrics | \--metrics-port | - |

-#### Indexer Agent
+#### Indexierer-Agent

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | ------------------------------- | ------ | -------------------------- | --------------------------------------- |
+| 8000 | Indexierer-Verwaltungs-API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |

-### Setup server infrastructure using Terraform on Google Cloud
+### Einrichten einer Server-Infrastruktur mit Terraform auf Google Cloud

-> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba.
+> Hinweis: Indexierer können alternativ AWS, Microsoft Azure oder Alibaba nutzen.

-#### Install prerequisites
+#### Installieren Sie die Voraussetzungen

-- Google Cloud SDK
-- Kubectl command line tool
+- Google Cloud-SDK
+- Kubectl-Befehlszeilentool
- Terraform

-#### Create a Google Cloud Project
+#### Erstellen Sie ein Google Cloud-Projekt

-- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer).
+- Klonen oder navigieren Sie zum [Indexierer-Repository](https://github.com/graphprotocol/indexer).

-- Navigate to the `./terraform` directory, this is where all commands should be executed.
+- Navigieren Sie zum Verzeichnis `./terraform`, in dem alle Befehle ausgeführt werden sollen.

```sh
cd terraform
```

-- Authenticate with Google Cloud and create a new project.
+- Authentifizieren Sie sich bei Google Cloud und erstellen Sie ein neues Projekt.
```sh
gcloud auth login

@@ -196,9 +196,9 @@ project=
gcloud projects create --enable-cloud-apis $project
```

-- Use the Google Cloud Console's billing page to enable billing for the new project.
+- Verwenden Sie die Abrechnungsseite der Google Cloud Console, um die Abrechnung für das neue Projekt zu aktivieren.

-- Create a Google Cloud configuration.
+- Erstellen Sie eine Google Cloud-Konfiguration.

```sh
proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project")
@@ -208,7 +208,7 @@ gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
```

-- Enable required Google Cloud APIs.
+- Aktivieren Sie die erforderlichen Google Cloud-APIs.

```sh
gcloud services enable compute.googleapis.com
@@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com
gcloud services enable sqladmin.googleapis.com
```

-- Create a service account.
+- Erstellen Sie ein Dienstkonto.

```sh
svc_name=
@@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \
  --role roles/editor
```

-- Enable peering between database and Kubernetes cluster that will be created in the next step.
+- Aktivieren Sie das Peering zwischen der Datenbank und dem Kubernetes-Cluster, der im nächsten Schritt erstellt wird.

```sh
gcloud compute addresses create google-managed-services-default \
@@ -243,41 +243,41 @@ gcloud compute addresses create google-managed-services-default \
  --purpose=VPC_PEERING \
  --network default \
  --global \
  --description 'IP Range for peer networks.'

gcloud services vpc-peerings connect \
  --network=default \
  --ranges=google-managed-services-default
```

-- Create minimal terraform configuration file (update as needed).
+- Erstellen Sie eine minimale Terraform-Konfigurationsdatei (aktualisieren Sie sie nach Bedarf).
```sh indexer= cat > terraform.tfvars < **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`).
+> **HINWEIS**: Alle Laufzeit-Konfigurationsvariablen können entweder beim Start als Parameter an den Befehl übergeben oder mithilfe von Umgebungsvariablen im Format `COMPONENT_NAME_VARIABLE_NAME` (z. B. `INDEXER_AGENT_ETHEREUM`) angewandt werden.

-#### Indexer agent
+#### Indexierer-Agent

```sh
graph-indexer-agent start \
@@ -488,7 +488,7 @@ graph-indexer-agent start \
  | pino-pretty
```

-#### Indexer service
+#### Indexierer-Service

```sh
SERVER_HOST=localhost \
@@ -514,58 +514,58 @@ graph-indexer-service start \
  | pino-pretty
```

-#### Indexer CLI
+#### Indexierer-CLI

-The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`.
+Die Indexierer-CLI ist ein Plugin für [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), das im Terminal unter `graph indexer` erreichbar ist.

```sh
graph indexer connect http://localhost:18000
graph indexer status
```

-#### Indexer management using Indexer CLI
+#### Indexierer-Verwaltung mit Indexierer-CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API.
Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+Das empfohlene Werkzeug für die Interaktion mit der **Indexierer-Management-API** ist die **Indexierer-CLI**, eine Erweiterung der **Graph CLI**. Der Indexierer-Agent benötigt Input von einem Indexierer, um im Namen des Indexierers autonom mit dem Netzwerk zu interagieren. Die Mechanismen zur Definition des Verhaltens des Indexierer-Agenten sind der **Zuweisungsmanagement**-Modus und **Indexierungsregeln**. Im automatischen Modus kann ein Indexierer **Indizierungsregeln** verwenden, um seine spezifische Strategie für die Auswahl der Subgraphen anzuwenden, die er indizieren und für die er Abfragen liefern soll. Die Regeln werden über eine GraphQL-API verwaltet, die vom Agenten bereitgestellt wird und als Indexierer-Management-API bekannt ist. Im manuellen Modus kann ein Indexierer Zuweisungsaktionen über die **Aktionswarteschlange** erstellen und sie explizit genehmigen, bevor sie ausgeführt werden. Im Überwachungsmodus werden **Indizierungsregeln** verwendet, um die **Aktionswarteschlange** zu füllen; auch diese Aktionen erfordern eine ausdrückliche Genehmigung für die Ausführung.

-#### Usage
+#### Verwendung

-The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here.
+Die **Indexierer-CLI** verbindet sich mit dem Indexierer-Agenten, in der Regel über Port-Forwarding, so dass die CLI nicht auf demselben Server oder Cluster laufen muss. Um Ihnen den Einstieg zu erleichtern und etwas Kontext zu liefern, wird die CLI hier kurz beschrieben.

-- `graph indexer connect ` - Connect to the Indexer management API.
Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`)
+- `graph indexer connect ` - Verbindet mit der Indexierer-Verwaltungs-API. Typischerweise wird die Verbindung zum Server über Port-Forwarding geöffnet, so dass die CLI einfach aus der Ferne bedient werden kann. (Beispiel: `kubectl port-forward pod/ 8000:8000`)

-- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.
+- `graph indexer rules get [options] [ ...]` - Holt eine oder mehrere Indizierungsregeln unter Verwendung von `all` als ``, um alle Regeln zu erhalten, oder `global`, um die globalen Standardwerte zu erhalten. Ein zusätzliches Argument `--merged` kann verwendet werden, um anzugeben, dass einsatzspezifische Regeln mit der globalen Regel zusammengeführt werden. Auf diese Weise werden sie im Indexierer-Agenten angewendet.

-- `graph indexer rules set [options] ...` - Set one or more indexing rules.
+- `graph indexer rules set [options] ...` - Eine oder mehrere Indizierungsregeln setzen.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Startet die Indizierung eines Subgraph-Einsatzes, wenn dieser verfügbar ist, und setzt seine `decisionBasis` auf `always`, so dass der Indexierer-Agent immer die Indizierung dieses Einsatzes wählt. Wenn die globale Regel auf `always` gesetzt ist, werden alle verfügbaren Subgraphen im Netzwerk indiziert.
-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Stoppt die Indizierung eines Einsatzes und setzt seine `decisionBasis` auf never, so dass er diesen Einsatz bei der Entscheidung über die zu indizierenden Einsätze überspringt. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` - Setzt die `decisionBasis` für ein Deployment auf `rules`, so dass der Indexierer-Agent Indizierungsregeln verwendet, um zu entscheiden, ob dieses Deployment indiziert werden soll. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Holt eine oder mehrere Aktionen mit `all` oder lässt `action-id` leer, um alle Aktionen zu erhalten. Ein zusätzliches Argument `--status` kann verwendet werden, um alle Aktionen mit einem bestimmten Status auszugeben. 
-- `graph indexer action queue allocate ` - Queue allocation action
+- `graph indexer action queue allocate ` - Stellt eine Zuweisungsaktion in die Warteschlange

-- `graph indexer action queue reallocate ` - Queue reallocate action
+- `graph indexer action queue reallocate ` - Stellt eine Neuzuweisungsaktion in die Warteschlange

-- `graph indexer action queue unallocate ` - Queue unallocate action
+- `graph indexer action queue unallocate ` - Stellt eine Aktion zum Aufheben einer Zuweisung in die Warteschlange

-- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator
+- `graph indexer actions cancel [ ...]` - Bricht alle Aktionen in der Warteschlange ab, wenn keine id angegeben ist; andernfalls wird ein Array von ids (mit Leerzeichen als Trennzeichen) abgebrochen

-- `graph indexer actions approve [ ...]` - Approve multiple actions for execution
+- `graph indexer actions approve [ ...]` - Mehrere Aktionen zur Ausführung freigeben

-- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately
+- `graph indexer actions execute approve` - Erzwingt die sofortige Ausführung genehmigter Aktionen durch den Worker

-All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Alle Befehle, die Regeln in der Ausgabe anzeigen, können mit dem Argument `-output` zwischen den unterstützten Ausgabeformaten (`table`, `yaml` und `json`) wählen.

-#### Indexing rules
+#### Indizierungsregeln

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment.
If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indizierungsregeln können entweder als globale Standardwerte oder für bestimmte Subgraph-Einsätze unter Verwendung ihrer IDs angewendet werden. Die Felder `deployment` und `decisionBasis` sind obligatorisch, während alle anderen Felder optional sind. Wenn eine Indizierungsregel `rules` als `decisionBasis` hat, vergleicht der Indexierer-Agent die nicht leeren (non-null) Schwellenwerte dieser Regel mit den Werten, die für den entsprechenden Einsatz aus dem Netzwerk geholt wurden. Wenn der Subgraph-Einsatz Werte über (oder unter) einem der Schwellenwerte hat, wird er für die Indizierung ausgewählt.

-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+Wenn zum Beispiel die globale Regel einen `minStake` von **5** (GRT) hat, wird jeder Subgraph-Einsatz indiziert, dem mehr als 5 (GRT) an Einsatz zugewiesen sind. Zu den Schwellenwertregeln gehören `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` und `minAverageQueryFees`.

-Data model:
+Datenmodell:

```graphql
type IndexingRule {
@@ -599,7 +599,7 @@ IndexingDecisionBasis {
}
```

-Example usage of indexing rule:
+Beispiel für die Verwendung der Indizierungsregel:

```
graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
@@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
```

-#### Actions queue CLI
+#### Befehlszeilenschnittstelle (CLI) für die Aktionswarteschlange

-The indexer-cli provides an `actions` module for manually working with the action queue.
It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+Die indexer-cli bietet ein `actions`-Modul für die manuelle Arbeit mit der Aktionswarteschlange. Sie verwendet die **GraphQL-API**, die vom Indexierer-Verwaltungsserver gehostet wird, um mit der Aktionswarteschlange zu interagieren.

-The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like:
+Der Action Execution Worker holt sich nur dann Elemente aus der Warteschlange, um sie auszuführen, wenn sie den Status `ActionStatus = approved` haben. Im empfohlenen Ablauf werden Aktionen der Warteschlange mit ActionStatus = queued hinzugefügt, so dass sie anschließend genehmigt werden müssen, um in der Kette ausgeführt zu werden. Der allgemeine Ablauf sieht dann wie folgt aus:

-- Action added to the queue by the 3rd party optimizer tool or indexer-cli user
-- Indexer can use the `indexer-cli` to view all queued actions
-- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input.
-- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`.
-- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode.
-- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution.
The action queue provides a history of all actions queued and taken. +- Aktion, die vom Drittanbieter-Optimierungstool oder vom indexer-cli-Benutzer zur Warteschlange hinzugefügt wurde +- Indexierer kann die `indexer-cli` verwenden, um alle in der Warteschlange stehenden Aktionen zu sehen +- Indexierer (oder andere Software) kann Aktionen in der Warteschlange mit Hilfe des `indexer-cli` genehmigen oder abbrechen. Die Befehle approve und cancel nehmen ein Array von Aktions-Ids als Eingabe. +- Der Ausführungsworker fragt die Warteschlange regelmäßig nach genehmigten Aktionen ab. Er holt die `approved` Aktionen aus der Warteschlange, versucht, sie auszuführen, und aktualisiert die Werte in der Datenbank je nach Ausführungsstatus auf `success` oder `failed`. +- Ist eine Aktion erfolgreich, stellt der Worker sicher, dass eine Indizierungsregel vorhanden ist, die dem Agenten mitteilt, wie er die Zuweisung in Zukunft verwalten soll. Dies ist nützlich, wenn manuelle Aktionen durchgeführt werden, während sich der Agent im `auto`- oder `oversight`-Modus befindet. +- Der Indexierer kann die Aktionswarteschlange überwachen, um einen Überblick über die Ausführung von Aktionen zu erhalten und bei Bedarf Aktionen, deren Ausführung fehlgeschlagen ist, erneut zu genehmigen und zu aktualisieren. Die Aktionswarteschlange bietet einen Überblick über alle in der Warteschlange stehenden und ausgeführten Aktionen. 
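Der oben beschriebene Ablauf lässt sich auch direkt über die GraphQL-API des Indexierer-Verwaltungsservers nachvollziehen. Die folgende Abfrage ist nur eine illustrative Skizze: Die konkreten Abfrage-, Feld- und Filternamen sind Annahmen und können je nach Version der Indexierer-Management-API abweichen.

```graphql
# Hypothetische Skizze: Aktionen mit dem Status "queued" auflisten,
# bevor sie genehmigt werden. Feld- und Filternamen sind Annahmen
# und keine verbindliche API-Referenz.
query {
  actions(filter: { status: queued }) {
    id
    type
    deploymentID
    status
  }
}
```

In der Praxis übernimmt die `indexer-cli` (z. B. `graph indexer actions get all`) genau solche Abfragen, so dass ein direkter Zugriff auf die API selten nötig ist.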
-Data model:
+Datenmodell:

```graphql
Type ActionInput {
@@ -657,7 +657,7 @@ ActionType {
}
```

-Example usage from source:
+Verwendungsbeispiel aus dem Quellcode:

```bash
graph indexer actions get all
@@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5
graph indexer actions execute approve
```

-Note that supported action types for allocation management have different input requirements:
+Beachten Sie, dass unterstützte Aktionstypen für das Allokationsmanagement unterschiedliche Eingabeanforderungen haben:

-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - Zuweisung von Einsatz (Stake) zu einem bestimmten Subgraph-Einsatz

-  - required action params:
+  - erforderliche Aktionsparameter:
    - deploymentID
    - amount

-- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere
+- `Unallocate` - Beendigung der Zuweisung, wodurch der Einsatz für eine andere Zuweisung frei wird

-  - required action params:
+  - erforderliche Aktionsparameter:
    - allocationID
    - deploymentID
-  - optional action params:
+  - optionale Aktionsparameter:
    - poi
-    - force (forces using the provided POI even if it doesn’t match what the graph-node provides)
+    - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der Graph Node bereitstellt)

-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - Zuweisung atomar schließen und eine neue Zuweisung für denselben Subgraph-Einsatz öffnen

-  - required action params:
+  - erforderliche Aktionsparameter:
    - allocationID
    - deploymentID
    - amount
-  - optional action params:
+  - optionale Aktionsparameter:
    - poi
-    - force (forces using the provided POI even if it doesn’t match what the graph-node provides)
+    - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der Graph Node bereitstellt)

-#### Cost models
+#### Kostenmodelle
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Kostenmodelle ermöglichen eine dynamische Preisgestaltung für Abfragen auf der Grundlage von Markt- und Abfrageattributen. Der Indexierer-Service teilt für jeden Subgraphen, für den er auf Abfragen antworten möchte, ein Kostenmodell mit den Gateways. Die Gateways wiederum nutzen das Kostenmodell, um pro Abfrage Entscheidungen über die Auswahl der Indexierer zu treffen und die Bezahlung mit den ausgewählten Indexierern auszuhandeln.

#### Agora

-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
+Die Agora-Sprache bietet ein flexibles Format zur Deklaration von Kostenmodellen für Abfragen. Ein Agora-Preismodell ist eine Folge von Anweisungen, die für jede Top-Level-Abfrage in einer GraphQL-Abfrage nacheinander ausgeführt werden. Für jede Top-Level-Abfrage bestimmt die erste Anweisung, die auf sie passt, den Preis für diese Abfrage.

-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT.
+Eine Anweisung besteht aus einem Prädikat, das zum Abgleich von GraphQL-Abfragen verwendet wird, und einem Kostenausdruck, der bei der Auswertung die Kosten in dezimalen GRT ausgibt.
Werte in der benannten Argumentposition einer Abfrage können im Prädikat erfasst und im Ausdruck verwendet werden. Globale Werte können auch gesetzt und durch Platzhalter in einem Ausdruck ersetzt werden. -Example cost model: +Beispielkostenmodell: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Diese Anweisung erfasst den Wert „skip“, +# verwendet einen booleschen Ausdruck im Prädikat, um mit bestimmten Abfragen übereinzustimmen, die `skip` verwenden +# und einen Kostenausdruck, um die Kosten auf der Grundlage des `skip`-Wertes und des globalen SYSTEM_LOAD-Wertes zu berechnen query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Diese Vorgabe passt auf jeden GraphQL-Ausdruck. +# Sie verwendet ein Global, das in den Ausdruck eingesetzt wird, um die Kosten zu berechnen default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Beispiel für eine Abfragekostenberechnung unter Verwendung des obigen Modells: -| Query | Price | +| Abfrage | Preis | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Anwendung des Kostenmodells -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. 
+Kostenmodelle werden über die Indexierer-CLI angewendet, die sie zum Speichern in der Datenbank an die Indexierer-Verwaltungs-API des Indexierer-Agenten übergibt. Der Indexierer-Service holt sie dann ab und stellt die Kostenmodelle den Gateways zur Verfügung, jedes Mal, wenn diese danach fragen.

```sh
indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
indexer cost set model my_model.agora
```

-## Interacting with the network
+## Interaktion mit dem Netzwerk

-### Stake in the protocol
+### Einsatz im Protokoll

-The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
+Die ersten Schritte zur Teilnahme am Netzwerk als Indexierer sind die Genehmigung des Protokolls, der Einsatz von Geldern und (optional) die Einrichtung einer Betreiberadresse für die täglichen Interaktionen mit dem Protokoll.

-> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools).
+> Hinweis: In dieser Anleitung wird Remix für die Interaktion mit dem Vertrag verwendet, aber Sie können auch das Tool Ihrer Wahl verwenden ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) und [MyCrypto](https://www.mycrypto.com/account) sind einige andere bekannte Tools).

-Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network.
+Sobald ein Indexierer GRT im Protokoll hinterlegt (gestaked) hat, können die [Indexierer-Komponenten](/indexing/overview/#indexer-components) gestartet werden und ihre Interaktionen mit dem Netzwerk beginnen.

-#### Approve tokens
+#### Genehmigen Sie Token

-1.
Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser

-2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
+2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **GraphToken.abi** mit dem [Token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).

-3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
+3. Wählen Sie die Datei `GraphToken.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`.

-4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
+4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus.

-5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply.
+5. Legen Sie die GraphToken-Vertragsadresse fest - Fügen Sie die GraphToken-Vertragsadresse (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) neben `At Address` ein und klicken Sie zum Anwenden auf die Schaltfläche `At address`.

-6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei).
+6. Rufen Sie die Funktion `approve(spender, amount)` auf, um den Staking-Vertrag zu genehmigen. Geben Sie in `spender` die Adresse des Staking-Vertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) und in `amount` die zu stakenden Token (in wei) ein.

-#### Stake tokens
+#### Stake-Token

-1.
Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser

-2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI.
+2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **Staking.abi** mit dem Staking-ABI.

-3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
+3. Wählen Sie die Datei `Staking.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`.

-4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
+4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus.

-5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.
+5. Legen Sie die Adresse des Staking-Vertrags fest - Fügen Sie die Adresse des Staking-Vertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) neben `At Address` ein und klicken Sie auf die Schaltfläche `At address`, um sie anzuwenden.

-6. Call `stake()` to stake GRT in the protocol.
+6. Rufen Sie `stake()` auf, um GRT im Protokoll zu hinterlegen (staken).

-7.
(Optional) Indexierer können eine andere Adresse als Operator für ihre Indexierer-Infrastruktur genehmigen, um die Schlüssel, die die Gelder kontrollieren, von denen zu trennen, die alltägliche Aktionen wie die Zuweisung auf Subgraphen und die Bedienung (bezahlter) Abfragen durchführen. Um den Betreiber zu setzen, rufen Sie `setOperator()` mit der Betreiberadresse auf.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCut to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the cooldownBlocks period to 500 blocks.
+8. (Optional) Um die Verteilung von Belohnungen zu kontrollieren und Delegatoren strategisch anzulocken, können Indexierer ihre Delegationsparameter aktualisieren, indem sie ihren `indexingRewardCut` (Teile pro Million), `queryFeeCut` (Teile pro Million) und `cooldownBlocks` (Anzahl der Blöcke) aktualisieren. Dazu rufen Sie `setDelegationParameters()` auf. Das folgende Beispiel stellt den `queryFeeCut` so ein, dass 95% der Abfragerabatte an den Indexierer und 5% an die Delegatoren verteilt werden, stellt den `indexingRewardCut` so ein, dass 60% der Indexierungsbelohnungen an den Indexierer und 40% an die Delegatoren verteilt werden, und setzt die `cooldownBlocks`-Periode auf 500 Blöcke.
```
setDelegationParameters(950000, 600000, 500)
```

-### Setting delegation parameters
+### Einstellung der Delegationsparameter

-The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.
+Die Funktion `setDelegationParameters()` im [Staking Contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) ist für Indexierer von entscheidender Bedeutung, da sie es ihnen ermöglicht, Parameter zu setzen, die ihre Interaktion mit Delegatoren definieren und ihre Reward-Aufteilung und Delegationskapazität beeinflussen.

-### How to set delegation parameters
+### Festlegen der Delegationsparameter

-To set the delegation parameters using Graph Explorer interface, follow these steps:
+Gehen Sie wie folgt vor, um die Delegationsparameter über die Graph-Explorer-Oberfläche einzustellen:

-1. Navigate to [Graph Explorer](https://thegraph.com/explorer/).
-2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One.
-3. Connect the wallet you have as a signer.
-4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage.
-5. Submit the transaction to the network.
+1. Navigieren Sie zu [Graph Explorer](https://thegraph.com/explorer/).
+2. Verbinden Sie Ihre Wallet. Wählen Sie Multisig (z. B. Gnosis Safe) und dann Mainnet aus.
Hinweis: Sie müssen diesen Vorgang für Arbitrum One wiederholen.
+3. Verbinden Sie die Wallet, die Sie als Unterzeichner haben.
+4. Navigieren Sie zum Abschnitt 'Settings' und wählen Sie 'Delegation Parameters'. Diese Parameter sollten so konfiguriert werden, dass eine effektive Kürzung innerhalb des gewünschten Bereichs erreicht wird. Nach Eingabe der Werte in die vorgesehenen Eingabefelder berechnet die Oberfläche automatisch den effektiven Anteil. Passen Sie diese Werte nach Bedarf an, um den gewünschten Prozentsatz der effektiven Kürzung zu erreichen.
+5. Übermitteln Sie die Transaktion an das Netzwerk.

-> Note: This transaction will need to be confirmed by the multisig wallet signers.
+> Hinweis: Diese Transaktion muss von den Unterzeichnern der Multisig-Wallets bestätigt werden.

-### The life of an allocation
+### Die Lebensdauer einer Zuweisung

-After being created by an Indexer a healthy allocation goes through two states.
+Nachdem sie von einem Indexer erstellt wurde, durchläuft eine ordnungsgemäße Zuweisung zwei Zustände.

-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Aktiv** - Sobald eine Zuweisung onchain erstellt wurde ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), wird sie als **aktiv** betrachtet. Ein Teil des eigenen und/oder delegierten Einsatzes des Indexierers wird einem Subgraph-Deployment zugewiesen, was ihm erlaubt, Rewards für die Indizierung zu beanspruchen und Abfragen für dieses Subgraph-Deployment zu bedienen.
Der Indexierer-Agent verwaltet die Erstellung von Zuweisungen basierend auf den Indexierer-Regeln.

-- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
+- **Geschlossen** - Ein Indexierer kann eine Zuweisung schließen, sobald 1 Epoche vergangen ist ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) oder sein Indexierer-Agent schließt die Zuweisung automatisch nach der **maxAllocationEpochs** (derzeit 28 Tage). Wenn eine Zuweisung mit einem gültigen Indizierungsnachweis (POI) geschlossen wird, werden die Rewards für die Indizierung an den Indexierer und seine Delegatoren verteilt ([weitere Informationen](/indexing/overview/#how-are-indexing-rewards-distributed)).

-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexierern wird empfohlen, die Offchain-Synchronisierungsfunktionalität zu nutzen, um Subgraph-Deployments mit dem Chainhead zu synchronisieren, bevor die Zuweisung onchain erstellt wird. Diese Funktion ist besonders nützlich für Subgraphen, bei denen die Synchronisierung länger als 28 Epochen dauert oder die Gefahr eines unbestimmten Fehlers besteht.
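Ergänzend lässt sich die oben beschriebene Umrechnung in Teile pro Million (PPM) skizzieren. Eine unverbindliche Beispielrechnung: Die Werte 95 %, 60 % und 500 Blöcke stammen aus dem Beispiel zu `setDelegationParameters()` weiter oben; die Shell-Variablennamen sind frei gewählt.

```shell
# Skizze: Prozentwerte in die von setDelegationParameters() erwarteten
# PPM-Werte (Teile pro Million) umrechnen.
QUERY_FEE_CUT=$(( 95 * 1000000 / 100 ))        # 95 % -> 950000 ppm
INDEXING_REWARD_CUT=$(( 60 * 1000000 / 100 ))  # 60 % -> 600000 ppm
COOLDOWN_BLOCKS=500

# Gibt den Aufruf mit den berechneten Argumenten aus.
echo "setDelegationParameters($QUERY_FEE_CUT, $INDEXING_REWARD_CUT, $COOLDOWN_BLOCKS)"
# -> setDelegationParameters(950000, 600000, 500)
```

Die Ausgabe entspricht dem im Text gezeigten Beispielaufruf `setDelegationParameters(950000, 600000, 500)`.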
diff --git a/website/src/pages/de/indexing/supported-network-requirements.mdx b/website/src/pages/de/indexing/supported-network-requirements.mdx index 72e36248f68c..f2206088cafe 100644 --- a/website/src/pages/de/indexing/supported-network-requirements.mdx +++ b/website/src/pages/de/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Unterstützte Netzwerkanforderungen --- -| Netzwerk | Guides | Systemanforderungen | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Höhere Taktfrequenz im Vergleich zur Kernanzahl
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Netzwerk | Guides | Systemanforderungen | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Höhere Taktfrequenz im Vergleich zur Kernanzahl
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/de/indexing/tap.mdx b/website/src/pages/de/indexing/tap.mdx index 13fa3c754e0d..8d76412fd28b 100644 --- a/website/src/pages/de/indexing/tap.mdx +++ b/website/src/pages/de/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP-Migrationsleitfaden +title: GraphTally Guide --- -Erfahren Sie mehr über das neue Zahlungssystem von The Graph, **Timeline Aggregation Protocol, TAP**. Dieses System bietet schnelle, effiziente Mikrotransaktionen mit minimiertem Vertrauen. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Überblick -[TAP] (https://docs.rs/tap_core/latest/tap_core/index.html) ist ein direkter Ersatz für das derzeitige Scalar-Zahlungssystem. Es bietet die folgenden Hauptfunktionen: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Effiziente Abwicklung von Mikrozahlungen. - Fügt den Onchain-Transaktionen und -Kosten eine weitere Ebene der Konsolidierung hinzu. - Ermöglicht den Indexern die Kontrolle über Eingänge und Zahlungen und garantiert die Bezahlung von Abfragen. - Es ermöglicht dezentralisierte, vertrauenslose Gateways und verbessert die Leistung des `indexer-service` für mehrere Absender. -## Besonderheiten +### Besonderheiten -TAP ermöglicht es einem Sender, mehrere Zahlungen an einen Empfänger zu leisten, **TAP Receipts**, der diese Zahlungen zu einer einzigen Zahlung zusammenfasst, einem **Receipt Aggregate Voucher**, auch bekannt als **RAV**. Diese aggregierte Zahlung kann dann auf der Blockchain verifiziert werden, wodurch sich die Anzahl der Transaktionen verringert und der Zahlungsvorgang vereinfacht wird. 
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Für jede Abfrage sendet Ihnen das Gateway eine „signierte Quittung“, die in Ihrer Datenbank gespeichert wird. Dann werden diese Abfragen von einem „Tap-Agent“ durch eine Anfrage aggregiert. Anschließend erhalten Sie ein RAV. Sie können ein RAV aktualisieren, indem Sie es mit neueren Quittungen senden, wodurch ein neues RAV mit einem höheren Wert erzeugt wird. @@ -45,28 +45,28 @@ Solange Sie `tap-agent` und `indexer-agent` ausführen, wird alles automatisch a ### Verträge -| Vertrag | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | -| ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP-Prüfer | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | -| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | -| Treuhandkonto | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | +| Vertrag | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | +| -------------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP-Prüfer | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Treuhandkonto | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Gateway -| Komponente | Edge- und Node-Mainnet (Arbitrum-Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | -| 
------------- | --------------------------------------------- | --------------------------------------------- | -| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| Komponente | Edge- und Node-Mainnet (Arbitrum-Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | +| ---------------- | ---------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Anforderungen +### Voraussetzungen -Zusätzlich zu den typischen Anforderungen für den Betrieb eines Indexers benötigen Sie einen `tap-escrow-subgraph`-Endpunkt, um TAP-Aktualisierungen abzufragen. Sie können The Graph Network zur Abfrage verwenden oder sich selbst auf Ihrem `graph-node` hosten. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. 
-- [Graph TAP Arbitrum Sepolia subgraph (für The Graph Testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (für The Graph Mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (für The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (für The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es bei der Bereitstellung von Netzwerk-Subgraphen der Fall ist. Daher müssen Sie ihn manuell indizieren. +> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es beim Einsatz von Subgraphen im Netzwerk der Fall ist. Infolgedessen müssen Sie ihn manuell indizieren. ## Migrationsleitfaden @@ -79,7 +79,7 @@ Die erforderliche Softwareversion finden Sie [hier](https://github.com/graphprot 1. **Indexer-Agent** - Folgen Sie dem [gleichen Prozess](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Geben Sie das neue Argument `--tap-subgraph-endpoint` an, um die neuen TAP-Codepfade zu aktivieren und die Einlösung von TAP-RAVs zu ermöglichen. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer-Service** @@ -104,8 +104,8 @@ Für eine minimale Konfiguration verwenden Sie die folgende Vorlage: # Einige der nachstehenden Konfigurationswerte sind globale Graphnetzwerkwerte, die Sie hier finden können: # # -# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen, -# können Sie sie mit Umgebungsvariablen überschreiben. 
Als Datenbeispiel kann folgendes ersetzt werden
+# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen, können Sie
+# sie mit Umgebungsvariablen überschreiben. Zum Beispiel kann das Folgende ersetzt werden
# durch [PREFIX]_DATABASE_POSTGRESURL, wobei PREFIX `INDEXER_SERVICE` oder `TAP_AGENT` sein kann:
#
# [database]
@@ -116,8 +116,8 @@ indexer_address = "0x1111111111111111111111111111111111111111"
operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane"

[database]
-# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank,
-# die auch vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent`
+# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank,
+# die vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent`
# die notwendigen Tabellen erstellt.
postgres_url = "postgres://postgres@postgres:5432/postgres"

@@ -128,18 +128,18 @@ query_url = ""
status_url = ""

[subgraphs.network]
-# Abfrage-URL für den Graph Network Subgraph.
+# Abfrage-URL für den Graph-Netzwerk-Subgraphen.
query_url = ""
-# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist.
-# Es wird empfohlen, den Subgraphen lokal zu indizieren.
+# Optional, Deployment, nach dem im lokalen `graph-node` gesucht wird, falls lokal indiziert.
+# Die lokale Indizierung des Subgraphen wird empfohlen.
# HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`.
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[subgraphs.escrow]
-# Abfrage-URL für den Subgraphen „Escrow“.
+# Abfrage-URL für den Escrow-Subgraphen.
query_url = ""
-# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist.
-# Es wird empfohlen, den Subgraphen lokal zu indizieren.
+# Optional, Deployment, nach dem im lokalen `graph-node` gesucht wird, falls lokal indiziert.
+# Die lokale Indizierung des Subgraphen wird empfohlen.
# HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`.
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

@@ -153,9 +153,9 @@ receipts_verifier_address = "0x2222222222222222222222222222222222222222"

# Spezifische Konfigurationen für tap-agent #
########################################
[tap]
-# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel,
+# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel,
# wenn der Sender lange genug keine RAVs mehr liefert und die Gebühren diesen Betrag
-# übersteigt, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen
+# übersteigen, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen
# bis die Gebühren aggregiert sind.
# HINWEIS: Verwenden Sie Strings für dezimale Werte, um Rundungsfehler zu vermeiden.
# z.B.:
@@ -164,7 +164,7 @@ max_amount_willing_to_lose_grt = 20

[tap.sender_aggregator_endpoints]
# Key-Value aller Absender und ihrer Aggregator-Endpunkte
-# Das folgende Datenbeispiel gilt für das E&N Testnet-Gateway.
+# Dieses Beispiel gilt für das E&N-Testnetz-Gateway.
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com"
```

diff --git a/website/src/pages/de/indexing/tooling/graph-node.mdx b/website/src/pages/de/indexing/tooling/graph-node.mdx
index ad1242d7c2b7..a800f5f52367 100644
--- a/website/src/pages/de/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/de/indexing/tooling/graph-node.mdx
@@ -1,40 +1,40 @@
---
-title: Graph Node
+title: Graph-Knoten
---

-Graph Node ist die Komponente, die Subgrafen indiziert und die resultierenden Daten zur Abfrage über eine GraphQL-API verfügbar macht.
Als solches ist es für den Indexer-Stack von zentraler Bedeutung, und der korrekte Betrieb des Graph-Knotens ist entscheidend für den Betrieb eines erfolgreichen Indexers.
+Graph Node ist die Komponente, die Subgraphen indiziert und die daraus resultierenden Daten zur Abfrage über eine GraphQL-API bereitstellt. Als solche ist sie ein zentraler Bestandteil des Indexer-Stacks, und der korrekte Betrieb von Graph Node ist entscheidend für den erfolgreichen Betrieb eines Indexers.

-This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
+Dies bietet einen kontextbezogenen Überblick über Graph Node und einige der erweiterten Optionen, die Indexern zur Verfügung stehen. Ausführliche Dokumentation und Anleitungen finden Sie im [Graph Node repository](https://github.com/graphprotocol/graph-node).

-## Graph Node
+## Graph-Knoten

-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) ist die Referenzimplementierung für die Indizierung von Subgraphen auf The Graph Network, die Verbindung zu Blockchain-Clients, die Indizierung von Subgraphen und die Bereitstellung indizierter Daten für Abfragen.

-Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
+Graph Node (und der gesamte Indexer-Stack) kann sowohl auf Bare Metal als auch in einer Cloud-Umgebung betrieben werden. Diese Flexibilität der zentralen Indexer-Komponente ist entscheidend für die Robustheit von The Graph Protocol. Ebenso kann Graph Node [aus dem Quellcode gebaut](https://github.com/graphprotocol/graph-node) werden, oder Indexer können eines der [bereitgestellten Docker Images](https://hub.docker.com/r/graphprotocol/graph-node) verwenden.

### PostgreSQL-Datenbank

-Der Hauptspeicher für den Graph-Knoten, hier werden Subgraf-Daten sowie Metadaten zu Subgrafen und Subgraf-unabhängige Netzwerkdaten wie Block-Cache und eth_call-Cache gespeichert.
+Der Hauptspeicher für den Graph Node. Hier werden die Subgraph-Daten, Metadaten über Subgraphen und Subgraph-agnostische Netzwerkdaten wie der Block-Cache und der eth_call-Cache gespeichert.

### Netzwerk-Clients

-In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.
+Um ein Netzwerk zu indizieren, benötigt Graph Node Zugriff auf einen Netzwerk-Client über eine EVM-kompatible JSON-RPC-API. Dieser RPC kann sich mit einem einzelnen Client verbinden oder es könnte sich um ein komplexeres Setup handeln, das die Last auf mehrere verteilt.

-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+Während einige Subgraphen nur einen vollständigen Knoten benötigen, haben andere möglicherweise Indizierungsfunktionen, die zusätzliche RPC-Funktionalität erfordern. Insbesondere Subgraphen, die `eth_calls` als Teil der Indizierung machen, benötigen einen Archivknoten, der [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) unterstützt, und Subgraphen mit `callHandlers` oder `blockHandlers` mit einem `call`-Filter benötigen `trace_filter`-Unterstützung ([siehe Trace-Modul-Dokumentation hier](https://openethereum.github.io/JSONRPC-trace-module)).

-**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
+**Network Firehoses** - ein Firehose ist ein gRPC-Dienst, der einen geordneten, aber Fork-bewussten Strom von Blöcken bereitstellt, der von den Kernentwicklern von The Graph entwickelt wurde, um eine performante Indexierung in großem Umfang zu unterstützen. Dies ist derzeit keine Voraussetzung für Indexer, aber Indexer werden ermutigt, sich mit dieser Technologie vertraut zu machen, bevor die volle Netzwerkunterstützung zur Verfügung steht. Erfahren Sie mehr über den Firehose [hier](https://firehose.streamingfast.io/).

### IPFS-Knoten

-Subgraf-Bereitstellungsmetadaten werden im IPFS-Netzwerk gespeichert. Der Graph-Knoten greift hauptsächlich während der Subgraf-Bereitstellung auf den IPFS-Knoten zu, um das Subgraf-Manifest und alle verknüpften Dateien abzurufen. Netzwerk-Indexierer müssen keinen eigenen IPFS-Knoten hosten, ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet.
+Die Metadaten für Subgraph-Deployments werden im IPFS-Netzwerk gespeichert.
Der Graph Node greift während des Subgraph-Deployments primär auf den IPFS-Knoten zu, um das Subgraphen-Manifest und alle verknüpften Dateien abzurufen. Netzwerkindizierer müssen keinen eigenen IPFS-Knoten hosten. Ein IPFS-Knoten für das Netzwerk wird auf https://ipfs.network.thegraph.com gehostet.

### Prometheus-Metrikserver

Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional Metriken auf einem Prometheus-Metrikserver protokollieren.

-### Getting started from source
+### Einstieg über den Quellcode

-#### Install prerequisites
+#### Installieren Sie die Voraussetzungen

- **Rust**

@@ -42,15 +42,15 @@ Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional

- **IPFS**

-- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed.
+- **Zusätzliche Anforderungen für Ubuntu-Benutzer** - Um einen Graph Node unter Ubuntu zu betreiben, sind möglicherweise einige zusätzliche Pakete erforderlich.

```sh
-sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
+sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```

-#### Setup
+#### Konfiguration

-1. Start a PostgreSQL database server
+1. Starten Sie einen PostgreSQL-Datenbankserver

```sh
initdb -D .postgres
@@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start
createdb graph-node
```

-2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build`
+2. Klonen Sie das [Graph Node](https://github.com/graphprotocol/graph-node)-Repo und bauen Sie den Quellcode durch Ausführen von `cargo build`

-3. Now that all the dependencies are setup, start the Graph Node:
+3.
Nachdem alle Abhängigkeiten eingerichtet sind, starten Sie den Graph Node:

```sh
cargo run -p graph-node --release -- \
@@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \

### Erste Schritte mit Kubernetes

-A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s).
+Eine vollständige Kubernetes-Beispielkonfiguration ist im [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s) zu finden.

### Ports

Wenn es ausgeführt wird, stellt Graph Node die folgenden Ports zur Verfügung:

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ------------------------------------------------ | ---------------------------------------------- | ------------------ | ----------------- | +| 8000 | GraphQL HTTP Server
(für Subgraph-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(für Subgraphen-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | API für den Subgraph-Indizierungsstatus | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |

-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den JSON-RPC-Endpunkt von Graph Node.

## Erweiterte Graph-Knoten-Konfiguration

-In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die von den zu indizierenden Subgrafen benötigt werden.
+In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die für die zu indizierenden Subgraphen erforderlich sind.

-This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
+Dieses Setup kann horizontal skaliert werden, indem mehrere Graph Nodes und mehrere Datenbanken zur Unterstützung dieser Graph Nodes hinzugefügt werden. Fortgeschrittene Benutzer möchten vielleicht einige der horizontalen Skalierungsmöglichkeiten von Graph Node sowie einige der erweiterten Konfigurationsoptionen über die Datei `config.toml` und die Umgebungsvariablen von Graph Node nutzen.
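Zur Veranschaulichung der oben aufgeführten Ports: Der Indizierungsstatus-Endpunkt auf Port 8030 lässt sich etwa so abfragen (Skizze; der Hostname ist ein Platzhalter, die Felder `subgraph`, `synced` und `health` stammen aus dem Schema der Indizierungsstatus-API):

```sh
# Skizze: Indizierungsstatus-API (Port 8030) per GraphQL-POST abfragen
curl -s -X POST http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'
```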
### `config.toml` -A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch. +Eine [TOML](https://toml.io/en/)-Konfigurationsdatei kann verwendet werden, um komplexere Konfigurationen als die in der Befehlszeile angezeigten festzulegen. Der Speicherort der Datei wird mit dem Befehlszeilenschalter --config übergeben. > Bei Verwendung einer Konfigurationsdatei ist es nicht möglich, die Optionen --postgres-url, --postgres-secondary-hosts und --postgres-host-weights zu verwenden. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Eine minimale `config.toml`-Datei kann angegeben werden; die folgende Datei entspricht der Verwendung der Befehlszeilenoption --postgres-url: ```toml [store] @@ -110,47 +110,47 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +Eine vollständige Dokumentation von `config.toml` findet sich in den [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). #### Mehrere Graph-Knoten -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). 
+Die Indizierung von Graph Node kann horizontal skaliert werden, indem mehrere Instanzen von Graph Node ausgeführt werden, um die Indizierung und Abfrage auf verschiedene Knoten aufzuteilen. Dies kann einfach durch die Ausführung von Graph Nodes erfolgen, die beim Start mit einer anderen `node_id` konfiguriert werden (z. B. in der Docker Compose-Datei). Diese kann dann in der Datei `config.toml` verwendet werden, um [dedizierte Abfrageknoten](#dedicated-query-nodes), [Block-Ingestoren](#dedicated-block-ingestion) und die Aufteilung von Subgraphen über Knoten mit [Einsatzregeln](#deployment-rules) zu spezifizieren.

> Beachten Sie, dass mehrere Graph-Knoten so konfiguriert werden können, dass sie dieselbe Datenbank verwenden, die ihrerseits durch Sharding horizontal skaliert werden kann.

#### Bereitstellungsregeln

-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Bei mehreren Graph-Knoten ist es notwendig, den Einsatz von neuen Subgraphen zu verwalten, damit derselbe Subgraph nicht von zwei verschiedenen Knoten indiziert wird, was zu Kollisionen führen würde. Dies kann durch die Verwendung von Einsatzregeln geschehen, die auch angeben können, in welchem `shard` die Daten eines Subgraphen gespeichert werden sollen, wenn ein Datenbank-Sharding verwendet wird. Einsatzregeln können den Namen des Subgraphen und das Netzwerk, das der Einsatz indiziert, abgleichen, um eine Entscheidung zu treffen.
-Beispielkonfiguration für Bereitstellungsregeln:
+Beispielkonfiguration für Einsatzregeln:

```toml
[deployment]
[[deployment.rule]]
-match = { name = "(vip|important)/.*" }
-shard = "vip"
-indexers = [ "index_node_vip_0", "index_node_vip_1" ]
+match = { name = "(vip|important)/.*" }
+shard = "vip"
+indexers = [ "index_node_vip_0", "index_node_vip_1" ]
[[deployment.rule]]
-match = { network = "kovan" }
-# No shard, so we use the default shard called 'primary'
-indexers = [ "index_node_kovan_0" ]
+match = { network = "kovan" }
+# Kein Shard, also verwenden wir den Standard-Shard namens 'primary'
+indexers = [ "index_node_kovan_0" ]
[[deployment.rule]]
-match = { network = [ "xdai", "poa-core" ] }
-indexers = [ "index_node_other_0" ]
+match = { network = [ "xdai", "poa-core" ] }
+indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
-shards = [ "sharda", "shardb" ]
+# Es gibt kein 'match', also passt jeder Subgraph
+shards = [ "sharda", "shardb" ]
indexers = [
-    "index_node_community_0",
-    "index_node_community_1",
-    "index_node_community_2",
-    "index_node_community_3",
-    "index_node_community_4",
-    "index_node_community_5"
+    "index_node_community_0",
+    "index_node_community_1",
+    "index_node_community_2",
+    "index_node_community_3",
+    "index_node_community_4",
+    "index_node_community_5"
]
```

-Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment).
+Lesen Sie mehr über die Einsatzregeln [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment).

#### Dedizierte Abfrageknoten

@@ -167,11 +167,11 @@ Jeder Knoten, dessen --node-id mit dem regulären Ausdruck übereinstimmt, wird

Für die meisten Anwendungsfälle reicht eine einzelne Postgres-Datenbank aus, um eine Graph-Node-Instanz zu unterstützen.
Wenn eine Graph-Node-Instanz aus einer einzelnen Postgres-Datenbank herauswächst, ist es möglich, die Speicherung der Daten des Graph-Nodes auf mehrere Postgres-Datenbanken aufzuteilen. Alle Datenbanken zusammen bilden den Speicher der Graph-Node-Instanz. Jede einzelne Datenbank wird als Shard bezeichnet.

-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Mit Shards lassen sich Subgraph-Einsätze auf mehrere Datenbanken aufteilen; über Replikate kann zudem die Abfragelast auf die Datenbanken verteilt werden. Dazu gehört auch die Konfiguration der Anzahl der verfügbaren Datenbankverbindungen, die jeder `graph-node` in seinem Verbindungspool für jede Datenbank vorhalten soll, was zunehmend wichtiger wird, je mehr Subgraphen indiziert werden.

Sharding wird nützlich, wenn Ihre vorhandene Datenbank nicht mit der Last Schritt halten kann, die Graph Node ihr auferlegt, und wenn es nicht mehr möglich ist, die Datenbankgröße zu erhöhen.

-> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs.
+> Im Allgemeinen ist es besser, eine einzelne Datenbank so groß wie möglich zu machen, bevor man mit Shards beginnt. 
Eine Ausnahme ist, wenn der Abfrageverkehr sehr ungleichmäßig auf die Subgraphen verteilt ist; in solchen Situationen kann es sehr hilfreich sein, wenn die hochvolumigen Subgraphen in einem Shard und alles andere in einem anderen aufbewahrt wird, weil es dann wahrscheinlicher ist, dass die Daten für die hochvolumigen Subgraphen im db-internen Cache verbleiben und nicht durch Daten ersetzt werden, die von den niedrigvolumigen Subgraphen nicht so häufig benötigt werden. Was das Konfigurieren von Verbindungen betrifft, beginnen Sie mit max_connections in postgresql.conf, das auf 400 (oder vielleicht sogar 200) eingestellt ist, und sehen Sie sich die Prometheus-Metriken store_connection_wait_time_ms und store_connection_checkout_count an. Spürbare Wartezeiten (alles über 5 ms) sind ein Hinweis darauf, dass zu wenige Verbindungen verfügbar sind; hohe Wartezeiten werden auch dadurch verursacht, dass die Datenbank sehr ausgelastet ist (z. B. hohe CPU-Last). Wenn die Datenbank jedoch ansonsten stabil erscheint, weisen hohe Wartezeiten darauf hin, dass die Anzahl der Verbindungen erhöht werden muss. In der Konfiguration ist die Anzahl der Verbindungen, die jede Graph-Knoten-Instanz verwenden kann, eine Obergrenze, und der Graph-Knoten hält Verbindungen nicht offen, wenn er sie nicht benötigt. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Unterstützung mehrerer Netzwerke -Das Graph-Protokoll erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer verarbeiten möchte. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von: +Das Graph Protocol erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer gerne verarbeiten würde. 
Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von:

- Mehrere Netzwerke
- Mehrere Anbieter pro Netzwerk (dies kann eine Aufteilung der Last auf Anbieter ermöglichen und kann auch die Konfiguration von vollständigen Knoten sowie Archivknoten ermöglichen, wobei Graph Node günstigere Anbieter bevorzugt, wenn eine bestimmte Arbeitslast dies zulässt).

@@ -223,13 +223,13 @@ Benutzer, die ein skaliertes Indizierungs-Setup mit erweiterter Konfiguration be

- Das Indexer-Repository hat eine [Beispiel-Kubernetes-Referenz](https://github.com/graphprotocol/indexer/tree/main/k8s)
- [Launchpad](https://docs.graphops.xyz/launchpad/intro) ist ein Toolkit für den Betrieb eines Graph Protocol Indexer auf Kubernetes, das von GraphOps gepflegt wird. Es bietet eine Reihe von Helm-Charts und eine CLI zur Verwaltung eines Graph-Node-Deployments.

-### Managing Graph Node
+### Verwaltung von Graph Node

-Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs.
+Bei einem laufenden Graph Node (oder Graph Nodes!) besteht die Herausforderung darin, die eingesetzten Subgraphen über diese Nodes hinweg zu verwalten. Graph Node bietet eine Reihe von Tools, die bei der Verwaltung von Subgraphen helfen.

#### Protokollierung

-Die Protokolle von Graph Node können nützliche Informationen für die Debuggen und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: Fehler, Warnung, Info, Debug oder Trace.
+Die Protokolle von Graph Node können nützliche Informationen zur Fehlersuche und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: error, warn, info, debug oder trace.
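Eine minimale Skizze, wie sich die Log-Ebene vor dem Start von Graph Node setzen lässt (der Wert `debug` ist hier nur ein Beispiel):

```sh
# Log-Ebene für Graph Node über die Umgebungsvariable GRAPH_LOG setzen
export GRAPH_LOG=debug
```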
Wenn Sie außerdem `GRAPH_LOG_QUERY_TIMING` auf `gql` setzen, erhalten Sie mehr Details darüber, wie GraphQL-Abfragen ausgeführt werden (allerdings wird dadurch eine große Menge an Protokollen erzeugt).

@@ -247,86 +247,86 @@ Der Befehl graphman ist in den offiziellen Containern enthalten, und Sie können

Eine vollständige Dokumentation der `graphman`-Befehle ist im Graph Node Repository verfügbar. Siehe [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) im Graph Node `/docs`

-### Working with subgraphs
+### Arbeiten mit Subgraphen

#### Indizierungsstatus-API

-Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more.
+Die API für den Indizierungsstatus ist standardmäßig an Port 8030/graphql verfügbar und bietet eine Reihe von Methoden zur Überprüfung des Indizierungsstatus für verschiedene Subgraphen, zur Überprüfung von Indizierungsnachweisen, zur Inspektion von Subgraphen-Features und mehr.

Das vollständige Schema ist [hier](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) verfügbar.

-#### Indexing performance
+#### Indizierungsleistung

-There are three separate parts of the indexing process:
+Es gibt drei separate Teile des Indizierungsprozesses:

-- Fetching events of interest from the provider
-- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
-- Writing the resulting data to the store
+- Abrufen von interessanten Ereignissen vom Anbieter
+- Verarbeiten von Ereignissen in der Reihenfolge mit den entsprechenden Handlern (dies kann das Aufrufen der Kette für den Zustand und das Abrufen von Daten aus dem Speicher beinhalten)
+- Schreiben der Ergebnisdaten in den Speicher

-These stages are pipelined (i.e. 
they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph.
+Diese Phasen sind in einer Pipeline angeordnet (d.h. sie können parallel ausgeführt werden), aber sie sind voneinander abhängig. Wenn die Indizierung von Subgraphen langsam ist, hängt die Ursache dafür von dem jeweiligen Subgraphen ab.

-Common causes of indexing slowness:
+Häufige Ursachen für eine langsame Indizierung:

- Zeit, die benötigt wird, um relevante Ereignisse aus der Kette zu finden (insbesondere Call-Handler können langsam sein, da sie auf `trace_filter` angewiesen sind)
- Durchführen einer großen Anzahl von „eth_calls“ als Teil von Handlern
-- A large amount of store interaction during execution
-- A large amount of data to save to the store
-- A large number of events to process
-- Slow database connection time, for crowded nodes
-- The provider itself falling behind the chain head
-- Slowness in fetching new receipts at the chain head from the provider
+- Eine große Anzahl von Store-Interaktionen während der Ausführung
+- Eine große Datenmenge, die im Store gespeichert werden muss
+- Eine große Anzahl von Ereignissen, die verarbeitet werden müssen
+- Langsame Datenbankverbindungen bei stark ausgelasteten Knoten
+- Der Anbieter selbst bleibt hinter dem Kettenkopf zurück
+- Langsames Abrufen neuer Receipts am Kettenkopf vom Anbieter

-Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+Metriken zur Indizierung von Subgraphen können dabei helfen, die Ursache für die Langsamkeit der Indizierung zu ermitteln. 
In einigen Fällen liegt das Problem am Subgraph selbst, in anderen Fällen können verbesserte Netzwerkanbieter, geringere Datenbankkonflikte und andere Konfigurationsverbesserungen die Indizierungsleistung deutlich verbessern. -#### Failed subgraphs +#### Fehlerhafte Subgraphen -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +Während der Indizierung können Subgraphen fehlschlagen, wenn sie auf unerwartete Daten stoßen, wenn eine Komponente nicht wie erwartet funktioniert oder wenn es einen Fehler in den Event-Handlern oder der Konfiguration gibt. Es gibt zwei allgemeine Arten von Fehlern: -- Deterministic failures: these are failures which will not be resolved with retries -- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. +- Deterministische Fehler: Dies sind Fehler, die nicht durch Wiederholungsversuche behoben werden können +- Nicht deterministische Fehler: Diese können auf Probleme mit dem Anbieter oder auf einen unerwarteten Graph-Knoten-Fehler zurückzuführen sein. Wenn ein nicht deterministischer Fehler auftritt, versucht Graph Node die fehlgeschlagenen Handler erneut und nimmt im Laufe der Zeit einen Rückzieher. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In einigen Fällen kann ein Fehler durch den Indexer behoben werden (z. B. 
wenn der Fehler darauf zurückzuführen ist, dass nicht die richtige Art von Anbieter vorhanden ist, kann durch Hinzufügen des erforderlichen Anbieters die Indizierung fortgesetzt werden). In anderen Fällen ist jedoch eine Änderung des Subgraph-Codes erforderlich.

-> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraph gelingen kann, „auszufallen“ und die Indizierung fortzusetzen. In einigen Fällen ist das nicht-deterministische Label falsch und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden.
+> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraphen gelingen kann, „nicht zu versagen“ und die Indizierung fortzusetzen. In einigen Fällen ist die nicht-deterministische Kennzeichnung falsch, und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden.

-#### Block and call cache
+#### Block- und Call-Cache

-Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert). Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines geringfügig veränderten Subgraphen drastisch erhöhen.
+Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert). 
Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines leicht geänderten Subgraphen drastisch erhöhen.

-Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen.
+Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen.

-If a block cache inconsistency is suspected, such as a tx receipt missing event:
+Wenn eine Block-Cache-Inkonsistenz vermutet wird, z. B. weil in einem Transaktions-Receipt ein Ereignis fehlt:

1. `graphman chain list`, um den Namen der Kette zu finden.
2. `graphman chain check-blocks <chain> by-number <number>` prüft, ob der zwischengespeicherte Block mit dem Anbieter übereinstimmt, und löscht den Block aus dem Cache, wenn dies nicht der Fall ist.
   1. Wenn es einen Unterschied gibt, kann es sicherer sein, den gesamten Cache mit `graphman chain truncate <chain>` abzuschneiden.
-   2. If the block matches the provider, then the issue can be debugged directly against the provider.
+   2. Wenn der Block mit dem Anbieter übereinstimmt, kann das Problem direkt beim Anbieter gedebuggt werden.

-#### Querying issues and errors
+#### Abfragen von Problemen und Fehlern

-Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. 
If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Sobald ein Subgraph indiziert wurde, können Indexierer erwarten, dass Abfragen über den dedizierten Abfrageendpunkt des Subgraphen bedient werden. Wenn der Indexer hofft, ein erhebliches Abfragevolumen zu bedienen, wird ein dedizierter Abfrageknoten empfohlen. Im Falle eines sehr hohen Abfragevolumens möchten Indexer möglicherweise Replikatshards konfigurieren, damit Abfragen den Indexierungsprozess nicht beeinträchtigen. -However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. +Aber selbst mit einem dedizierten Abfrageknoten und Replikaten kann die Ausführung bestimmter Abfragen lange dauern und in einigen Fällen die Speichernutzung erhöhen und die Abfragezeit für andere Benutzer negativ beeinflussen. -There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries. +Es gibt nicht die eine Wunderwaffe, sondern eine Reihe von Tools zur Vorbeugung, Diagnose und Behandlung langsamer Abfragen. -##### Query caching +##### Abfrage-Caching Graph Node zwischenspeichert GraphQL-Abfragen standardmäßig, was die Datenbanklast erheblich reduzieren kann. Dies kann mit den Einstellungen `GRAPH_QUERY_CACHE_BLOCKS` und `GRAPH_QUERY_CACHE_MAX_MEM` weiter konfiguriert werden - lesen Sie mehr [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching). -##### Analysing queries +##### Analysieren von Abfragen -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. 
In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible.
+Problematische Abfragen treten meist auf zwei Arten auf. In einigen Fällen melden die Benutzer selbst, dass eine bestimmte Abfrage langsam ist. In diesem Fall besteht die Herausforderung darin, den Grund für die Langsamkeit zu diagnostizieren - ob es sich um ein allgemeines Problem oder um ein spezifisches Problem für diesen Subgraphen oder diese Abfrage handelt. Und dann natürlich, wenn möglich, das Problem zu beheben.

-In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
+In anderen Fällen kann der Auslöser eine hohe Speicherauslastung auf einem Abfrageknoten sein. In diesem Fall besteht die Herausforderung darin, zuerst die Abfrage zu identifizieren, die das Problem verursacht.

Indexer können [qlog](https://github.com/graphprotocol/qlog/) verwenden, um die Abfrageprotokolle von Graph Node zu verarbeiten und zusammenzufassen. `GRAPH_LOG_QUERY_TIMING` kann auch aktiviert werden, um langsame Abfragen zu identifizieren und zu debuggen.

-Given a slow query, indexers have a few options. Of course they can alter their cost model, to significantly increase the cost of sending the problematic query. This may result in a reduction in the frequency of that query. However this often doesn't resolve the root cause of the issue.
+Bei einer langsamen Abfrage haben Indexer einige Optionen. Natürlich können sie ihr Kostenmodell ändern, um die Kosten für das Senden der problematischen Abfrage erheblich zu erhöhen. Dies kann zu einer Verringerung der Häufigkeit dieser Abfrage führen. Dies behebt jedoch häufig nicht die Ursache des Problems. 
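Das oben erwähnte `GRAPH_LOG_QUERY_TIMING` lässt sich vor dem Start von Graph Node z. B. so aktivieren (Skizze; erzeugt wie beschrieben eine große Menge an Protokollen):

```sh
# Query-Timing-Protokollierung zur Analyse langsamer GraphQL-Abfragen aktivieren
export GRAPH_LOG_QUERY_TIMING=gql
```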
-##### Account-like optimisation +##### Account-ähnliche Optimierung -Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like' where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions) +Datenbanktabellen, die Entitäten speichern, scheinen im Allgemeinen in zwei Varianten zu existieren: „transaktionsähnlich“, bei denen Entitäten, sobald sie erstellt wurden, nie aktualisiert werden, d. h. sie speichern so etwas wie eine Liste von Finanztransaktionen, und „kontoähnlich“, bei denen Entitäten sehr oft aktualisiert werden, d. h. sie speichern so etwas wie Finanzkonten, die jedes Mal geändert werden, wenn eine Transaktion aufgezeichnet wird. Kontenähnliche Tabellen zeichnen sich dadurch aus, dass sie eine große Anzahl von Entitätsversionen, aber relativ wenige eindeutige Entitäten enthalten. In solchen Tabellen beträgt die Anzahl der unterschiedlichen Entitäten häufig 1 % der Gesamtzahl der Zeilen (Entitätsversionen). Für kontoähnliche Tabellen kann `graph-node` Abfragen generieren, die sich die Details zunutze machen, wie Postgres Daten mit einer so hohen Änderungsrate speichert, nämlich dass alle Versionen für die jüngsten Blöcke in einem kleinen Teil des Gesamtspeichers für eine solche Tabelle liegen. @@ -336,10 +336,10 @@ Im Allgemeinen sind Tabellen, bei denen die Anzahl der unterschiedlichen Entitä Sobald eine Tabelle als „kontoähnlich“ eingestuft wurde, wird durch die Ausführung von `graphman stats account-like .
` die kontoähnliche Optimierung für Abfragen auf diese Tabelle aktiviert. Die Optimierung kann mit `graphman stats account-like --clear .
` wieder ausgeschaltet werden. Es dauert bis zu 5 Minuten, bis die Abfrageknoten merken, dass die Optimierung ein- oder ausgeschaltet wurde. Nach dem Einschalten der Optimierung muss überprüft werden, ob die Abfragen für diese Tabelle durch die Änderung nicht tatsächlich langsamer werden. Wenn Sie Grafana für die Überwachung von Postgres konfiguriert haben, würden langsame Abfragen in `pg_stat_activity` in großer Zahl angezeigt werden und mehrere Sekunden dauern. In diesem Fall muss die Optimierung wieder abgeschaltet werden.

-Bei Uniswap-ähnlichen Subgraphen sind die `pair`- und `token`-Tabellen die Hauptkandidaten für diese Optimierung und können die Datenbankauslastung erheblich beeinflussen.
+Bei Uniswap-ähnlichen Subgraphen sind die `pair`- und `token`-Tabellen die Hauptkandidaten für diese Optimierung, die die Datenbanklast erheblich senken kann.

-#### Removing subgraphs
+#### Entfernen von Subgraphen

> This is new functionality, which will be available in Graph Node 0.29.x

-Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraph entfernen. Das kann einfach mit `graphman drop` gemacht werden, das einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Subgraph-Name, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar.
+Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraphen entfernen. Dies kann einfach mit `graphman drop` geschehen, welches einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Name eines Subgraphen, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar. 
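Das beschriebene Entfernen könnte in der Praxis etwa so aussehen (Skizze; der IPFS-Hash und der Pfad zur Konfigurationsdatei sind hypothetische Platzhalter):

```sh
# Skizze: Einsatz samt aller indizierten Daten löschen
graphman --config /etc/graph-node/config.toml drop QmBeispielDeploymentHash
```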
diff --git a/website/src/pages/de/resources/_meta-titles.json b/website/src/pages/de/resources/_meta-titles.json index f5971e95a8f6..5ef7fded48f6 100644 --- a/website/src/pages/de/resources/_meta-titles.json +++ b/website/src/pages/de/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "Zusätzliche Rollen", + "migration-guides": "Leitfäden zur Migration" } diff --git a/website/src/pages/de/resources/benefits.mdx b/website/src/pages/de/resources/benefits.mdx index 24c816c0784e..3835f43d4f7e 100644 --- a/website/src/pages/de/resources/benefits.mdx +++ b/website/src/pages/de/resources/benefits.mdx @@ -27,54 +27,53 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit ## Benutzer mit geringem Volumen (weniger als 100.000 Abfragen pro Monat) -| Kostenvergleich | Selbst gehostet | The Graph Network | -| :-: | :-: | :-: | -| Monatliche Serverkosten\* | $350 pro Monat | $0 | -| Abfragekosten | $0+ | $0 pro Monat | -| Entwicklungszeit | $400 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | -| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | 100.000 (kostenloser Plan) | -| Kosten pro Abfrage | $0 | $0 | -| Infrastructure | Zentralisiert | Dezentralisiert | -| Geografische Redundanz | $750+ pro zusätzlichem Knoten | Eingeschlossen | -| Betriebszeit | Variiert | 99.9%+ | -| Monatliche Gesamtkosten | $750+ | $0 | +| Kostenvergleich | Selbst gehostet | The Graph Network | +| :--------------------------: | :---------------------------------------: | :-------------------------------------------------------------: | +| Monatliche Serverkosten\* | $350 pro Monat | $0 | +| Abfragekosten | $0+ | $0 pro Monat | +| Entwicklungszeit | $400 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | +| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | 100.000 (kostenloser Plan) | +| Kosten pro Abfrage | $0 | 
$0 | +| Infrastruktur | Zentralisiert | Dezentralisiert | +| Geografische Redundanz | $750+ pro zusätzlichem Knoten | Eingeschlossen | +| Betriebszeit | Variiert | 99.9%+ | +| Monatliche Gesamtkosten | $750+ | $0 | ## Benutzer mit mittlerem Volumen (~3M Abfragen pro Monat) -| Kostenvergleich | Selbst gehostet | The Graph Network | -| :-: | :-: | :-: | -| Monatliche Serverkosten\* | $350 pro Monat | $0 | -| Abfragekosten | $500 pro Monat | $120 pro Monat | -| Entwicklungszeit | $800 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | -| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~3,000,000 | -| Kosten pro Abfrage | $0 | $0.00004 | -| Infrastructure | Zentralisiert | Dezentralisiert | -| Engineering-Kosten | $200 pro Stunde | Eingeschlossen | -| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | -| Betriebszeit | Variiert | 99.9%+ | -| Monatliche Gesamtkosten | $1.650+ | $120 | +| Kostenvergleich | Selbst gehostet | The Graph Network | +| :--------------------------: | :-----------------------------------------: | :-------------------------------------------------------------: | +| Monatliche Serverkosten\* | $350 pro Monat | $0 | +| Abfragekosten | $500 pro Monat | $120 pro Monat | +| Entwicklungszeit | $800 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | +| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~3,000,000 | +| Kosten pro Abfrage | $0 | $0.00004 | +| Infrastruktur | Zentralisiert | Dezentralisiert | +| Engineering-Kosten | $200 pro Stunde | Eingeschlossen | +| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | +| Betriebszeit | Variiert | 99.9%+ | +| Monatliche Gesamtkosten | $1.650+ | $120 | ## Benutzer mit hohem Volumen (~30M Abfragen pro Monat) -| Kostenvergleich | Selbst gehostet | The Graph Network | -| :-: | :-: | :-: | -| Monatliche Serverkosten\* | $1100 pro Monat, pro Knoten | $0 | 
-| Abfragekosten | $4000 | $1,200 pro Monat | -| Anzahl der benötigten Knoten | 10 | Nicht anwendbar | -| Entwicklungszeit | $6,000 oder mehr pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | -| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~30,000,000 | -| Kosten pro Abfrage | $0 | $0.00004 | -| Infrastructure | Zentralisiert | Dezentralisiert | -| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | -| Betriebszeit | Variiert | 99.9%+ | -| Monatliche Gesamtkosten | $11,000+ | $1,200 | +| Kostenvergleich | Selbst gehostet | The Graph Network | +| :--------------------------: | :-----------------------------------------: | :-------------------------------------------------------------: | +| Monatliche Serverkosten\* | $1100 pro Monat, pro Knoten | $0 | +| Abfragekosten | $4000 | $1,200 pro Monat | +| Anzahl der benötigten Knoten | 10 | Nicht anwendbar | +| Entwicklungszeit | $6,000 oder mehr pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern | +| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~30,000,000 | +| Kosten pro Abfrage | $0 | $0.00004 | +| Infrastruktur | Zentralisiert | Dezentralisiert | +| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen | +| Betriebszeit | Variiert | 99.9%+ | +| Monatliche Gesamtkosten | $11,000+ | $1,200 | \*einschließlich der Kosten für die Datensicherung: $50-$100 pro Monat Engineering-Zeit auf der Grundlage von 200 $ pro Stunde angenommen -Reflektiert die Kosten für den Datenkonsumenten. Für Abfragen im Rahmen des „Free Plan“ werden nach wie vor -Abfragegebühren an Indexer gezahlt. +Reflektiert die Kosten für den Datenkonsumenten. Für Abfragen im Rahmen des „Free Plan“ werden nach wie vor Abfragegebühren an Indexer gezahlt. 
Die geschätzten Kosten gelten nur für Ethereum Mainnet Subgraphen - die Kosten sind noch höher, wenn man selbst einen `graph-node` in anderen Netzwerken hostet. Einige Nutzer müssen ihren Subgraphen möglicherweise auf eine neue Version aktualisieren. Aufgrund der Ethereum-Gas-Gebühren kostet ein Update zum Zeitpunkt des Schreibens ~$50. Beachten Sie, dass die Gasgebühren auf [Arbitrum](/archived/arbitrum/arbitrum-faq/) wesentlich niedriger sind als im Ethereum Mainnet. @@ -90,4 +89,4 @@ Das dezentralisierte Netzwerk von The Graph bietet den Nutzern Zugang zu einer g Unterm Strich: Das The Graph Network ist kostengünstiger, einfacher zu benutzen und liefert bessere Ergebnisse als ein lokaler `graph-node`. -Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphут im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/). +Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphen im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/). diff --git a/website/src/pages/de/resources/glossary.mdx b/website/src/pages/de/resources/glossary.mdx index ffcd4bca2eed..921c1f6225ae 100644 --- a/website/src/pages/de/resources/glossary.mdx +++ b/website/src/pages/de/resources/glossary.mdx @@ -1,83 +1,83 @@ --- -title: Glossary +title: Glossar --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: Ein dezentrales Protokoll zur Indizierung und Abfrage von Daten. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Abfrage**: Eine Anfrage nach Daten. Im Fall von The Graph ist eine Abfrage eine Anfrage nach Daten aus einem Subgraphen, die von einem Indexierer beantwortet wird. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. 
The Graph uses GraphQL to query subgraphs. +- **GraphQL**: Eine Abfragesprache für APIs und eine Laufzeitumgebung, um diese Abfragen mit Ihren vorhandenen Daten zu erfüllen. The Graph verwendet GraphQL, um Subgraphen abzufragen. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpunkt**: Eine URL, die zur Abfrage eines Subgraphen verwendet werden kann. Der Test-Endpunkt für Subgraph Studio ist `https://api.studio.thegraph.com/query///` und der Graph Explorer Endpunkt ist `https://gateway.thegraph.com/api//subgraphs/id/`. Der The Graph Explorer Endpunkt wird verwendet, um Subgraphen im dezentralen Netzwerk von The Graph abzufragen. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: Eine offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können. Entwickler können einen Subgraphen erstellen, bereitstellen und auf The Graph Network veröffentlichen. Sobald der Subgraph indiziert ist, kann er von jedem abgefragt werden. -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexierer**: Netzwerkteilnehmer, die Indexierungsknoten betreiben, um Daten aus Blockchains zu indexieren und GraphQL-Abfragen zu bedienen. -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. 
+- **Einkommensströme für Indexierer**: Indexierer werden in GRT mit zwei Komponenten belohnt: Rabatte auf Abfragegebühren und Rewards für die Indizierung.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Abfragegebühren-Rabatte**: Zahlungen von Subgraph-Konsumenten für die Bedienung von Anfragen im Netz.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indizierungs-Rewards**: Die Rewards, die Indexierer für die Indizierung von Subgraphen erhalten. Indizierungs-Rewards werden durch die jährliche Neuausgabe von 3% GRT generiert.
-- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
+- **Selbstbeteiligung der Indexierer**: Der Betrag an GRT, den Indexierer einsetzen, um am dezentralen Netzwerk teilzunehmen. Das Minimum beträgt 100.000 GRT, eine Obergrenze gibt es nicht.
-- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
+- **Delegationskapazität**: Die maximale Menge an GRT, die ein Indexierer von Delegatoren annehmen kann. Indexierer können nur bis zum 16-fachen ihrer Selbstbeteiligung akzeptieren, und zusätzliche Delegationen führen zu verwässerten Rewards. Zum Beispiel: Wenn ein Indexierer eine Selbstbeteiligung von 1 Mio. GRT hat, beträgt seine Delegationskapazität 16 Mio. GRT. Indexierer können jedoch ihre Delegationskapazität erhöhen, indem sie ihre Selbstbeteiligung erhöhen.
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade-Indexierer**: Ein Indexierer, der als Fallback für Subgraph-Abfragen dient, die nicht von anderen Indexierern im Netzwerk bedient werden. Der Upgrade-Indexierer ist nicht konkurrenzfähig mit anderen Indexierern.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Netzwerkteilnehmer, die GRT besitzen und ihre GRT an Indexierer delegieren. Dies erlaubt es Indexierern, ihre Beteiligung an Subgraphen im Netzwerk zu erhöhen. Im Gegenzug erhalten die Delegatoren einen Teil der Indizierungs-Rewards, die Indexierer für die Verarbeitung von Subgraphen erhalten.
-- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
+- **Delegationssteuer**: Eine 0,5%ige Gebühr, die von Delegatoren gezahlt wird, wenn sie GRT an Indexierer delegieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Kurator**: Netzwerkteilnehmer, die hochwertige Subgraphen identifizieren und im Gegenzug für Kurationsanteile GRT auf ihnen signalisieren. Wenn Indexierer Abfragegebühren für einen Subgraphen beanspruchen, werden 10% an die Kuratoren dieses Subgraphen verteilt. Es gibt eine positive Korrelation zwischen der Menge der signalisierten GRT und der Anzahl der Indexierer, die einen Subgraphen indizieren.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Kuratierungssteuer**: Eine 1%ige Gebühr, die von Kuratoren bezahlt wird, wenn sie GRT auf Subgraphen signalisieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Datenverbraucher**: Jede Anwendung oder jeder Benutzer, der einen Subgraphen abfragt.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: Ein Entwickler, der einen Subgraphen für das dezentrale Netzwerk von The Graph erstellt und bereitstellt.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
-- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
+- **Epoche**: Eine Zeiteinheit innerhalb des Netzes. Derzeit entspricht eine Epoche 6.646 Blöcken oder etwa 1 Tag.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain.
This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Aktiv**: Eine Zuordnung gilt als aktiv, wenn sie onchain erstellt wird. Dies wird als Öffnen einer Zuordnung bezeichnet und zeigt dem Netzwerk an, dass der Indexierer aktiv indiziert und Abfragen für einen bestimmten Subgraphen bedient. Aktive Zuordnungen sammeln Indizierungs-Rewards, die proportional zum Signal auf dem Subgraphen und der Menge des zugewiesenen GRT sind.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Geschlossen**: Ein Indexierer kann die aufgelaufenen Indizierungs-Rewards für einen bestimmten Subgraphen beanspruchen, indem er einen aktuellen und gültigen Proof of Indexing (POI) einreicht. Dies wird als Schließen einer Zuordnung bezeichnet. Eine Zuordnung muss mindestens eine Epoche lang offen gewesen sein, bevor sie geschlossen werden kann. Die maximale Zuordnungsdauer beträgt 28 Epochen. Lässt ein Indexierer eine Zuordnung länger als 28 Epochen offen, wird sie als veraltete Zuordnung bezeichnet. Wenn sich eine Zuordnung im Zustand **Geschlossen** befindet, kann ein Fischer immer noch einen Disput eröffnen, um einen Indexierer wegen der Bereitstellung falscher Daten anzufechten.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: Eine leistungsstarke dApp zum Erstellen, Bereitstellen und Veröffentlichen von Subgraphen.
-- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
+- **Fischer**: Eine Rolle innerhalb des The Graph Network, die von Teilnehmern eingenommen wird, die die Genauigkeit und Integrität der von Indexierern gelieferten Daten überwachen. Wenn ein Fischer eine Abfrage-Antwort oder einen POI identifiziert, den er für falsch hält, kann er einen Disput gegen den Indexierer einleiten. Wenn der Streitfall zu Gunsten des Fischers entschieden wird, verliert der Indexierer 2,5 % seines Eigenanteils. Von diesem Betrag erhält der Fischer 50 % als Belohnung für seine Wachsamkeit, und die restlichen 50 % werden aus dem Verkehr gezogen (verbrannt). Dieser Mechanismus soll die Fischer dazu ermutigen, zur Zuverlässigkeit des Netzwerks beizutragen, indem sichergestellt wird, dass die Indexierer für die von ihnen gelieferten Daten verantwortlich gemacht werden.
-- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network.
+- **Schlichter**: Schlichter sind Netzwerkteilnehmer, die im Rahmen eines Governance-Prozesses ernannt werden.
Die Rolle des Schlichters besteht darin, über den Ausgang von Streitigkeiten bei Indizierungen und Abfragen zu entscheiden. Ihr Ziel ist es, den Nutzen und die Zuverlässigkeit von The Graph Network zu maximieren.
-- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
+- **Slashing**: Indexierern können für die Bereitstellung eines falschen POI oder für die Bereitstellung ungenauer Daten ihre selbst eingesetzten GRT gekürzt werden. Der Prozentsatz des Slashings ist ein Protokollparameter, der derzeit auf 2,5% des Eigenanteils eines Indexierers festgelegt ist. 50 % der gekürzten GRT gehen an den Fischer, der die ungenauen Daten oder den falschen POI bestritten hat. Die anderen 50% werden verbrannt.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
-- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
+- **Delegation Rewards**: Die Rewards, die Delegatoren für die Delegierung von GRT an Indexierer erhalten. Delegations-Rewards werden in GRT verteilt.
-- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
+- **GRT**: Der Utility-Token von The Graph. GRT bietet den Netzwerkteilnehmern wirtschaftliche Anreize für ihren Beitrag zum Netzwerk.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI.
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client**: Eine Bibliothek für den Aufbau von GraphQL-basierten Dapps auf dezentralisierte Weise. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. 
+- **Graph CLI**: Ein Command-Line-Interface-Tool (CLI) zum Erstellen und Bereitstellen auf The Graph.
-- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
+- **Abkühlphase**: Die Zeit, die verbleibt, bis ein Indexierer, der seine Delegationsparameter geändert hat, dies wieder tun kann.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrieren**: Der Prozess, bei dem Kurationsanteile von einer alten Version eines Subgraphen auf eine neue Version eines Subgraphen übertragen werden (z. B. wenn v0.0.1 auf v0.0.2 aktualisiert wird).
diff --git a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx index d5ffa00d0e1f..0508b5db3baf 100644 --- a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -1,18 +1,18 @@ --- -title: AssemblyScript Migration Guide +title: AssemblyScript-Migrationsleitfaden --- Bis jetzt haben Subgraphen eine der [ersten Versionen von AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6) verwendet. Endlich haben wir Unterstützung für die [neueste verfügbare Version](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) hinzugefügt! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +Dies ermöglicht es den Entwicklern von Subgrafen, neuere Funktionen der AS-Sprache und der Standardbibliothek zu nutzen. Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0` verwenden. Wenn Sie bereits eine höhere (oder gleiche) Version als diese haben, haben Sie bereits Version `0.19.10` von AssemblyScript verwendet 🙂 > Anmerkung: Ab `0.24.0` kann `graph-node` beide Versionen unterstützen, abhängig von der im Subgraph-Manifest angegebenen `apiVersion`. 
-## Features +## Besonderheiten -### New functionality +### Neue Funktionalität - `TypedArray` kann nun aus `ArrayBuffer` mit Hilfe der [neuen statischen Methode `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) erstellt werden - Neue Standard-Bibliotheksfunktionen: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`und `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) @@ -30,39 +30,39 @@ Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0 - Hinzufügen von `toUTCString` für `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) - Hinzufügen von `nonnull/NonNullable` integrierten Typ ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) -### Optimizations +### Optimierungen - `Math`-Funktionen wie `exp`, `exp2`, `log`, `log2` und `pow` wurden durch schnellere Varianten ersetzt ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Leicht optimierte `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) - Mehr Feldzugriffe in std Map und Set zwischengespeichert ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Optimieren für Zweierpotenzen in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -### Other +### Sonstiges - Der Typ eines Array-Literal kann nun aus seinem Inhalt abgeleitet werden ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - stdlib auf Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) aktualisiert -## How to upgrade? +## Wie kann man upgraden? -1. Ändern Sie Ihre Mappings `apiVersion` in `subgraph.yaml` auf `0.0.6`: +1. 
Ändern Sie Ihre Mappings `apiVersion` in `subgraph.yaml` auf `0.0.9`:

```yaml
...
dataSources:
  ...
-  mapping:
+  mapping:
    ...
-    apiVersion: 0.0.6
+    apiVersion: 0.0.9
    ...
```

2. Aktualisieren Sie die `graph-cli`, die Sie verwenden, auf die `latest` Version, indem Sie sie ausführen:

```bash
-# if you have it globally installed
+# wenn es global installiert ist
npm install --global @graphprotocol/graph-cli@latest
-# or in your subgraph if you have it as a dev dependency
+# oder in Ihrem Subgrafen, wenn Sie es als Entwicklerabhängigkeit haben
npm install --save-dev @graphprotocol/graph-cli@latest
```

@@ -72,14 +72,14 @@ npm install --save-dev @graphprotocol/graph-cli@latest
npm install --save @graphprotocol/graph-ts@latest
```

-4. Follow the rest of the guide to fix the language breaking changes.
+4. Befolgen Sie den Rest der Anleitung, um die einschneidenden Sprachänderungen zu beheben.

5. Führen Sie `codegen` und `deploy` erneut aus.

-## Breaking changes
+## Einschneidende Veränderungen

-### Nullability
+### Nullbarkeit

-On the older version of AssemblyScript, you could create code like this:
+In der älteren Version von AssemblyScript konnten Sie Code wie diesen erstellen:

```typescript
function load(): Value | null { ... }

@@ -88,7 +88,7 @@ let maybeValue = load();
maybeValue.aMethod();
```

-However on the newer version, because the value is nullable, it requires you to check, like this:
+Da der Wert in der neueren Version jedoch nullbar ist, müssen Sie dies wie folgt überprüfen:

```typescript
let maybeValue = load()

@@ -98,17 +98,17 @@ if (maybeValue) {
}
```

-Or force it like this:
+Oder erzwingen Sie es wie folgt:

```typescript
-let maybeValue = load()! // breaks in runtime if value is null
+let maybeValue = load()! // bricht zur Laufzeit ab, wenn der Wert null ist
maybeValue.aMethod()
```

-If you are unsure which to choose, we recommend always using the safe version.
If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +Wenn Sie unsicher sind, welche Sie wählen sollen, empfehlen wir Ihnen, immer die sichere Variante zu verwenden. Wenn der Wert nicht vorhanden ist, sollten Sie einfach eine frühe if-Anweisung mit einem Return in Ihrem Subgraf-Handler ausführen. -### Variable Shadowing +### Variable Beschattung Früher konnte man [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) machen und Code wie dieser würde funktionieren: @@ -118,7 +118,7 @@ let b = 20 let a = a + b ``` -However now this isn't possible anymore, and the compiler returns this error: +Jetzt ist dies jedoch nicht mehr möglich und der Compiler gibt diesen Fehler zurück: ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -128,11 +128,11 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' in assembly/index.ts(4,3) ``` -You'll need to rename your duplicate variables if you had variable shadowing. +Sie müssen Ihre doppelten Variablen umbenennen, wenn Sie Variable Beschattung verwendet haben. -### Null Comparisons +### Null-Vergleiche -By doing the upgrade on your subgraph, sometimes you might get errors like these: +Wenn Sie das Upgrade für Ihren Subgrafen durchführen, können manchmal solche Fehler wie diese auftreten: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -151,7 +151,7 @@ Zur Lösung des Problems können Sie die `if`-Anweisung einfach wie folgt änder if (decimals === null) { ``` -The same applies if you're doing != instead of ==. +Dasselbe gilt, wenn Sie != statt == verwenden. 
### Casting @@ -162,15 +162,15 @@ let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` -However this only works in two scenarios: +Dies funktioniert jedoch nur in zwei Szenarien: - Primitives Casting (zwischen Typen wie `u8`, `i32`, `bool`; z. B.: `let b: isize = 10; b as usize`); -- Upcasting on class inheritance (subclass → superclass) +- Upcasting bei der Klassenvererbung (subclass → superclass) Beispiele: ```typescript -// primitive casting +// primitives Casting let a: usize = 10 let b: isize = 5 let c: usize = a + (b as usize) @@ -186,8 +186,8 @@ let bytes = new Bytes(2) Es gibt zwei Szenarien, in denen man casten möchte, aber die Verwendung von `as`/`var` **ist nicht sicher**: -- Downcasting on class inheritance (superclass → subclass) -- Between two types that share a superclass +- Downcasting bei der Klassenvererbung (superclass → subclass) +- Zwischen zwei Typen, die eine gemeinsame Oberklasse haben ```typescript // Downcasting bei Klassenvererbung @@ -228,11 +228,11 @@ changetype(bytes) // funktioniert :) Wenn Sie nur die Nullbarkeit entfernen wollen, können Sie weiterhin den `as`-Operator (oder `variable`) verwenden, aber stellen Sie sicher, dass Sie wissen, dass der Wert nicht Null sein kann, sonst bricht es. 
```typescript -// remove nullability +// die NULL-Zulässigkeit entfernen let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null if (previousBalance != null) { - return previousBalance as AccountBalance // safe remove null + return previousBalance as AccountBalance // die NULL-Zulässigkeit sicher entfernen } let newBalance = new AccountBalance(balanceId) @@ -240,14 +240,14 @@ let newBalance = new AccountBalance(balanceId) Für den Fall der Nullbarkeit empfehlen wir, einen Blick auf die [Nullability-Check-Funktion] (https://www.assemblyscript.org/basics.html#nullability-checks) zu werfen, sie wird Ihren Code sauberer machen 🙂 -Also we've added a few more static methods in some types to ease casting, they are: +Außerdem haben wir ein paar weitere statische Methoden in einigen Typen hinzugefügt, um das Casting zu erleichtern: - Bytes.fromByteArray - Bytes.fromUint8Array - BigInt.fromByteArray - ByteArray.fromBigInt -### Nullability check with property access +### Nullbarkeitsprüfung mit Eigenschaftszugriff Um die [Nullability-Check-Funktion] (https://www.assemblyscript.org/basics.html#nullability-checks) zu verwenden, können Sie entweder `if`-Anweisungen oder den ternären Operator (`?` und `:`) wie folgt verwenden: @@ -277,10 +277,10 @@ class Container { let container = new Container() container.data = 'data' -let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +let somethingOrElse: string = container.data ? container.data : 'else' // lässt sich nicht kompilieren ``` -Which outputs this error: +Das gibt folgenden Fehler aus: ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -301,12 +301,12 @@ container.data = 'data' let data = container.data -let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +let somethingOrElse: string = data ? 
data : 'else' // lässt sich prima kompilieren :) ``` -### Operator overloading with property access +### Operatorüberladung mit Eigenschaftszugriff -If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime. +Wenn Sie versuchen, (z.B.) einen Typ, der NULL-Werte (aus einem Eigenschaftszugriff) zulässt, mit einem Typ, der keine NULL-Werte zulässt, zu summieren, gibt der AssemblyScript-Compiler keine Fehlermeldung aus, dass einer der Werte NULL-Werte zulässt, sondern kompiliert es einfach stillschweigend, so dass die Möglichkeit besteht, dass der Code zur Laufzeit nicht funktioniert. ```typescript class BigInt extends Uint8Array { @@ -323,14 +323,14 @@ class Wrapper { let x = BigInt.fromI32(2) let y: BigInt | null = null -x + y // give compile time error about nullability +x + y // gibt einen Kompilierzeitfehler wegen der Nullbarkeit aus let wrapper = new Wrapper(y) -wrapper.n = wrapper.n + x // doesn't give compile time errors as it should +wrapper.n = wrapper.n + x // gibt keinen Kompilierzeitfehler aus, obwohl es einen geben sollte ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +Wir haben hierzu ein Issue beim AssemblyScript-Compiler eröffnet. Wenn Sie solche Operationen jedoch in Ihren Subgraf-Zuordnungen ausführen, sollten Sie sie so ändern, dass zuvor eine Nullprüfung durchgeführt wird.
```typescript let wrapper = new Wrapper(y) @@ -339,12 +339,12 @@ if (!wrapper.n) { wrapper.n = BigInt.fromI32(0) } -wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt +wrapper.n = wrapper.n + x // jetzt ist `n` garantiert ein BigInt ``` -### Value initialization +### Wert-Initialisierung -If you have any code like this: +Wenn Sie einen Code wie diesen haben: ```typescript var value: Type // null @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +Es wird zwar kompiliert, bricht aber zur Laufzeit ab. Dies liegt daran, dass der Wert nicht initialisiert wurde. Stellen Sie daher sicher, dass Ihr Subgraf seine Werte initialisiert hat, etwa so: ```typescript var value = new Type() // initialized @@ -360,7 +360,7 @@ value.x = 10 value.y = 'content' ``` -Also if you have nullable properties in a GraphQL entity, like this: +Wenn Sie außerdem nullfähige Eigenschaften in einer GraphQL-Entität haben, wie hier: ```graphql type Total @entity { @@ -369,7 +369,7 @@ type Total @entity { } ``` -And you have code similar to this: +Und Sie haben einen ähnlichen Code wie diesen: ```typescript let total = Total.load('latest') @@ -407,15 +407,15 @@ type Total @entity { let total = Total.load('latest') if (total === null) { - total = new Total('latest') // already initializes non-nullable properties + total = new Total('latest') // initialisiert bereits Eigenschaften, die keine NULL-Werte zulassen } total.amount = total.amount + BigInt.fromI32(1) ``` -### Class property initialization +### Initialisierung von Klasseneigenschaften -If you export any classes with properties that are other classes (declared by you or by the standard library) like this: +Wenn Sie Klassen mit Eigenschaften exportieren, die andere Klassen sind (von Ihnen selbst oder von der Standardbibliothek deklariert), etwa so: ```typescript class Thing {} @@ -432,7 +432,7 @@ export class Something { constructor(public value: Thing) {} } -// oder +// or export class Something { value: Thing @@ -442,7 +442,7 @@ export class Something { } } -// oder +// or export class Something { value!: Thing @@ -459,7 +459,7 @@ let arr = new Array(5) // ["", "", "", "", ""] arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( ``` -Depending on the types you're using, eg nullable ones, and how you're accessing them, you might encounter a runtime error like this one: +Je nach den Typen, die Sie verwenden (z. B. nullbare Typen), und je nachdem, wie Sie darauf zugreifen, kann es zu einem Laufzeitfehler wie diesem kommen: ``` ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type @@ -473,7 +473,7 @@ let arr = new Array(0) // [] arr.push('something') // ["something"] ``` -Or you should mutate it via index: +Oder Sie sollten es per Index mutieren: ```typescript let arr = new Array(5) // ["", "", "", "", ""] @@ -481,11 +481,11 @@ let arr = new Array(5) // ["", "", "", "", ""] arr[0] = 'something' // ["something", "", "", "", ""] ``` -### GraphQL schema +### GraphQL-Schema Dies ist keine direkte AssemblyScript-Änderung, aber Sie müssen möglicherweise Ihre Datei `schema.graphql` aktualisieren. -Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this: +Jetzt können Sie in Ihren Typen keine Felder mehr definieren, die nicht nullbare Listen sind.
Wenn Sie über ein Schema wie dieses verfügen: ```graphql type Something @entity { @@ -513,7 +513,7 @@ type MyEntity @entity { Dies hat sich aufgrund von Unterschieden in der Nullbarkeit zwischen AssemblyScript-Versionen geändert und hängt mit der Datei `src/generated/schema.ts` (Standardpfad, vielleicht haben Sie diesen geändert) zusammen. -### Other +### Sonstiges - `Map#set` und `Set#add` wurden an die Spezifikation angepasst und geben `this` zurück ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - Arrays erben nicht mehr von ArrayBufferView, sondern sind jetzt eigenständig ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) diff --git a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx index 68c70b711a60..f4dbe2e67266 100644 --- a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,62 +1,62 @@ --- -title: GraphQL Validations Migration Guide +title: Anleitung zur Migration von GraphQL-Validierungen --- -Soon `graph-node` will support 100% coverage of the [GraphQL Validations specification](https://spec.graphql.org/June2018/#sec-Validation). +Bald wird „graph-node“ eine 100-prozentige Abdeckung der [GraphQL Validations-Spezifikation](https://spec.graphql.org/June2018/#sec-Validation) unterstützen. -Previous versions of `graph-node` did not support all validations and provided more graceful responses - so, in cases of ambiguity, `graph-node` was ignoring invalid GraphQL operations components. +Frühere Versionen von „graph-node“ unterstützten nicht alle Validierungen und lieferten nachsichtigere Antworten – daher ignorierte „graph-node“ bei Unklarheiten ungültige GraphQL-Operationskomponenten.
-GraphQL Validations support is the pillar for the upcoming new features and the performance at scale of The Graph Network. +Die Unterstützung von GraphQL-Validierungen ist die Grundlage für die kommenden neuen Funktionen und die Leistung von The Graph Network im großen Maßstab. -It will also ensure determinism of query responses, a key requirement on The Graph Network. +Dadurch wird auch der Determinismus der Abfrageantworten sichergestellt, eine wichtige Anforderung für The Graph Network. -**Enabling the GraphQL Validations will break some existing queries** sent to The Graph API. +**Durch die Aktivierung der GraphQL-Validierungen funktionieren einige vorhandene Abfragen nicht mehr,** die an die Graph-API gesendet werden. -To be compliant with those validations, please follow the migration guide. +Um diese Validierungen einzuhalten, befolgen Sie bitte den Migrationsleitfaden. -> ⚠️ If you do not migrate your queries before the validations are rolled out, they will return errors and possibly break your frontends/clients. +> ⚠️ Wenn Sie Ihre Abfragen nicht migrieren, bevor die Validierungen eingeführt werden, werden Fehler zurückgegeben und möglicherweise Ihre Frontends/Clients beschädigt. -## Migration guide +## Migrationsleitfaden -You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. +Mit dem CLI-Migrationstool können Sie Probleme in Ihren GraphQL-Vorgängen finden und beheben. Alternativ können Sie den Endpunkt Ihres GraphQL-Clients aktualisieren, um den Endpunkt „https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME“ zu verwenden. Wenn Sie Ihre Abfragen anhand dieses Endpunkts testen, können Sie die Probleme in Ihren Abfragen leichter finden.
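Ein kleines Beispiel, wie sich die oben genannte Endpunkt-URL aus `$GITHUB_USER` und `$SUBGRAPH_NAME` zusammensetzt. Die Werte `artblocks`/`art-blocks` stammen aus dem weiter unten genannten Beispiel und sind hier reine Platzhalter:

```typescript
// Hypothetische Platzhalterwerte – durch eigenen GitHub-Benutzer
// und Subgraph-Namen ersetzen
const GITHUB_USER = "artblocks";
const SUBGRAPH_NAME = "art-blocks";

// Der Test-Endpunkt setzt sich aus beiden Werten zusammen
const endpoint = `https://api-next.thegraph.com/subgraphs/name/${GITHUB_USER}/${SUBGRAPH_NAME}`;
console.log(endpoint);
// → https://api-next.thegraph.com/subgraphs/name/artblocks/art-blocks
```

Gegen diese URL können Sie anschließend Ihre Abfragen mit dem Abfragetool Ihrer Wahl testen.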
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Nicht alle Subgrafen müssen migriert werden: Wenn Sie [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) oder [GraphQL Code Generator](https://the-guild.dev/graphql/codegen) verwenden, stellen diese bereits sicher, dass Ihre Abfragen gültig sind. -## Migration CLI tool +## Migrations-CLI-Tool -**Most of the GraphQL operations errors can be found in your codebase ahead of time.** +**Die meisten GraphQL-Operationsfehler können im Voraus in Ihrer Codebasis gefunden werden.** -For this reason, we provide a smooth experience for validating your GraphQL operations during development or in CI. +Aus diesem Grund bieten wir eine reibungslose Validierung Ihrer GraphQL-Operationen während der Entwicklung oder im CI. -[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) is a simple CLI tool that helps validate GraphQL operations against a given schema. +[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) ist ein einfaches CLI-Tool, das bei der Validierung von GraphQL-Operationen anhand eines bestimmten Schemas hilft. -### **Getting started** +### **Erste Schritte** -You can run the tool as follows: +Sie können das Tool wie folgt ausführen: ```bash npx @graphql-validate/cli -s https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME -o *.graphql ``` -**Notes:** +**Anmerkungen:** -- Set or replace $GITHUB_USER, $SUBGRAPH_NAME with the appropriate values. Like: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks) -- The preview schema URL (https://api-next.thegraph.com/) provided is heavily rate-limited and will be sunset once all users have migrated to the new version.
**Do not use it in production.** -- Operations are identified in files with the following extensions [`.graphql`,](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader)[`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (`-o` option). +- Setzen oder ersetzen Sie $GITHUB_USER, $SUBGRAPH_NAME durch die entsprechenden Werte. Wie z.B.: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks) +- Die bereitgestellte Vorschau-Schema-URL (https://api-next.thegraph.com/) ist stark ratenbeschränkt und wird eingestellt, sobald alle Benutzer auf die neue Version migriert sind. **Verwenden Sie sie nicht in der Produktion.** +- Operationen werden in Dateien mit den folgenden Erweiterungen identifiziert: [`.graphql`,](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader)[`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (Option `-o`). -### CLI output +### CLI-Ausgabe -The `[@graphql-validate/cli](https://github.com/saihaj/graphql-validate)` CLI tool will output any GraphQL operations errors as follows: +Das CLI-Tool „[@graphql-validate/cli](https://github.com/saihaj/graphql-validate)“ gibt alle GraphQL-Operationsfehler wie folgt aus: ![Error output from CLI](https://i.imgur.com/x1cBdhq.png) -For each error, you will find a description, file path and position, and a link to a solution example (see the following section). +Zu jedem Fehler finden Sie eine Beschreibung, Dateipfad und -position sowie einen Link zu einem Lösungsbeispiel (siehe folgenden Abschnitt). -## Run your local queries against the preview schema +## Führen Sie Ihre lokalen Abfragen anhand des Vorschauschemas aus -We provide an endpoint `https://api-next.thegraph.com/` that runs a `graph-node` version that has validations turned on.
+Wir stellen einen Endpunkt „https://api-next.thegraph.com/“ bereit, der eine „graph-node“-Version ausführt, bei der Validierungen aktiviert sind. -You can try out queries by sending them to: +Sie können Abfragen ausprobieren, indem Sie diese an folgende Adresse senden: - `https://api-next.thegraph.com/subgraphs/id/` @@ -64,28 +64,28 @@ oder - `https://api-next.thegraph.com/subgraphs/name//` -To work on queries that have been flagged as having validation errors, you can use your favorite GraphQL query tool, like Altair or [GraphiQL](https://cloud.hasura.io/public/graphiql), and try your query out. Those tools will also mark those errors in their UI, even before you run it. +Um Abfragen zu bearbeiten, bei denen Validierungsfehler gemeldet wurden, können Sie Ihr bevorzugtes GraphQL-Abfragetool wie Altair oder [GraphiQL](https://cloud.hasura.io/public/graphiql) verwenden und Ihre Abfrage ausprobieren. Diese Tools markieren diese Fehler auch in ihrer Benutzeroberfläche, noch bevor Sie sie ausführen. -## How to solve issues +## So lösen Sie Probleme -Below, you will find all the GraphQL validations errors that could occur on your existing GraphQL operations. +Nachfolgend finden Sie alle GraphQL-Validierungsfehler, die bei Ihren vorhandenen GraphQL-Vorgängen auftreten können. -### GraphQL variables, operations, fragments, or arguments must be unique +### GraphQL-Variablen, -Operationen, -Fragmente oder -Argumente müssen eindeutig sein -We applied rules for ensuring that an operation includes a unique set of GraphQL variables, operations, fragments, and arguments. +Wir haben Regeln angewendet, um sicherzustellen, dass eine Operation einen eindeutigen Satz von GraphQL-Variablen, -Operationen, -Fragmenten und -Argumenten enthält. -A GraphQL operation is only valid if it does not contain any ambiguity. +Eine GraphQL-Operation ist nur dann gültig, wenn sie keine Mehrdeutigkeit enthält.
-To achieve that, we need to ensure that some components in your GraphQL operation must be unique. +Um dies zu erreichen, müssen wir sicherstellen, dass einige Komponenten in Ihrer GraphQL-Operation eindeutig sind. -Here's an example of a few invalid operations that violates these rules: +Hier ist ein Beispiel für einige ungültige Vorgänge, die gegen diese Regeln verstoßen: -**Duplicate Query name (#UniqueOperationNamesRule)** +**Doppelter Abfragename (#UniqueOperationNamesRule)** ```graphql -# The following operation violated the UniqueOperationName -# rule, since we have a single operation with 2 queries -# with the same name +# Der folgende Vorgang verstößt gegen die UniqueOperationName- +# Regel, da wir eine einzige Operation mit 2 Abfragen +# mit demselben Namen haben query myData { id } @@ -108,11 +108,11 @@ query myData2 { } ``` -**Duplicate Fragment name (#UniqueFragmentNamesRule)** +**Doppelter Fragmentname (#UniqueFragmentNamesRule)** ```graphql -# The following operation violated the UniqueFragmentName -# rule. +# Der folgende Vorgang verstößt gegen die +# UniqueFragmentName-Regel.
query myData { id ...MyFields @@ -136,19 +136,19 @@ query myData { ...MyFieldsMetadata } -fragment MyFieldsMetadata { # assign a unique name to fragment +fragment MyFieldsMetadata { # dem Fragment einen eindeutigen Namen zuweisen metadata } -fragment MyFieldsName { # assign a unique name to fragment +fragment MyFieldsName { # dem Fragment einen eindeutigen Namen zuweisen name } ``` -**Duplicate variable name (#UniqueVariableNamesRule)** +**Doppelter Variablenname (#UniqueVariableNamesRule)** ```graphql -# The following operation violates the UniqueVariables +# Die folgende Operation verstößt gegen die UniqueVariables query myData($id: String, $id: Int) { id ...MyFields @@ -159,16 +159,16 @@ _Lösung:_ ```graphql query myData($id: String) { - # keep the relevant variable (here: `$id: String`) + # die relevante Variable beibehalten (hier: „$id: String“) id ...MyFields } ``` -**Duplicate argument name (#UniqueArgument)** +**Doppelter Argumentname (#UniqueArgument)** ```graphql -# The following operation violated the UniqueArguments +# Die folgende Operation hat die UniqueArguments verletzt query myData($id: ID!) { userById(id: $id, id: "1") { id @@ -186,13 +186,13 @@ query myData($id: ID!) 
{ } ``` -**Duplicate anonymous query (#LoneAnonymousOperationRule)** +**Duplizierte anonyme Abfrage (#LoneAnonymousOperationRule)** -Also, using two anonymous operations will violate the `LoneAnonymousOperation` rule due to conflict in the response structure: +Außerdem verstößt die Verwendung von zwei anonymen Vorgängen aufgrund eines Konflikts in der Antwortstruktur gegen die Regel „LoneAnonymousOperation“: ```graphql -# This will fail if executed together in -# a single operation with the following two queries: +# Dies wird fehlschlagen, wenn es gleichzeitig in +# einer einzelnen Operation mit den folgenden zwei Abfragen ausgeführt wird: query { someField } @@ -211,7 +211,7 @@ query { } ``` -Or name the two queries: +Oder benennen Sie die beiden Abfragen: ```graphql query FirstQuery { @@ -223,20 +223,20 @@ query SecondQuery { } ``` -### Overlapping Fields +### Überlappende Felder -A GraphQL selection set is considered valid only if it correctly resolves the eventual result set. +Ein GraphQL-Auswahlsatz wird nur dann als gültig angesehen, wenn er den endgültigen Ergebnissatz korrekt auflöst. -If a specific selection set, or a field, creates ambiguity either by the selected field or by the arguments used, the GraphQL service will fail to validate the operation. +Wenn ein bestimmter Auswahlsatz oder ein Feld entweder durch das ausgewählte Feld oder durch die verwendeten Argumente Mehrdeutigkeiten erzeugt, kann der GraphQL-Dienst den Vorgang nicht validieren. -Here are a few examples of invalid operations that violate this rule: +Hier sind einige Beispiele für ungültige Vorgänge, die gegen diese Regel verstoßen: -**Conflicting fields aliases (#OverlappingFieldsCanBeMergedRule)** +**Widersprüchliche Feldaliase (#OverlappingFieldsCanBeMergedRule)** ```graphql -# Aliasing fields might cause conflicts, either with -# other aliases or other fields that exist on the -# GraphQL schema. 
+# Alias-Felder können Konflikte verursachen, entweder mit +# anderen Aliasen oder anderen Feldern, die im +# GraphQL-Schema vorhanden sind. query { dogs { name: nickname @@ -256,11 +256,11 @@ query { } ``` -**Conflicting fields with arguments (#OverlappingFieldsCanBeMergedRule)** +**Widersprüchliche Felder mit Argumenten (#OverlappingFieldsCanBeMergedRule)** ```graphql -# Different arguments might lead to different data, -# so we can't assume the fields will be the same. +# Unterschiedliche Argumente können zu unterschiedlichen Daten führen, +# daher können wir nicht davon ausgehen, dass die Felder gleich sind. query { dogs { doesKnowCommand(dogCommand: SIT) @@ -280,12 +280,12 @@ query { } ``` -Also, in more complex use-cases, you might violate this rule by using two fragments that might cause a conflict in the eventually expected set: +Außerdem könnten Sie in komplexeren Anwendungsfällen gegen diese Regel verstoßen, indem Sie zwei Fragmente verwenden, die einen Konflikt in der letztendlich erwarteten Menge verursachen könnten: ```graphql query { - # Eventually, we have two "x" definitions, pointing - # to different fields! + # Letztendlich haben wir zwei „x“-Definitionen, die + # auf verschiedene Felder verweisen! 
...A ...B } @@ -299,7 +299,7 @@ fragment B on Type { } ``` -In addition to that, client-side GraphQL directives like `@skip` and `@include` might lead to ambiguity, for example: +Darüber hinaus können clientseitige GraphQL-Direktiven wie „@skip“ und „@include“ zu Unklarheiten führen, zum Beispiel: ```graphql fragment mergeSameFieldsWithSameDirectives on Dog { @@ -308,18 +308,18 @@ fragment mergeSameFieldsWithSameDirectives on Dog { } ``` -[You can read more about the algorithm here.](https://spec.graphql.org/June2018/#sec-Field-Selection-Merging) +[Mehr über den Algorithmus können Sie hier lesen.](https://spec.graphql.org/June2018/#sec-Field-Selection-Merging) -### Unused Variables or Fragments +### Unbenutzte Variablen oder Fragmente -A GraphQL operation is also considered valid only if all operation-defined components (variables, fragments) are used. +Eine GraphQL-Operation gilt auch nur dann als gültig, wenn alle durch die Operation definierten Komponenten (Variablen, Fragmente) verwendet werden. -Here are a few examples for GraphQL operations that violates these rules: +Hier sind einige Beispiele für GraphQL-Operationen, die gegen diese Regeln verstoßen: -**Unused variable** (#NoUnusedVariablesRule) +**Unbenutzte Variable** (#NoUnusedVariablesRule) ```graphql -# Invalid, because $someVar is never used. +# Ungültig, da $someVar nie verwendet wird. query something($someVar: String) { someData } @@ -333,10 +333,10 @@ query something { } ``` -**Unused Fragment** (#NoUnusedFragmentsRule) +**Unbenutztes Fragment** (#NoUnusedFragmentsRule) ```graphql -# Invalid, because fragment AllFields is never used. +# Ungültig, da das Fragment AllFields nie verwendet wird. query something { someData } @@ -350,22 +350,22 @@ fragment AllFields { # unused :( _Lösung:_ ```graphql -# Invalid, because fragment AllFields is never used. +# Ungültig, da das Fragment AllFields nie verwendet wird. 
query something { someData } -# remove the `AllFields` fragment +# das „AllFields“-Fragment entfernen ``` -### Invalid or missing Selection-Set (#ScalarLeafsRule) +### Ungültiger oder fehlender Auswahlsatz (#ScalarLeafsRule) -Also, a GraphQL field selection is only valid if the following is validated: +Außerdem ist eine GraphQL-Feldauswahl nur dann gültig, wenn Folgendes validiert ist: -- An object field must-have selection set specified. -- An edge field (scalar, enum) must not have a selection set specified. +- Für ein Objektfeld muss ein Auswahlsatz angegeben werden. +- Für ein Kantenfeld (Skalar, Enumeration) darf kein Auswahlsatz angegeben sein. -Here are a few examples of violations of these rules with the following Schema: +Hier sind einige Beispiele für Verstöße gegen diese Regeln mit dem folgenden Schema: ```graphql type Image { @@ -382,12 +382,12 @@ type Query { } ``` -**Invalid Selection-Set** +**Ungültiger Auswahlsatz** ```graphql query { user { - id { # Invalid, because "id" is of type ID and does not have sub-fields + id { # Ungültig, da „id“ vom Typ ID ist und keine Unterfelder hat } } @@ -404,13 +404,13 @@ query { } ``` -**Missing Selection-Set** +**Fehlender Auswahlsatz** ```graphql query { user { id - image # `image` requires a Selection-Set for sub-fields! + image # `image` erfordert einen Auswahlsatz für Unterfelder! } } ``` @@ -428,49 +428,49 @@ query { } ``` -### Incorrect Arguments values (#VariablesInAllowedPositionRule) +### Falsche Argumentwerte (#VariablesInAllowedPositionRule) -GraphQL operations that pass hard-coded values to arguments must be valid, based on the value defined in the schema. +GraphQL-Operationen, die fest codierte Werte an Argumente übergeben, müssen basierend auf dem im Schema definierten Wert gültig sein. 
-Here are a few examples of invalid operations that violate these rules: +Hier sind einige Beispiele für ungültige Vorgänge, die gegen diese Regeln verstoßen: ```graphql query purposes { - # If "name" is defined as "String" in the schema, - # this query will fail during validation. + # Wenn „name“ im Schema als „String“ definiert ist, + # schlägt diese Abfrage während der Validierung fehl. purpose(name: 1) { id } } -# This might also happen when an incorrect variable is defined: +# Dies kann auch passieren, wenn eine falsche Variable definiert wurde: query purposes($name: Int!) { - # If "name" is defined as `String` in the schema, - # this query will fail during validation, because the - # variable used is of type `Int` + # Wenn „name“ im Schema als „String“ definiert ist, + # schlägt diese Abfrage während der Validierung fehl, da die + # verwendete Variable vom Typ „Int“ ist purpose(name: $name) { id } } ``` -### Unknown Type, Variable, Fragment, or Directive (#UnknownX) +### Unbekannter Typ, unbekannte Variable, unbekanntes Fragment oder unbekannte Direktive (#UnknownX) -The GraphQL API will raise an error if any unknown type, variable, fragment, or directive is used. +Die GraphQL-API löst einen Fehler aus, wenn ein unbekannter Typ, eine unbekannte Variable, ein unbekanntes Fragment oder eine unbekannte Direktive verwendet wird. -Those unknown references must be fixed: +Diese unbekannten Referenzen müssen korrigiert werden: -- rename if it was a typo -- otherwise, remove +- umbenennen, wenn es ein Tippfehler war +- andernfalls entfernen -### Fragment: invalid spread or definition +### Fragment: ungültiger Spread oder ungültige Definition -**Invalid Fragment spread (#PossibleFragmentSpreadsRule)** +**Ungültiger Fragment-Spread (#PossibleFragmentSpreadsRule)** -A Fragment cannot be spread on a non-applicable type. +Ein Fragment kann nicht auf einem nicht anwendbaren Typ gespreadet werden.
-Example, we cannot apply a `Cat` fragment to the `Dog` type: +Beispiel: Wir können kein „Cat“-Fragment auf den Typ „Dog“ anwenden: ```graphql query { @@ -484,33 +484,33 @@ fragment CatSimple on Cat { } ``` -**Invalid Fragment definition (#FragmentsOnCompositeTypesRule)** +**Ungültige Fragmentdefinition (#FragmentsOnCompositeTypesRule)** -All Fragment must be defined upon (using `on ...`) a composite type, in short: object, interface, or union. +Alle Fragmente müssen auf einem zusammengesetzten Typ (mit „on ...“) definiert werden, kurz gesagt: Objekt, Schnittstelle oder Union. -The following examples are invalid, since defining fragments on scalars is invalid. +Die folgenden Beispiele sind ungültig, da die Definition von Fragmenten auf Skalaren ungültig ist. ```graphql fragment fragOnScalar on Int { - # we cannot define a fragment upon a scalar (`Int`) + # wir können kein Fragment auf einem Skalar („Int“) definieren. something } fragment inlineFragOnScalar on Dog { ... on Boolean { - # `Boolean` is not a subtype of `Dog` + # `Boolean` ist kein Subtyp von `Dog` somethingElse } } ``` -### Directives usage +### Verwendung von Direktiven -**Directive cannot be used at this location (#KnownDirectivesRule)** +**Direktive kann an dieser Stelle nicht verwendet werden (#KnownDirectivesRule)** -Only GraphQL directives (`@...`) supported by The Graph API can be used. +Es können nur GraphQL-Direktiven („@...“) verwendet werden, die von der Graph-API unterstützt werden. 
-Here is an example with The GraphQL supported directives: +Hier ist ein Beispiel mit von GraphQL unterstützten Direktiven: ```graphql query { @@ -521,11 +521,11 @@ query { } ``` -_Note: `@stream`, `@live`, `@defer` are not supported._ +_Hinweis: „@stream“, „@live“, „@defer“ werden nicht unterstützt._ -**Directive can only be used once at this location (#UniqueDirectivesPerLocationRule)** +**Direktive kann an dieser Stelle nur einmal verwendet werden (#UniqueDirectivesPerLocationRule)** -The directives supported by The Graph can only be used once per location. +Die von The Graph unterstützten Direktiven können nur einmal pro Stelle verwendet werden. Folgendes ist ungültig (und überflüssig): diff --git a/website/src/pages/de/resources/roles/curating.mdx b/website/src/pages/de/resources/roles/curating.mdx index 7d145d84ab5e..40f0110f505f 100644 --- a/website/src/pages/de/resources/roles/curating.mdx +++ b/website/src/pages/de/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Kuratieren --- -Kuratoren sind entscheidend für die dezentrale Wirtschaft von The Graph. Sie nutzen ihr Wissen über das web3-Ökosystem, um die Subgraphen zu bewerten und zu signalisieren, die von The Graph Network indiziert werden sollten. Über den Graph Explorer sehen die Kuratoren die Netzwerkdaten, um Signalisierungsentscheidungen zu treffen. Im Gegenzug belohnt The Graph Network Kuratoren, die auf qualitativ hochwertige Subgraphen hinweisen, mit einem Anteil an den Abfragegebühren, die diese Subgraphen generieren. Die Höhe der signalisierten GRT ist eine der wichtigsten Überlegungen für Indexer bei der Entscheidung, welche Subgraphen indiziert werden sollen. +Kuratoren sind entscheidend für die dezentrale Wirtschaft von The Graph. Sie nutzen ihr Wissen über das web3-Ökosystem, um die Subgraphen zu bewerten und zu signalisieren, die von The Graph Network indiziert werden sollten. Über den Graph Explorer sehen die Kuratoren die Netzwerkdaten, um Signalisierungsentscheidungen zu treffen.
Im Gegenzug belohnt The Graph Network Kuratoren, die auf qualitativ hochwertige Subgraphen hinweisen, mit einem Anteil an den Abfragegebühren, die diese Subgraphen generieren. Die Höhe der signalisierten GRT ist eine der wichtigsten Überlegungen für Indexierer bei der Entscheidung, welche Subgraphen indiziert werden sollen. ## Was bedeutet Signalisierung für The Graph Network? -Bevor Verbraucher einen Subgraphen abfragen können, muss er indiziert werden. An dieser Stelle kommt die Kuratierung ins Spiel. Damit Indexer erhebliche Abfragegebühren für hochwertige Subgraphen verdienen können, müssen sie wissen, welche Subgraphen indiziert werden sollen. Wenn Kuratoren ein Signal für einen Subgraphen geben, wissen Indexer, dass ein Subgraph gefragt und von ausreichender Qualität ist, so dass er indiziert werden sollte. +Bevor Verbraucher einen Subgraphen abfragen können, muss er indiziert werden. An dieser Stelle kommt die Kuratierung ins Spiel. Damit Indexierer erhebliche Abfragegebühren für hochwertige Subgraphen verdienen können, müssen sie wissen, welche Subgraphen indiziert werden sollen. Wenn Kuratoren ein Signal für einen Subgraphen geben, wissen Indexierer, dass ein Subgraph gefragt und von ausreichender Qualität ist, so dass er indiziert werden sollte. -Kuratoren machen das The Graph Netzwerk effizient und [signaling](#how-to-signal) ist der Prozess, den Kuratoren verwenden, um Indexer wissen zu lassen, dass ein Subgraph gut zu indizieren ist. +Kuratoren machen The Graph Network effizient, und die [Signalisierung](#how-to-signal) ist der Prozess, mit dem Kuratoren Indexierer wissen lassen, dass sich ein Subgraph gut indizieren lässt.
Indexierer können dem Signal eines Kurators vertrauen, da Kuratoren nach dem Signalisieren einen Kurationsanteil für den Subgraphen prägen, der sie zu einem Teil der zukünftigen Abfragegebühren berechtigt, die der Subgraph verursacht. -Die Signale der Kuratoren werden als ERC20-Token dargestellt, die Graph Curation Shares (GCS) genannt werden. Diejenigen, die mehr Abfragegebühren verdienen wollen, sollten ihre GRT an Subgraphen signalisieren, von denen sie vorhersagen, dass sie einen starken Gebührenfluss an das Netzwerk generieren werden. Kuratoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Einlagensteuer für Kuratoren, um von schlechten Entscheidungen abzuschrecken, die der Integrität des Netzwerks schaden könnten. Kuratoren werden auch weniger Abfragegebühren verdienen, wenn sie einen Subgraphen von geringer Qualität kuratieren, weil es weniger Abfragen zu bearbeiten gibt oder weniger Indexer, die sie bearbeiten. +Die Signale der Kuratoren werden als ERC20-Token dargestellt, die Graph Curation Shares (GCS) genannt werden. Diejenigen, die mehr Abfragegebühren verdienen wollen, sollten ihre GRT an Subgraphen signalisieren, von denen sie vorhersagen, dass sie einen starken Gebührenfluss an das Netzwerk generieren werden. Kuratoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Einlagensteuer für Kuratoren, um von schlechten Entscheidungen abzuschrecken, die der Integrität des Netzwerks schaden könnten. Kuratoren werden auch weniger Abfragegebühren verdienen, wenn sie einen Subgraphen von geringer Qualität kuratieren, weil es weniger Abfragen zu bearbeiten gibt oder weniger Indexierer, die sie bearbeiten. -Der [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) stellt die Indizierung aller Subgraphen sicher und signalisiert, dass GRT auf einem bestimmten Subgraphen mehr Indexer anzieht. 
Dieser Anreiz für zusätzliche Indexer durch Kuration zielt darauf ab, die Servicequalität für Abfragen zu verbessern, indem die Latenzzeit verringert und die Netzwerkverfügbarkeit erhöht wird. +Der [Sunrise Upgrade Indexierer](/archived/sunrise/#what-is-the-upgrade-indexer) stellt die Indizierung aller Subgraphen sicher und signalisiert, dass GRT auf einem bestimmten Subgraphen mehr Indexierer anzieht. Dieser Anreiz für zusätzliche Indexierer durch Kuration zielt darauf ab, die Servicequalität für Abfragen zu verbessern, indem die Latenzzeit verringert und die Netzwerkverfügbarkeit erhöht wird. Bei der Signalisierung können Kuratoren entscheiden, ob sie für eine bestimmte Version des Subgraphen signalisieren wollen oder ob sie die automatische Migration verwenden wollen. Bei der automatischen Migration werden die Freigaben eines Kurators immer auf die neueste vom Entwickler veröffentlichte Version aktualisiert. Wenn sie sich stattdessen für eine bestimmte Version entscheiden, bleiben die Freigaben immer auf dieser spezifischen Version. Wenn Sie Unterstützung bei der Kuratierung benötigen, um die Qualität des Dienstes zu verbessern, senden Sie bitte eine Anfrage an das Edge & Node-Team unter support@thegraph.zendesk.com und geben Sie die Subgraphen an, für die Sie Unterstützung benötigen. -Indexer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen (siehe Screenshot unten). +Indexierer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen (siehe Screenshot unten). ![Explorer-Subgrafen](/img/explorer-subgraphs.png) ## Wie man signalisiert -Auf der Registerkarte Kurator im Graph Explorer können Kuratoren bestimmte Subgraphen auf der Grundlage von Netzwerkstatistiken an- und abmelden. 
Einen schrittweisen Überblick über die Vorgehensweise im Graph Explorer finden Sie [hier](/subgraphs/explorer/) +Auf der Registerkarte „Kurator“ im Graph Explorer können Kuratoren bestimmte Subgraphen auf der Grundlage von Netzwerkstatistiken an- und abmelden. Einen schrittweisen Überblick über die Vorgehensweise im Graph Explorer finden Sie [hier](/subgraphs/explorer/). Ein Kurator kann sich dafür entscheiden, ein Signal für eine bestimmte Subgraph-Version abzugeben, oder er kann sein Signal automatisch auf die neueste Produktionsversion dieses Subgraphen migrieren lassen. Beides sind gültige Strategien und haben ihre eigenen Vor- und Nachteile. -Die Signalisierung einer bestimmten Version ist besonders nützlich, wenn ein Subgraph von mehreren Dapps verwendet wird. Eine Dapp muss den Subgraph vielleicht regelmäßig mit neuen Funktionen aktualisieren. Eine andere Dapp zieht es vielleicht vor, eine ältere, gut getestete Version des Subgraphs zu verwenden. Bei der ersten Kuration fällt eine Standardsteuer von 1% an. +Die Signalisierung einer bestimmten Version ist besonders nützlich, wenn ein Subgraph von mehreren Dapps verwendet wird. Eine Dapp muss den Subgraphen vielleicht regelmäßig mit neuen Funktionen aktualisieren. Eine andere Dapp zieht es vielleicht vor, eine ältere, gut getestete Version des Subgraphen zu verwenden. Bei der ersten Kuration fällt eine Standardsteuer von 1% an. Die automatische Migration Ihres Signals zum neuesten Produktions-Build kann sich als nützlich erweisen, um sicherzustellen, dass Sie weiterhin Abfragegebühren verdienen. Jedes Mal, wenn Sie kuratieren, fällt eine Kurationssteuer von 1 % an. Außerdem zahlen Sie bei jeder Migration eine Kurationssteuer von 0,5 %. Subgraph-Entwickler werden davon abgehalten, häufig neue Versionen zu veröffentlichen - sie müssen eine Kurationssteuer von 0,5 % auf alle automatisch migrierten Kurationsanteile zahlen. 
-> **Anmerkung**: Die erste Adresse, die einen bestimmten Subgraph signalisiert, wird als erster Kurator betrachtet und muss viel mehr Arbeit leisten als die übrigen folgenden Kuratoren, da der erste Kurator die Kurationsaktien-Token initialisiert und außerdem Token in den Graph-Proxy überträgt. +> **Anmerkung**: Die erste Adresse, die einen bestimmten Subgraphen signalisiert, wird als erster Kurator betrachtet und muss viel mehr Arbeit leisten als die übrigen folgenden Kuratoren, da der erste Kurator die Kurationsaktien-Token initialisiert und außerdem Token in den Graph-Proxy überträgt. ## Abhebung Ihrer GRT @@ -40,7 +40,7 @@ Die Kuratoren haben jederzeit die Möglichkeit, ihre signalisierten GRT zurückz Anders als beim Delegieren müssen Sie, wenn Sie sich entscheiden, Ihr signalisiertes GRT abzuheben, keine Abkühlungsphase abwarten und erhalten den gesamten Betrag (abzüglich der 1 % Kurationssteuer). -Sobald ein Kurator sein Signal zurückzieht, können die Indexer den Subgraphen weiter indizieren, auch wenn derzeit kein aktives GRT signalisiert wird. +Sobald ein Kurator sein Signal zurückzieht, können die Indexierer den Subgraphen weiter indizieren, auch wenn derzeit kein aktives GRT signalisiert wird. Es wird jedoch empfohlen, dass Kuratoren ihr signalisiertes GRT bestehen lassen, nicht nur um einen Teil der Abfragegebühren zu erhalten, sondern auch um die Zuverlässigkeit und Betriebszeit des Subgraphen zu gewährleisten. @@ -48,8 +48,8 @@ Es wird jedoch empfohlen, dass Kuratoren ihr signalisiertes GRT bestehen lassen, 1. Der Abfragemarkt ist bei The Graph noch sehr jung, und es besteht das Risiko, dass Ihr %APY aufgrund der noch jungen Marktdynamik niedriger ist als Sie erwarten. 2. Kurationsgebühr - wenn ein Kurator GRT auf einem Subgraphen signalisiert, fällt eine Kurationsgebühr von 1% an. Diese Gebühr wird verbrannt. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Ein Subgraph kann aufgrund eines Fehlers fehlschlagen. Für einen fehlgeschlagenen Subgraph fallen keine Abfragegebühren an. Daher müssen Sie warten, bis der Entwickler den Fehler behebt und eine neue Version bereitstellt. +3. (Nur Ethereum) Wenn Kuratoren ihre Anteile verbrennen, um GRT abzuziehen, wird die GRT-Bewertung der verbleibenden Anteile reduziert. Bitte beachten Sie, dass Kuratoren in manchen Fällen beschließen können, ihre Anteile **alle auf einmal** zu verbrennen. Dies kann der Fall sein, wenn ein Dapp-Entwickler die Versionierung/Verbesserung und Abfrage seines Subgraphen einstellt oder wenn ein Subgraph ausfällt. Infolgedessen können die verbleibenden Kuratoren möglicherweise nur einen Bruchteil ihres ursprünglichen GRT abheben. Für eine Netzwerkrolle mit einem geringeren Risikoprofil siehe [Delegatoren](/resources/roles/delegating/). +4. Ein Subgraph kann aufgrund eines Fehlers fehlschlagen. Für einen fehlgeschlagenen Subgraphen fallen keine Abfragegebühren an. Daher müssen Sie warten, bis der Entwickler den Fehler behebt und eine neue Version bereitstellt. - Wenn Sie die neueste Version eines Subgraphen abonniert haben, werden Ihre Anteile automatisch zu dieser neuen Version migriert. Dabei fällt eine Kurationssteuer von 0,5 % an. - Wenn Sie für eine bestimmte Version eines Subgraphen ein Signal gegeben haben und dieses fehlschlägt, müssen Sie Ihre Kurationsanteile manuell verbrennen. Sie können dann ein Signal für die neue Subgraph-Version geben, wodurch eine Kurationssteuer von 1 % anfällt. 
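Die oben genannten Kurationssteuern (1 % beim ersten Signalisieren, 0,5 % bei der automatischen Migration) lassen sich überschlägig nachrechnen. Die folgende Python-Skizze dient nur der Veranschaulichung; die Prozentsätze stammen aus dem Text, Funktions- und Variablennamen sind frei gewählt:

```python
def kurationssteuer(betrag_grt: float, satz: float = 0.01) -> float:
    """Steuer, die beim Signalisieren verbrannt wird (Standard: 1 %)."""
    return betrag_grt * satz

# Erste Kuration: 1 % Steuer auf 1.000 GRT
signal = 1_000.0
steuer = kurationssteuer(signal)   # 10 GRT werden verbrannt
signal_netto = signal - steuer     # 990 GRT werden tatsächlich signalisiert

# Automatische Migration auf eine neue Version: 0,5 % Steuer auf die Anteile
migrationssteuer = kurationssteuer(signal_netto, satz=0.005)
```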
diff --git a/website/src/pages/de/resources/roles/delegating/delegating.mdx b/website/src/pages/de/resources/roles/delegating/delegating.mdx index 5bdf77f185b0..04f196f7ba16 100644 --- a/website/src/pages/de/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/de/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Delegieren --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Wenn Sie sofort mit dem Delegieren beginnen möchten, schauen Sie sich [Delegieren in The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one) an. ## Überblick -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Delegatoren verdienen GRT, indem sie GRT an Indexierer delegieren, was die Sicherheit und Funktionalität des Netzwerks erhöht. -## Benefits of Delegating +## Vorteile des Delegierens -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Stärkung der Sicherheit und Skalierbarkeit des Netzwerks durch Unterstützung von Indexierern. +- Verdienen Sie einen Teil der Rewards, die von den Indexierern generiert werden. -## How Does Delegation Work? +## Wie funktioniert die Delegation? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Delegatoren erhalten GRT Rewards von dem/den Indexierer(n), an den/die sie ihr GRT delegieren. -An Indexer's ability to process queries and earn rewards depends on three key factors: +Die Fähigkeit eines Indexierers, Abfragen zu verarbeiten und Rewards zu verdienen, hängt von drei Schlüsselfaktoren ab: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Die Selbstbeteiligung des Indexierers (GRT, die vom Indexierer eingesetzt werden). +2. 
Die gesamte GRT, die ihnen von den Delegatoren übertragen wurde. +3. Der Preis, den der Indexierer für Abfragen festlegt. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Je mehr GRT eingesetzt und an einen Indexierer delegiert werden, desto mehr Abfragen können bedient werden, was zu höheren potenziellen Rewards sowohl für den Delegator als auch für den Indexierer führt. -### What is Delegation Capacity? +### Was ist Delegationskapazität? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +Die Delegationskapazität bezieht sich auf die maximale Menge an GRT, die ein Indexierer von Delegatoren annehmen kann, basierend auf der Selbstbeteiligung des Indexierers. -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network beinhaltet ein Delegationsverhältnis von 16, d. h. ein Indexierer kann bis zum 16-fachen seiner Selbstbeteiligung an delegiertem GRT annehmen. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Ein Beispiel: Wenn ein Indexierer eine Selbstbeteiligung von 1 Mio. GRT hat, beträgt seine Delegationskapazität 16 Mio. GRT. -### Why Does Delegation Capacity Matter? +### Warum ist die Delegationskapazität so wichtig? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Wenn ein Indexierer seine Delegationskapazität überschreitet, werden die Rewards für alle Delegatoren verwässert, da das überschüssige delegierte GRT innerhalb des Protokolls nicht effektiv genutzt werden kann. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. 
+Daher ist es für die Delegatoren von entscheidender Bedeutung, die aktuelle Delegationskapazität eines Indexierers zu bewerten, bevor sie einen Indexierer auswählen. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Indexierer können ihre Delegationskapazität erhöhen, indem sie ihre Selbstbeteiligung erhöhen und damit das Limit für delegierte Token anheben. -## Delegation on The Graph +## Delegation auf The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Bitte beachten Sie, dass dieser Leitfaden nicht auf Schritte wie die Einrichtung von MetaMask eingeht. Die Ethereum Community bietet eine [umfassende Ressource zu Wallets](https://ethereum.org/en/wallets/). -There are two sections in this guide: +Dieser Leitfaden besteht aus zwei Abschnitten: - Die Risiken der Übertragung von Token in The Graph Network - Wie man als Delegator die erwarteten Erträge berechnet @@ -58,7 +58,7 @@ There are two sections in this guide: Nachfolgend sind die wichtigsten Risiken aufgeführt, die mit der Tätigkeit eines Delegators im Protokoll verbunden sind. -### The Delegation Tax +### Die Delegationssteuer Delegatoren können nicht für schlechtes Verhalten bestraft werden, aber es gibt eine Steuer für Delegatoren, um von schlechten Entscheidungen abzuschrecken, die die Integrität des Netzes beeinträchtigen könnten. @@ -68,21 +68,21 @@ Als Delegator ist es wichtig, die folgenden Punkte zu verstehen: - Um auf Nummer sicher zu gehen, sollten Sie Ihre potenzielle Rendite berechnen, wenn Sie an einen Indexierer delegieren. Als Beispiel könnten Sie berechnen, wie viele Tage es dauern wird, bis Sie die 0,5 % Steuer auf Ihre Delegation zurückverdient haben. 
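Die Delegationskapazität (das 16-Fache der Selbstbeteiligung) und die Amortisation der 0,5-%-Delegationssteuer aus den vorangehenden Abschnitten lassen sich so skizzieren. Die Python-Skizze dient nur der Illustration; die angenommene Jahresrendite von 10 % ist ein frei gewähltes Beispiel, keine Angabe aus dem Text:

```python
DELEGATIONSVERHAELTNIS = 16  # laut Text: bis zum 16-Fachen der Selbstbeteiligung

def delegationskapazitaet(selbstbeteiligung_grt: float) -> float:
    """Maximale Menge an delegiertem GRT, die ein Indexierer annehmen kann."""
    return selbstbeteiligung_grt * DELEGATIONSVERHAELTNIS

def breakeven_tage(jahresrendite: float, steuersatz: float = 0.005) -> float:
    """Tage, bis die Delegationssteuer durch Rewards zurückverdient ist."""
    return steuersatz / jahresrendite * 365

kapazitaet = delegationskapazitaet(1_000_000)  # Beispiel aus dem Text: 16 Mio. GRT
tage = breakeven_tage(0.10)                    # bei angenommenen 10 % p. a.: ca. 18 Tage
```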
-### The Undelegation Period +### Die Aufhebungsfrist der Delegation -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Wenn ein Delegator die Delegation aufhebt, gilt für seine Token eine Aufhebungsfrist von 28 Tagen. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Das bedeutet, dass sie 28 Tage lang weder ihre Token übertragen noch Rewards verdienen können. -After the undelegation period, GRT will return to your crypto wallet. +Nach Ablauf der Aufhebungsfrist wird GRT in Ihre Wallet zurückgegeben. ### Warum ist das wichtig? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Wenn Sie sich für einen Indexierer entscheiden, der nicht vertrauenswürdig ist oder keine gute Arbeit leistet, werden Sie die Delegation aufheben wollen. Das bedeutet, dass Sie die Möglichkeit verlieren, Rewards zu verdienen. -As a result, it’s recommended that you choose an Indexer wisely. +Es empfiehlt sich daher, einen Indexierer mit Bedacht auszuwählen. -![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) +![Aufhebung der Bindung der Delegation. Beachten Sie die 0,5 %ige Gebühr in der Benutzeroberfläche der Delegation sowie die 28-tägige Frist für die Aufhebung der Bindung.](/img/Delegation-Unbonding.png) #### Parameter der Delegation @@ -92,29 +92,29 @@ Um zu verstehen, wie man einen vertrauenswürdigen Indexer auswählt, müssen Si - Wenn die Rewardkürzung eines Indexers auf 100% eingestellt ist, erhalten Sie als Delegator 0 Rewards für die Indexierung. - Wenn er auf 80 % eingestellt ist, erhalten Sie als Delegator 20 %. -![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. 
The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) +![Indexing Reward Cut. Der oberste Indexierer gibt den Delegatoren 90% der Rewards. Der mittlere gibt den Delegatoren 20%. Der untere gibt den Delegatoren ~83%.](/img/Indexing-Reward-Cut.png) - **Kürzung der Abfragegebühren** - Dies ist genau wie die Rewardkürzung bei der Indexierung, aber es gilt für die Renditen der Abfragegebühren, die der Indexierer einnimmt. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Es wird dringend empfohlen, [The Graph Discord](https://discord.gg/graphprotocol) zu erkunden, um festzustellen, welche Indexierer den besten sozialen und technischen Ruf haben. -- Many Indexers are active in Discord and will be happy to answer your questions. +- Viele Indexierer sind in Discord aktiv und beantworten gerne Ihre Fragen. ## Berechnung der erwarteten Rendite der Delegatoren -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Berechnen Sie den ROI für Ihre Delegation [hier](https://thegraph.com/explorer/delegate?chain=arbitrum-one). -A Delegator must consider a variety of factors to determine a return: +Ein Delegator muss eine Vielzahl von Faktoren berücksichtigen, um eine Rendite zu bestimmen: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +Die Fähigkeit eines Indexierers, die ihm zur Verfügung stehende delegierte GRT zu nutzen, wirkt sich auf seine Rewards aus. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Wenn ein Indexierer nicht alle ihm zur Verfügung stehenden GRT einsetzt, verpasst er möglicherweise die Maximierung des Ertragspotenzials für sich und seine Delegatoren. 
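Der Zusammenhang zwischen der Rewardkürzung eines Indexierers und dem Anteil der Delegatoren lässt sich wie im Text beschrieben nachrechnen. Die folgende Python-Skizze ist nur eine Veranschaulichung; Funktions- und Variablennamen sind frei gewählt:

```python
def delegator_anteil(rewards_grt: float, reward_cut: float) -> float:
    """Rewards, die nach der Rewardkürzung des Indexierers an die Delegatoren
    gehen. reward_cut ist der Anteil, den der Indexierer selbst behält."""
    if not 0.0 <= reward_cut <= 1.0:
        raise ValueError("reward_cut muss zwischen 0 und 1 liegen")
    return rewards_grt * (1.0 - reward_cut)

# Beispiele aus dem Text:
bei_100_prozent = delegator_anteil(100.0, 1.0)  # Kürzung 100 % -> 0 GRT
bei_80_prozent = delegator_anteil(100.0, 0.8)   # Kürzung 80 %  -> ca. 20 GRT
```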
-Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Indexierer können eine Zuweisung schließen und Rewards jederzeit innerhalb des Zeitfensters von 1 bis 28 Tagen abholen. Werden die Rewards jedoch nicht umgehend abgeholt, kann der Gesamtbetrag der Rewards niedriger erscheinen, selbst wenn ein bestimmter Prozentsatz der Rewards nicht abgeholt wird. ### Berücksichtigung der Senkung der Abfrage- und Indizierungsgebühren -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Sie sollten einen Indexierer wählen, der seine Abfrage- und Indizierungsgebühren transparent festlegt. Die Formel lautet: diff --git a/website/src/pages/de/resources/roles/delegating/undelegating.mdx b/website/src/pages/de/resources/roles/delegating/undelegating.mdx index 23da5ee0f456..047aca0cca1a 100644 --- a/website/src/pages/de/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/de/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,69 @@ --- -title: Undelegating +title: Aufheben der Delegierung --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +Erfahren Sie, wie Sie Ihre delegierten Token über [Graph Explorer](https://thegraph.com/explorer) oder [Arbiscan](https://arbiscan.io/) abheben können. -> To avoid this in the future, it's recommended that you select an Indexer wisely. To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +> Um dies in Zukunft zu vermeiden, empfiehlt es sich, einen Indexierer mit Bedacht auszuwählen. Wie Sie einen Indexierer auswählen, erfahren Sie im Abschnitt Delegieren im Graph Explorer. -## How to Withdraw Using Graph Explorer +## Wie man mit Graph Explorer abhebt ### Schritt für Schritt -1. 
Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. Besuchen Sie [Graph Explorer](https://thegraph.com/explorer). Bitte vergewissern Sie sich, dass Sie den Explorer und **nicht** Subgraph Studio verwenden. -2. Click on your profile. You can find it on the top right corner of the page. +2. Klicken Sie auf Ihr Profil. Sie finden es in der oberen rechten Ecke der Seite. + - Vergewissern Sie sich, dass Ihre Wallet verbunden ist. Wenn sie nicht verbunden ist, sehen Sie stattdessen die Schaltfläche „Verbinden“. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. +3. Sobald Sie sich in Ihrem Profil befinden, klicken Sie auf die Registerkarte „Delegieren“. Auf der Registerkarte „Delegieren“ können Sie die Liste der Indexierer einsehen, an die Sie delegiert haben. -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +4. Klicken Sie auf den Indexierer, von dem Sie Ihre Token abheben möchten. + - Achten Sie darauf, dass Sie sich den Indexierer notieren, denn Sie müssen ihn wiederfinden, wenn Sie etwas abheben wollen. -4. Click on the Indexer from which you wish to withdraw your tokens. +5. Wählen Sie die Option „Delegation aufheben“, indem Sie auf die drei Punkte neben dem Indexierer auf der rechten Seite klicken (siehe Abbildung unten): - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + ![Schaltfläche „Delegieren aufheben“](/img/undelegate-button.png) -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +6. Kehren Sie nach ca. [28 Epochen](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 Tagen) zum Abschnitt „Delegieren“ zurück und suchen Sie den Indexierer, von dem Sie die Delegierung aufgehoben haben. 
- ![Undelegate button](/img/undelegate-button.png) +7. Sobald Sie den Indexierer gefunden haben, klicken Sie auf die drei Punkte daneben und fahren Sie fort, alle Ihre Token abzuheben. -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +## Wie man mit Arbiscan abhebt -7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. - -## How to Withdraw Using Arbiscan - -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Dieser Prozess ist vor allem dann sinnvoll, wenn die Benutzeroberfläche im Graph Explorer Probleme aufweist. ### Schritt für Schritt -1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) -2. Navigate to "Transaction Action" where you can find the staking extension contract: +1. Finden Sie Ihre Delegationstransaktion auf Arbiscan. - Hier ist eine [Beispieltransaktion auf Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) +2. Navigieren Sie zu „Transaktionsaktion“, wo Sie den Staking-Erweiterungsvertrag finden können: - [Dies ist der Staking-Erweiterungsvertrag für das oben genannte Beispiel](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Klicken Sie dann auf „Vertrag“. ![Registerkarte „Vertrag“ auf Arbiscan, zwischen NFT-Transfers und Ereignissen](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI. 
There should be a small button next to it that allows you to copy everything. +4. Scrollen Sie nach unten und kopieren Sie die Vertrags-ABI. Es sollte eine kleine Schaltfläche daneben sein, mit der Sie alles kopieren können. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. Klicken Sie auf Ihr Profil in der oberen rechten Ecke der Seite. Wenn Sie noch kein Konto erstellt haben, tun Sie dies bitte. -6. Once you're in your profile, click on "Custom ABI”. +6. Sobald Sie in Ihrem Profil sind, klicken Sie auf „Benutzerdefinierte ABI“. -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Fügen Sie die benutzerdefinierte ABI ein, die Sie aus dem Staking-Erweiterungsvertrag kopiert haben, und fügen Sie die benutzerdefinierte ABI für die Adresse hinzu: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**Beispieladresse**) -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Gehen Sie zurück zum [Staking-Erweiterungsvertrag](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Rufen Sie nun die Funktion `unstake` in der Registerkarte [Als Proxy schreiben](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), die dank der benutzerdefinierten ABI hinzugefügt wurde, mit der Anzahl der Token auf, die Sie delegiert haben. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. 
You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Wenn Sie nicht wissen, wie viele Token Sie delegiert haben, können Sie `getDelegation` auf der Registerkarte „Benutzerdefiniertes Lesen“ aufrufen. Sie müssen Ihre Adresse (Adresse des Delegators) und die Adresse des Indexierers, an den Sie delegiert haben, einfügen, wie im folgenden Screenshot gezeigt: - ![Both of the addresses needed](/img/get-delegate.png) + ![Beide Adressen benötigt](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - Sie erhalten dann drei Zahlen. Die erste Zahl ist der Betrag, den Sie unstaken (abheben) können. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. Nachdem Sie `unstake` aufgerufen haben, können Sie nach ca. 28 Epochen (28 Tagen) durch Aufruf der Funktion `withdraw` abheben. -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. Sie können sehen, wie viel Sie zum Abheben zur Verfügung haben, indem Sie die Funktion `getWithdrawableDelegatedTokens` auf „Benutzerdefiniertes Lesen“ aufrufen und Ihr Delegations-Tupel übergeben. Siehe Bildschirmfoto unten: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Rufen Sie `getWithdrawableDelegatedTokens` auf, um die Anzahl der Token zu sehen, die abgehoben werden können](/img/withdraw-available.png) ## Zusätzliche Ressourcen -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. 
+Um erfolgreich zu delegieren, lesen Sie die [Delegierungsdokumentation](/resources/roles/delegating/delegating/) und schauen Sie sich den Abschnitt „Delegieren“ im Graph Explorer an. diff --git a/website/src/pages/de/resources/subgraph-studio-faq.mdx b/website/src/pages/de/resources/subgraph-studio-faq.mdx index a6e114083fc7..423b6b5059b3 100644 --- a/website/src/pages/de/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/de/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio-FAQs ## 1. Was ist Subgraph Studio? -[Subgraph Studio] (https://thegraph.com/studio/) ist eine App zur Erstellung, Verwaltung und Veröffentlichung von Subgraphen und API-Schlüsseln. +[Subgraph Studio](https://thegraph.com/studio/) ist eine dApp zur Erstellung, Verwaltung und Veröffentlichung von Subgraphen und API-Schlüsseln. ## 2. Wie erstelle ich einen API-Schlüssel? @@ -24,7 +24,7 @@ Ja, Subgraphen, die auf Arbitrum One veröffentlicht wurden, können auf eine ne Beachten Sie, dass Sie den Subgrafen nach der Übertragung nicht mehr in Studio sehen oder bearbeiten können. -## 6. Wie finde ich Abfrage-URLs für Subgraphen, wenn ich kein Entwickler des Subgraphen bin, den ich verwenden möchte? +## 6. Wie finde ich Abfrage-URLs für Subgraphen, wenn ich kein Programmierer des Subgraphen bin, den ich verwenden möchte? Die Abfrage-URL eines jeden Subgraphen finden Sie im Abschnitt Subgraph Details des Graph Explorers. Wenn Sie auf die Schaltfläche „Abfrage“ klicken, werden Sie zu einem Fenster weitergeleitet, in dem Sie die Abfrage-URL des gewünschten Subgraphen sehen können. Sie können dann den `` Platzhalter durch den API-Schlüssel ersetzen, den Sie in Subgraph Studio verwenden möchten. 
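Das in Frage 6 beschriebene Ersetzen des API-Schlüssel-Platzhalters in einer Abfrage-URL lässt sich so skizzieren. Achtung: Die konkrete URL-Form und der Platzhaltername `[api-key]` sind hier frei gewählte Annahmen zur Illustration, keine Angaben aus dem Text:

```python
def setze_api_schluessel(abfrage_url: str, api_schluessel: str,
                         platzhalter: str = "[api-key]") -> str:
    """Ersetzt den Platzhalter in einer Subgraph-Abfrage-URL durch den
    eigenen API-Schlüssel aus Subgraph Studio."""
    return abfrage_url.replace(platzhalter, api_schluessel)

# Hypothetisches Beispiel (URL und Subgraph-ID frei erfunden):
url = "https://gateway.thegraph.com/api/[api-key]/subgraphs/id/ABC123"
fertige_url = setze_api_schluessel(url, "mein-api-schluessel")
```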
diff --git a/website/src/pages/de/resources/tokenomics.mdx b/website/src/pages/de/resources/tokenomics.mdx index 3dd13eb7d06a..738afa0e395e 100644 --- a/website/src/pages/de/resources/tokenomics.mdx +++ b/website/src/pages/de/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- title: Tokenomics des The Graph Netzwerks sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: The Graph Network wird durch leistungsstarke Tokenomics incentiviert. So funktioniert GRT, der The Graph-eigene Arbeits-Utility-Token. --- ## Überblick -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph ist ein dezentrales Protokoll, das einen einfachen Zugang zu Blockchain-Daten ermöglicht. Es indiziert Blockchain-Daten ähnlich wie Google das Web indiziert. Wenn Sie eine Dapp verwendet haben, die Daten aus einem Subgraphen abruft, haben Sie wahrscheinlich mit The Graph interagiert. Heute nutzen Tausende von [beliebten Dapps](https://thegraph.com/explorer) im web3-Ökosystem The Graph. ## Besonderheiten -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. +Das Modell von The Graph ähnelt einem B2B2C-Modell, wird aber von einem dezentralen Netzwerk angetrieben, in dem die Teilnehmer zusammenarbeiten, um den Endnutzern Daten im Austausch für GRT Rewards zur Verfügung zu stellen. 
GRT ist der Utility-Token für The Graph. Er koordiniert und fördert die Interaktion zwischen Datenanbietern und Verbrauchern innerhalb des Netzwerks. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph spielt eine wichtige Rolle dabei, Blockchain-Daten besser zugänglich zu machen und unterstützt einen Marktplatz für deren Austausch. Wenn Sie mehr über das „Pay-for-what-you-need“-Modell von The Graph erfahren möchten, sehen Sie sich die [kostenlosen und wachstumsorientierten Pläne](/subgraphs/billing/) an. -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- GRT-Token-Adresse im Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- GRT-Token-Adresse auf Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## Die Rollen der Netzwerkteilnehmer -There are four primary network participants: +Es gibt vier primäre Netzwerkteilnehmer: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Delegatoren - Delegieren Sie GRT an Indexierer und sichern Sie das Netzwerk -2. Kuratoren - Finden Sie die besten Untergraphen für Indexer +2. Kuratoren - Finden Sie die besten Subgraphen für Indexierer -3. Developers - Build & query subgraphs +3. Entwickler - Erstellen & Abfragen von Subgraphen 4. 
Indexer - Das Rückgrat der Blockchain-Daten -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Fischer und Schlichter tragen auch durch andere Beiträge zum Erfolg des Netzwerks bei und unterstützen die Arbeit der anderen Hauptbeteiligten. Weitere Informationen über die Rollen des Netzwerks finden Sie in [diesem Artikel](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Tokenomics-Diagramm](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Delegatoren (verdienen passiv GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexierer werden von Delegatoren mit GRT betraut, wodurch sich der Anteil des Indexierers an Subgraphen im Netzwerk erhöht. Im Gegenzug erhalten die Delegatoren einen prozentualen Anteil an allen Abfragegebühren und Rewards des Indexierers. Jeder Indexierer legt den Anteil, den er an die Delegatoren vergütet, selbständig fest, wodurch ein Wettbewerb zwischen den Indexierern entsteht, um Delegatoren anzuziehen. Die meisten Indexierer bieten zwischen 9-12% jährlich. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Ein Datenbeispiel: Wenn ein Delegator 15.000 GRT an einen Indexierer delegiert, der 10 % anbietet, würde der Delegator jährlich ~ 1.500 GRT an Rewards erhalten.
-There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +Es gibt eine Delegationssteuer von 0,5 %, die jedes Mal verbrannt wird, wenn ein Delegator GRT an das Netzwerk delegiert. Wenn ein Delegator beschließt, sein delegiertes GRT zurückzuziehen, muss er die 28-Epochen-Frist abwarten, in der die Bindung aufgehoben wird. Jede Epoche besteht aus 6.646 Blöcken, was bedeutet, dass 28 Epochen ungefähr 26 Tagen entsprechen. -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +Wenn Sie dies lesen, können Sie sofort Delegator werden, indem Sie auf die [Netzwerkteilnehmerseite](https://thegraph.com/explorer/participants/indexers) gehen und GRT an einen Indexierer Ihrer Wahl delegieren. -## Curators (Earn GRT) +## Kuratoren (GRT verdienen) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Kuratoren identifizieren qualitativ hochwertige Subgraphen und „kuratieren“ sie (d.h. signalisieren GRT auf ihnen), um Kurationsanteile zu verdienen, die einen Prozentsatz aller zukünftigen Abfragegebühren garantieren, die durch den Subgraphen generiert werden.
Obwohl jeder unabhängige Netzwerkteilnehmer ein Kurator sein kann, gehören Entwickler von Subgraphen in der Regel zu den ersten Kuratoren für ihre eigenen Subgraphen, da sie sicherstellen wollen, dass ihr Subgraph indiziert wird. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Die Entwickler von Subgraphen werden ermutigt, ihren Subgraphen mit mindestens 3.000 GRT zu kuratieren. Diese Zahl kann jedoch von der Netzwerkaktivität und der Beteiligung der Community beeinflusst werden. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Kuratoren zahlen eine Kurationssteuer von 1%, wenn sie einen neuen Subgraphen kuratieren. Diese Kurationssteuer wird verbrannt, wodurch das Angebot an GRT sinkt. -## Developers +## Entwickler -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Entwickler erstellen Subgraphen und fragen sie ab, um Blockchain-Daten abzurufen. Da Subgraphen quelloffen sind, können Entwickler bestehende Subgraphen abfragen, um Blockchain-Daten in ihre Dapps zu laden. Entwickler zahlen für ihre Abfragen in GRT, das an die Netzwerkteilnehmer verteilt wird. -### Erstellung eines Untergraphen +### Erstellen eines Subgraphen -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Entwickler können [einen Subgraphen erstellen](/developing/creating-a-subgraph/), um Daten auf der Blockchain zu indizieren. Subgraphen sind Anweisungen für Indexierer darüber, welche Daten an Verbraucher geliefert werden sollen. 
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Sobald Entwickler ihren Subgraphen gebaut und getestet haben, können sie ihn im dezentralen Netzwerk von The Graph [veröffentlichen](/subgraphs/developing/publishing/publishing-a-subgraph/). -### Abfrage eines vorhandenen Untergraphen +### Abfrage vorhandener Subgraphen -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Sobald ein Subgraph im dezentralen Netzwerk von The Graph [veröffentlicht](/subgraphs/developing/publishing/publishing-a-subgraph/) wurde, kann jeder einen API-Schlüssel erstellen, GRT zu seinem Guthaben hinzufügen und den Subgraphen abfragen. -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. +Subgraphen werden [mit GraphQL abgefragt](/subgraphs/querying/introduction/), und die Abfragegebühren werden mit GRT in [Subgraph Studio](https://thegraph.com/studio/) bezahlt. Die Abfragegebühren werden an die Netzwerkteilnehmer auf der Grundlage ihrer Beiträge zum Protokoll verteilt. -1% of the query fees paid to the network are burned. +1 % der an das Netz gezahlten Abfragegebühren werden verbrannt. -## Indexers (Earn GRT) +## Indexierer (GRT verdienen) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexierer sind das Rückgrat von The Graph. Sie betreiben unabhängige Hardware und Software, die das dezentrale Netzwerk von The Graph antreiben.
Indexierer versorgen die Verbraucher mit Daten, die auf Anweisungen von Subgraphen basieren. -Indexers can earn GRT rewards in two ways: +Indexierer können GRT-Rewards auf zwei Arten verdienen: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Abfragegebühren**: GRT, die von Entwicklern oder Nutzern für die Abfrage von Subgraph-Daten gezahlt werden. Abfragegebühren werden gemäß der exponentiellen Rabattfunktion direkt an Indexierer verteilt (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing Rewards**: Die jährliche Ausgabe von 3% wird an Indexierer verteilt, basierend auf der Anzahl der Subgraphen, die sie indexieren. Diese Rewards sind ein Anreiz für Indexierer, Subgraphen zu indizieren, gelegentlich vor Beginn der Abfragegebühren, um Proofs of Indexing (POIs) zu sammeln und einzureichen, die bestätigen, dass sie Daten korrekt indiziert haben. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Jedem Subgraphen wird ein Teil der gesamten Netzwerk-Token-Ausgabe zugeteilt, basierend auf der Höhe des Kurationssignals des Subgraphen. Dieser Betrag wird dann an Indexierer auf der Grundlage ihres zugewiesenen Anteils an dem Subgraphen belohnt. 
-In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Um einen Indexierungsknoten betreiben zu können, müssen Indexierer mindestens 100.000 GRT selbst in das Netzwerk einbringen. Für Indexierer besteht ein Anreiz, GRT im Verhältnis zur Anzahl der von ihnen bearbeiteten Abfragen selbst einzusetzen. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexierer können ihre GRT-Zuteilungen auf Subgraphen erhöhen, indem sie GRT-Delegationen von Delegatoren akzeptieren, und sie können bis zum 16-fachen ihres ursprünglichen Eigenanteils akzeptieren. Wenn ein Indexierer „überdelegiert“ wird (d.h. mehr als das 16-fache seines ursprünglichen Eigenanteils), kann er die zusätzlichen GRT von Delegatoren nicht nutzen, bis er seinen Eigenanteil im Netzwerk erhöht. -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Die Höhe der Rewards, die ein Indexierer erhält, kann je nach Eigenanteil des Indexierers, akzeptierter Delegation, Qualität der Dienstleistung und vielen weiteren Faktoren variieren. -## Token Supply: Burning & Issuance +## Token-Versorgung: Burning & Emission -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+Das anfängliche Token-Angebot beträgt 10 Milliarden GRT, mit einem Ziel von 3 % Neuemissionen pro Jahr, um Indexierer für die Zuweisung von Anteilen an Subgraphen zu belohnen. Das bedeutet, dass das Gesamtangebot an GRT-Token jedes Jahr um 3 % steigen wird, da neue Token an Indexierer für ihren Beitrag zum Netzwerk ausgegeben werden. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph ist mit mehreren Brennmechanismen ausgestattet, um die Ausgabe neuer Token auszugleichen. Ungefähr 1 % des GRT-Angebots wird jährlich durch verschiedene Aktivitäten im Netzwerk verbrannt, und diese Zahl steigt, da die Netzwerkaktivität weiter zunimmt. Zu diesen Burning-Aktivitäten gehören eine Delegationssteuer von 0,5 %, wenn ein Delegator GRT an einen Indexierer delegiert, eine Kurationssteuer von 1 %, wenn Kuratoren ein Signal auf einem Subgraphen geben, und 1 % der Abfragegebühren für Blockchain-Daten. -![Total burned GRT](/img/total-burned-grt.jpeg) +![Insgesamt verbrannte GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability.
+Zusätzlich zu diesen regelmäßig stattfindenden Verbrennungsaktivitäten verfügt der GRT-Token auch über einen Slashing-Mechanismus, um böswilliges oder unverantwortliches Verhalten von Indexierern zu bestrafen. Wenn ein Indexierer geslashed wird, werden 50% seiner Rewards für die Epoche verbrannt (während die andere Hälfte an den Fischer geht), und sein Eigenanteil wird um 2,5% gekürzt, wobei die Hälfte dieses Betrags verbrannt wird. Dies trägt dazu bei, dass Indexierer einen starken Anreiz haben, im besten Interesse des Netzwerks zu handeln und zu dessen Sicherheit und Stabilität beizutragen. -## Improving the Protocol +## Verbesserung des Protokolls -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network entwickelt sich ständig weiter, und es werden laufend Verbesserungen an der wirtschaftlichen Gestaltung des Protokolls vorgenommen, um allen Netzwerkteilnehmern die bestmögliche Erfahrung zu bieten. Der The Graph-Rat überwacht die Protokolländerungen, und die Mitglieder der Community sind aufgerufen, sich daran zu beteiligen. Beteiligen Sie sich an der Verbesserung des Protokolls im [The Graph Forum](https://forum.thegraph.com/). diff --git a/website/src/pages/de/sps/introduction.mdx b/website/src/pages/de/sps/introduction.mdx index 6f1270848072..396c53077fd1 100644 --- a/website/src/pages/de/sps/introduction.mdx +++ b/website/src/pages/de/sps/introduction.mdx @@ -3,28 +3,29 @@ title: Einführung in Substreams-Powered Subgraphen sidebarTitle: Einführung --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Steigern Sie die Effizienz und Skalierbarkeit Ihres Subgraphen, indem Sie [Substreams](/substreams/introduction/) verwenden, um vorindizierte Blockchain-Daten zu streamen. ## Überblick -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Verwenden Sie ein Substreams-Paket (`.spkg`) als Datenquelle, um Ihrem Subgraph Zugang zu einem Strom von vorindizierten Blockchain-Daten zu geben. Dies ermöglicht eine effizientere und skalierbarere Datenverarbeitung, insbesondere bei großen oder komplexen Blockchain-Netzwerken. ### Besonderheiten Es gibt zwei Methoden zur Aktivierung dieser Technologie: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Verwendung von Substreams [triggers](/sps/triggers/)**: Nutzen Sie ein beliebiges Substreams-Modul, indem Sie das Protobuf-Modell über einen Subgraph-Handler importieren und Ihre gesamte Logik in einen Subgraph verschieben. Diese Methode erstellt die Subgraph-Entitäten direkt im Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Unter Verwendung von [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Wenn Sie einen größeren Teil der Logik in Substreams schreiben, können Sie die Ausgabe des Moduls direkt in [graph-node](/indexing/tooling/graph-node/) verwenden. 
In graph-node können Sie die Substreams-Daten verwenden, um Ihre Subgraph-Entitäten zu erstellen. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +Sie können wählen, wo Sie Ihre Logik platzieren möchten, entweder im Subgraph oder in Substreams. Überlegen Sie jedoch, was mit Ihren Datenanforderungen übereinstimmt, da Substreams ein parallelisiertes Modell hat und Auslöser linear in den Graphknoten verbraucht werden. ### Zusätzliche Ressourcen -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Unter den folgenden Links finden Sie Anleitungen zur Verwendung von Tools zur Codegenerierung, mit denen Sie schnell Ihr erstes durchgängiges Substreams-Projekt erstellen können: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/de/sps/sps-faq.mdx b/website/src/pages/de/sps/sps-faq.mdx index 72005f6cfc09..705188578529 100644 --- a/website/src/pages/de/sps/sps-faq.mdx +++ b/website/src/pages/de/sps/sps-faq.mdx @@ -5,17 +5,17 @@ sidebarTitle: FAQ ## Was sind Substreams? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. 
+Substreams ist eine außergewöhnlich leistungsstarke Verarbeitungsmaschine, die umfangreiche Blockchain-Datenströme verarbeiten kann. Sie ermöglicht es Ihnen, Blockchain-Daten für eine schnelle und nahtlose Verarbeitung durch Endbenutzeranwendungen zu verfeinern und zu gestalten. -Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. +Genauer gesagt handelt es sich um eine Blockchain-agnostische, parallelisierte und Streaming-first-Engine, die als Blockchain-Datenumwandlungsschicht dient. Sie wird von [Firehose](https://firehose.streamingfast.io/) angetrieben und ermöglicht es Entwicklern, Rust-Module zu schreiben, auf Community-Modulen aufzubauen, eine extrem leistungsstarke Indizierung bereitzustellen und ihre Daten überall [zu versenken](/substreams/developing/sinks/). -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +Substreams wird von [StreamingFast](https://www.streamingfast.io/) entwickelt. Besuchen Sie die [Substreams-Dokumentation](/substreams/introduction/), um mehr über Substreams zu erfahren. ## Was sind Substreams-basierte Subgraphen? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. 
+[Substreams-basierte Subgraphen](/sps/introduction/) kombinieren die Leistungsfähigkeit von Substreams mit der Abfragefähigkeit von Subgraphen. Bei der Veröffentlichung eines Substreams-basierten Subgraphen können die von den Substreams-Transformationen erzeugten Daten [Entitätsänderungen ausgeben](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs), die mit Subgraph-Entitäten kompatibel sind. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +Wenn Sie bereits mit der Entwicklung von Subgraphen vertraut sind, dann beachten Sie, dass Substreams-basierte Subgraphen dann abgefragt werden können, als ob sie von der AssemblyScript-Transformationsschicht erzeugt worden wären, mit allen Vorteilen von Subgraphen, wie der Bereitstellung einer dynamischen und flexiblen GraphQL-API. ## Wie unterscheiden sich Substreams-basierte Subgraphen von Subgraphen? @@ -25,7 +25,7 @@ Im Gegensatz dazu haben Substreams-basierte Subgraphen eine einzige Datenquelle, ## Was sind die Vorteile der Verwendung von Substreams-basierten Subgraphen? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-basierte Subgraphen kombinieren alle Vorteile von Substreams mit der Abfragefähigkeit von Subgraphen. Sie bieten The Graph eine bessere Zusammensetzbarkeit und eine leistungsstarke Indizierung. Sie ermöglichen auch neue Datenanwendungsfälle; sobald Sie beispielsweise Ihren Substreams-basierten Subgraphen erstellt haben, können Sie Ihre [Substreams-Module](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) für die Ausgabe an verschiedene [Senken](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) wie PostgreSQL, MongoDB und Kafka wiederverwenden. ## Was sind die Vorteile von Substreams? @@ -63,11 +63,11 @@ Die Verwendung von Firehose bietet viele Vorteile, darunter: - Nutzung von Flat Files: Blockchain-Daten werden in Flat Files extrahiert, der billigsten und optimalsten verfügbaren Rechenressource. -## Wo erhalten Entwickler weitere Informationen über Substreams-basieren Subgraphen und Substreams? +## Wo erhalten Entwickler weitere Informationen über Substreams-basierte Subgraphen und Substreams? In der [Substreams-Dokumentation](/substreams/introduction/) erfahren Sie, wie Sie Substreams-Module erstellen können. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +Die [Dokumentation zu Substreams-basierten Subgraphen](/sps/introduction/) zeigt Ihnen, wie Sie diese für die Bereitstellung in The Graph verpacken können. Das [neueste Substreams Codegen-Tool] (https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) ermöglicht es Ihnen, ein Substreams-Projekt ohne jeglichen Code zu booten. @@ -75,7 +75,7 @@ Das [neueste Substreams Codegen-Tool] (https://streamingfastio.medium.com/substr Rust-Module sind das Äquivalent zu den AssemblyScript-Mappern in Subgraphen.
Sie werden auf ähnliche Weise in WASM kompiliert, aber das Programmiermodell ermöglicht eine parallele Ausführung. Sie definieren die Art der Transformationen und Aggregationen, die Sie auf die Blockchain-Rohdaten anwenden möchten. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Weitere Informationen finden Sie in der [Moduldokumentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules). ## Was macht Substreams kompositionsfähig? @@ -85,7 +85,7 @@ Als Datenbeispiel kann Alice ein DEX-Preismodul erstellen, Bob kann damit einen ## Wie können Sie einen Substreams-basierten Subgraphen erstellen und einsetzen? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +Nach der [Definition](/sps/introduction/) eines Substreams-basierten Subgraphen können Sie die Graph CLI verwenden, um ihn in [Subgraph Studio](https://thegraph.com/studio/) einzusetzen. ## Wo finde ich Datenbeispiele für Substreams und Substreams-basierte Subgraphen? diff --git a/website/src/pages/de/sps/triggers.mdx b/website/src/pages/de/sps/triggers.mdx index 5bf7350c6b5f..792dee351596 100644 --- a/website/src/pages/de/sps/triggers.mdx +++ b/website/src/pages/de/sps/triggers.mdx @@ -2,15 +2,15 @@ title: Trigger für Substreams --- -Use Custom Triggers and enable the full use GraphQL. +Verwenden Sie Custom Triggers und aktivieren Sie die volle Nutzung von GraphQL. ## Überblick -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Mit benutzerdefinierten Triggern können Sie Daten direkt in Ihre Subgraph-Mappings-Datei und Entitäten senden, die Tabellen und Feldern ähneln. So können Sie die GraphQL-Schicht vollständig nutzen.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +Durch den Import der Protobuf-Definitionen, die von Ihrem Substreams-Modul ausgegeben werden, können Sie diese Daten in Ihrem Subgraph-Handler empfangen und verarbeiten. Dies gewährleistet eine effiziente und schlanke Datenverwaltung innerhalb des Subgraph-Frameworks. -### Defining `handleTransactions` +### Definieren von `handleTransactions` Der folgende Code veranschaulicht, wie eine Funktion `handleTransactions` in einem Subgraph-Handler definiert wird. Diese Funktion empfängt rohe Substream-Bytes als Parameter und dekodiert sie in ein `Transactions`-Objekt. Für jede Transaktion wird eine neue Subgraph-Entität erstellt. @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Das sehen Sie in der Datei `mappings.ts`: 1. Die Bytes, die die Substreams enthalten, werden in das generierte `Transactions`-Objekt dekodiert. Dieses Objekt wird wie jedes andere AssemblyScript-Objekt verwendet 2. Looping über die Transaktionen 3. Erstellen einer neuen Subgraph-Entität für jede Transaktion -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Ein ausführliches Datenbeispiel für einen auslöserbasierten Subgraphen finden Sie [hier](/sps/tutorial/). ### Zusätzliche Ressourcen -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Um Ihr erstes Projekt im Entwicklungscontainer zu erstellen, lesen Sie einen der [Schritt-für-Schritt-Guide](/substreams/developing/dev-container/). 
diff --git a/website/src/pages/de/sps/tutorial.mdx b/website/src/pages/de/sps/tutorial.mdx index 395bb0433bd7..598a1f340089 100644 --- a/website/src/pages/de/sps/tutorial.mdx +++ b/website/src/pages/de/sps/tutorial.mdx @@ -1,15 +1,15 @@ --- -title: 'Tutorial: Einrichten eines Substreams-basierten Subgraphen auf Solana' +title: "Tutorial: Einrichten eines Substreams-basierten Subgraphen auf Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Erfolgreiche Einrichtung eines auslösungsbasierten Substreams-powered Subgraphs für ein Solana SPL-Token. ## Los geht’s For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### Voraussetzungen Bevor Sie beginnen, stellen Sie Folgendes sicher: @@ -65,25 +65,25 @@ Sie erzeugen ein `subgraph.yaml`-Manifest, das das Substreams-Paket als Datenque ```yaml --- dataSources: - - art: substreams - Name: mein_Projekt_sol - Netzwerk: solana-mainnet-beta - Quelle: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: package: - moduleName: map_spl_transfers # Modul definiert in der substreams.yaml - Datei: ./mein-projekt-sol-v0.1.0.spkg + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 - Art: substreams/graph-entities - Datei: ./src/mappings.ts + apiVersion: 0.0.9 + kind: substreams/graph-entities + file: ./src/mappings.ts handler: handleTriggers ``` ### Schritt 3: Definieren Sie Entitäten in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Definieren Sie die Felder, die Sie in Ihren Subgraph-Entitäten speichern wollen, indem Sie die Datei `schema.graphql` aktualisieren. 
-Here is an example: +Hier ist ein Beispiel: ```graphql type MyTransfer @entity { @@ -99,9 +99,9 @@ Dieses Schema definiert eine `MyTransfer`-Entität mit Feldern wie `id`, `amount ### Schritt 4: Umgang mit Substreams Daten in `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Mit den erzeugten Protobuf-Objekten können Sie nun die dekodierten Substreams-Daten in Ihrer Datei `mappings.ts` im Verzeichnis `./src` verarbeiten.
-### Video Tutorial +### Video-Anleitung diff --git a/website/src/pages/de/subgraphs/_meta-titles.json b/website/src/pages/de/subgraphs/_meta-titles.json index 0556abfc236c..1338cbaa797d 100644 --- a/website/src/pages/de/subgraphs/_meta-titles.json +++ b/website/src/pages/de/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "querying": "Abfragen", + "developing": "Entwicklung", + "guides": "Anleitungen", + "best-practices": "Bewährte Praktiken" } diff --git a/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..109a388ddd19 100644 --- a/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/de/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,25 +1,25 @@ --- -title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +title: Best Practice 4 für Subgraphen - Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von eth_calls +sidebarTitle: Vermeidung von eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` sind Aufrufe, die von einem Subgraphen zu einem Ethereum-Knoten gemacht werden können. Diese Aufrufe benötigen beträchtlich viel Zeit, um Daten zurückzugeben, was die Indizierung verlangsamt. Entwerfen Sie nach Möglichkeit intelligente Verträge, die alle benötigten Daten ausgeben, damit Sie keine `eth_calls` verwenden müssen.
-## Why Avoiding `eth_calls` Is a Best Practice +## Warum die Vermeidung von `eth_calls` eine gute Praxis ist -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphen sind für die Indizierung von Ereignisdaten optimiert, die von intelligenten Verträgen ausgegeben werden. Ein Subgraph kann auch die Daten indizieren, die von einem `eth_call` stammen. Dies kann jedoch die Indizierung von Subgraphen erheblich verlangsamen, da `eth_calls` externe Aufrufe an Smart Contracts erfordern. Die Reaktionsfähigkeit dieser Aufrufe hängt nicht vom Subgraphen ab, sondern von der Konnektivität und Reaktionsfähigkeit des Ethereum-Knotens, der abgefragt wird. Indem wir eth_calls in unseren Subgraphen minimieren oder eliminieren, können wir unsere Indizierungsgeschwindigkeit erheblich verbessern. -### What Does an eth_call Look Like? +### Wie sieht ein eth_call aus? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` sind häufig erforderlich, wenn die für einen Subgraphen benötigten Daten nicht über emittierte Ereignisse verfügbar sind.
Betrachten wir zum Beispiel ein Szenario, in dem ein Subgraph feststellen muss, ob ERC20-Token Teil eines bestimmten Pools sind, der Vertrag aber nur ein einfaches `Transfer`-Ereignis aussendet und kein Ereignis, das die benötigten Daten enthält: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); ``` -Suppose the tokens' pool membership is determined by a state variable named `getPoolInfo`. In this case, we would need to use an `eth_call` to query this data: +Angenommen, die Zugehörigkeit der Token zum Pool wird durch eine Zustandsvariable namens `getPoolInfo` bestimmt. In diesem Fall müssten wir einen `eth_call` verwenden, um diese Daten abzufragen: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -27,34 +27,36 @@ import { ERC20, Transfer } from '../generated/ERC20/ERC20' import { TokenTransaction } from '../generated/schema' export function handleTransfer(event: Transfer): void { - let transaction = new TokenTransaction(event.transaction.hash.toHex()) + let transaction = new TokenTransaction(event.transaction.hash.toHex()) - // Bind the ERC20 contract instance to the given address: - let instance = ERC20.bind(event.address) + // Binde die ERC20-Vertragsinstanz an die angegebene Adresse: + let instance = ERC20.
bind(event.address) - // Retrieve pool information via eth_call - let poolInfo = instance.getPoolInfo(event.params.to) + // Abrufen von Pool-Informationen über eth_call + let poolInfo = instance.getPoolInfo(event.params.to) - transaction.pool = poolInfo.toHexString() - transaction.from = event.params.from.toHexString() - transaction.to = event.params.to.toHexString() - transaction.value = event.params.value + transaction.pool = poolInfo.toHexString() + transaction.from = event.params.from.toHexString() + transaction.to = event.params.to.toHexString() + transaction.value = event.params.value - transaction.save() + transaction.save() } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +Dies ist funktional, aber nicht ideal, da es die Indizierung unseres Subgraphen verlangsamt. -## How to Eliminate `eth_calls` +## Wie man `eth_calls` beseitigt -Ideally, the smart contract should be updated to emit all necessary data within events. For instance, modifying the smart contract to include pool information in the event could eliminate the need for `eth_calls`: +Idealerweise sollte der Smart Contract so aktualisiert werden, dass er alle erforderlichen Daten in Ereignissen ausgibt. Wenn der Smart Contract beispielsweise so geändert wird, dass er Pool-Informationen in das Ereignis aufnimmt, könnte die Notwendigkeit von `eth_calls` entfallen: ``` event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +Mit dieser Aktualisierung kann der Subgraph die benötigten Daten ohne externe Aufrufe direkt indizieren: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -73,17 +75,17 @@ export function handleTransferWithPool(event: TransferWithPool): void { } ``` -This is much more performant as it has eliminated the need for `eth_calls`.
+Dies ist sehr viel leistungsfähiger, da es die Notwendigkeit von `eth_calls` beseitigt hat. -## How to Optimize `eth_calls` +## Wie man `eth_calls` optimiert -If modifying the smart contract is not possible and `eth_calls` are required, read “[Improve Subgraph Indexing Performance Easily: Reduce eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)” by Simon Emanuel Schmid to learn various strategies on how to optimize `eth_calls`. +Wenn eine Änderung des Smart Contracts nicht möglich ist und `eth_calls` benötigt werden, lesen Sie „[Verbessern Sie die Leistung der Subgraph-Indizierung ganz einfach: Reduzieren Sie eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)“ von Simon Emanuel Schmid, um verschiedene Strategien zur Optimierung von `eth_calls` zu lernen. -## Reducing the Runtime Overhead of `eth_calls` +## Verringerung des Laufzeit-Overheads von `eth_calls` -For the `eth_calls` that can not be eliminated, the runtime overhead they introduce can be minimized by declaring them in the manifest. When `graph-node` processes a block it performs all declared `eth_calls` in parallel before handlers are run. Calls that are not declared are executed sequentially when handlers run. The runtime improvement comes from performing calls in parallel rather than sequentially - that helps reduce the total time spent in calls but does not eliminate it completely. +Für die `eth_calls`, die nicht eliminiert werden können, kann der Laufzeit-Overhead, den sie verursachen, minimiert werden, indem sie im Manifest deklariert werden. Wenn `graph-node` einen Block verarbeitet, führt er alle deklarierten `eth_calls` parallel aus, bevor die Handler ausgeführt werden. Aufrufe, die nicht deklariert sind, werden sequentiell ausgeführt, wenn die Handler laufen. 
Die Laufzeitverbesserung kommt dadurch zustande, dass die Aufrufe parallel und nicht sequentiell ausgeführt werden - das trägt dazu bei, die Gesamtzeit für die Aufrufe zu reduzieren, beseitigt sie aber nicht vollständig. -Currently, `eth_calls` can only be declared for event handlers. In the manifest, write +Derzeit können `eth_calls` nur für Event-Handler deklariert werden. Schreiben Sie im Manifest: ```yaml event: TransferWithPool(address indexed, address indexed, uint256, bytes32 indexed) @@ -92,26 +94,26 @@ calls: ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to) ``` -The portion highlighted in yellow is the call declaration. The part before the colon is simply a text label that is only used for error messages. The part after the colon has the form `Contract[address].function(params)`. Permissible values for address and params are `event.address` and `event.params.`. +Der gelb hervorgehobene Teil ist die Aufrufdeklaration. Der Teil vor dem Doppelpunkt ist einfach eine Textbeschriftung, die nur für Fehlermeldungen verwendet wird. Der Teil nach dem Doppelpunkt hat die Form `Contract[address].function(params)`. Zulässige Werte für Adresse und Parameter sind `event.address` und `event.params.`. -The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. +Der Handler selbst greift auf das Ergebnis dieses `eth_call` genau wie im vorherigen Abschnitt zu, indem er sich an den Vertrag bindet und den Aufruf tätigt. graph-node speichert die Ergebnisse der deklarierten `eth_calls` im Speicher und der Aufruf des Handlers ruft das Ergebnis aus diesem Speicher-Cache ab, anstatt einen tatsächlichen RPC-Aufruf zu tätigen.
-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Hinweis: Deklarierte eth_calls können nur in Subgraphen mit specVersion >= 1.2.0 gemacht werden. -## Conclusion +## Schlussfolgerung -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +Sie können die Indizierungsleistung erheblich verbessern, indem Sie die `eth_calls` in Ihren Subgraphen minimieren oder eliminieren. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierungs- und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..49fb2c1f8ff8 100644 --- a/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/de/subgraphs/best-practices/derivedfrom.mdx @@ -1,29 +1,29 @@ --- -title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +title: Best Practice 2 für Subgraphen - Verbessern Sie die Indizierung und die Reaktionsfähigkeit bei Abfragen durch die Verwendung von @derivedFrom +sidebarTitle: Arrays mit @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in Ihrem Schema können die Leistung eines Subgraphen stark verlangsamen, wenn sie über Tausende von Einträgen hinauswachsen. Wenn möglich, sollte bei der Verwendung von Arrays die Direktive `@derivedFrom` verwendet werden, da sie die Bildung großer Arrays verhindert, Handler vereinfacht und die Größe einzelner Entitäten reduziert, was die Indizierungsgeschwindigkeit und die Abfrageleistung erheblich verbessert. -## How to Use the `@derivedFrom` Directive +## Verwendung der `@derivedFrom`-Direktive -You just need to add a `@derivedFrom` directive after your array in your schema. Like this: +Sie müssen nur eine `@derivedFrom`-Direktive nach Ihrem Array in Ihrem Schema hinzufügen. Zum Beispiel so: ```graphql comments: [Comment!]! 
@derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` schafft effiziente Eins-zu-Viel-Beziehungen, die es einer Entität ermöglichen, sich dynamisch mit mehreren verwandten Entitäten auf der Grundlage eines Feldes in der verwandten Entität zu verbinden. Durch diesen Ansatz entfällt die Notwendigkeit, auf beiden Seiten der Beziehung doppelte Daten zu speichern, wodurch der Subgraph effizienter wird. -### Example Use Case for `@derivedFrom` +### Beispiel für die Verwendung von `@derivedFrom` -An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. +Ein Beispiel für ein dynamisch wachsendes Array ist eine Blogging-Plattform, auf der ein „Post“ viele „Kommentare“ haben kann. -Let’s start with our two entities, `Post` and `Comment` +Beginnen wir mit unseren beiden Entitäten, `Post` und `Comment`. -Without optimization, you could implement it like this with an array: +Ohne Optimierung könnte man es so mit einem Array implementieren: ```graphql type Post @entity { @@ -39,9 +39,9 @@ type Comment @entity { } ``` -Arrays like these will effectively store extra Comments data on the Post side of the relationship. +Arrays wie diese speichern effektiv zusätzliche Comments-Daten auf der Post-Seite der Beziehung. -Here’s what an optimized version looks like using `@derivedFrom`: +So sieht eine optimierte Version aus, die `@derivedFrom` verwendet:
Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. +Durch Hinzufügen der Direktive `@derivedFrom` speichert dieses Schema die „Comments“ nur auf der „Comments“-Seite der Beziehung und nicht auf der „Post“-Seite der Beziehung. Arrays werden in einzelnen Zeilen gespeichert, wodurch sie sich erheblich ausdehnen können. Dies kann zu besonders großen Größen führen, wenn ihr Wachstum unbegrenzt ist. -This will not only make our subgraph more efficient, but it will also unlock three features: +Dadurch wird unser Subgraph nicht nur effizienter, sondern es werden auch drei Funktionen freigeschaltet: -1. We can query the `Post` and see all of its comments. +1. Wir können den `Post` abfragen und alle seine Kommentare sehen. -2. We can do a reverse lookup and query any `Comment` and see which post it comes from. +2. Wir können eine Rückwärtssuche durchführen und jeden `Comment` abfragen und sehen, von welchem Beitrag er stammt. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. Mit [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) können wir direkt auf Daten aus virtuellen Beziehungen in unseren Subgraphen-Mappings zugreifen und diese bearbeiten. -## Conclusion +## Schlussfolgerung -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Verwenden Sie die Direktive `@derivedFrom` in Subgraphen, um dynamisch wachsende Arrays effektiv zu verwalten und die Effizienz der Indizierung und des Datenabrufs zu verbessern. 
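Zur Veranschaulichung der Punkte 1 und 2 oben: Gegen das optimierte Schema ließen sich beide Richtungen der Beziehung etwa so abfragen (hypothetische Beispielabfrage, Feldnamen wie im Schema angenommen):

```graphql
{
  # Punkt 1: einen Post abfragen und alle über @derivedFrom abgeleiteten Kommentare sehen
  posts {
    id
    comments {
      id
    }
  }
  # Punkt 2: Rückwärtssuche, jeder Comment kennt seinen Post
  comments {
    id
    post {
      id
    }
  }
}
```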
-For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +Eine ausführlichere Erklärung von Strategien zur Vermeidung großer Arrays finden Sie im Blog von Kevin Jones: [Best Practices bei der Subgraph-Entwicklung: Vermeiden großer Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierungs- und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx index f0297328b52d..bfff7009381b 100644 --- a/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/de/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,68 +1,68 @@ --- -title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +title: Best Practice 6 für Subgraphen - Verwendung von Grafting für die schnelle Hotfix-Bereitstellung +sidebarTitle: Grafting und Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting ist eine leistungsstarke Funktion bei der Entwicklung von Subgraphen, die es Ihnen ermöglicht, neue Subgraphen zu erstellen und bereitzustellen, während Sie die indizierten Daten aus bestehenden Subgraphen wiederverwenden. ### Überblick -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +Diese Funktion ermöglicht die schnelle Bereitstellung von Hotfixes für kritische Probleme, so dass nicht der gesamte Subgraph von Grund auf neu indiziert werden muss. Durch die Bewahrung historischer Daten minimiert Grafting Ausfallzeiten und gewährleistet die Kontinuität der Datendienste. -## Benefits of Grafting for Hotfixes +## Vorteile des Graftings für Hotfixes -1. **Rapid Deployment** +1. **Schnelle Bereitstellung** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Ausfallzeiten minimieren**: Wenn in einem Subgraphen ein kritischer Fehler auftritt und die Indizierung unterbrochen wird, können Sie mithilfe von Grafting sofort eine Lösung bereitstellen, ohne auf die erneute Indizierung zu warten. + - **Sofortige Wiederherstellung**: Der neue Subgraph geht vom letzten indizierten Block aus und gewährleistet, dass die Datendienste nicht unterbrochen werden. -2. **Data Preservation** +2. **Datenaufbewahrung** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. - - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + - **Wiederverwendung historischer Daten**: Beim Grafting werden die vorhandenen Daten aus dem Basis-Subgraphen kopiert, so dass Sie keine wertvollen historischen Datensätze verlieren. + - **Konsistenz**: Bewahrt die Datenkontinuität, was für Anwendungen, die auf konsistente historische Daten angewiesen sind, von entscheidender Bedeutung ist. -3. **Efficiency** - - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. - - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. +3. **Effizienz** + - **Zeit und Ressourcen sparen**: Vermeidet den Rechenaufwand für die Neuindizierung großer Datensätze. + - **Fokus auf Behebungen**: Ermöglicht es den Entwicklern, sich auf die Lösung von Problemen zu konzentrieren, anstatt die Datenwiederherstellung zu verwalten. -## Best Practices When Using Grafting for Hotfixes +## Best Practices bei der Verwendung von Grafting für Hotfixes -1. **Initial Deployment Without Grafting** +1.
**Erster Einsatz ohne Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Starten Sie sauber**: Setzen Sie Ihren ersten Subgraphen immer ohne Grafting ein, um sicherzustellen, dass er stabil ist und wie erwartet funktioniert. + - **Testen Sie gründlich**: Überprüfen Sie die Leistung des Subgraphen, um den Bedarf an zukünftigen Hotfixes zu minimieren. -2. **Implementing the Hotfix with Grafting** +2. **Implementierung des Hotfix mit Grafting** - - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Identifizieren Sie das Problem**: Wenn ein kritischer Fehler auftritt, ermitteln Sie die Blocknummer des letzten erfolgreich indizierten Ereignisses. + - **Erstellen Sie einen neuen Subgraphen**: Entwickeln Sie einen neuen Subgraphen, der den Hotfix enthält. + - **Konfigurieren Sie Grafting**: Verwenden Sie Grafting, um Daten bis zur identifizierten Blocknummer aus dem ausgefallenen Subgraphen zu kopieren. + - **Stellen Sie schnell bereit**: Veröffentlichen Sie den grafted Subgraphen, um den Dienst so schnell wie möglich wiederherzustellen. -3. **Post-Hotfix Actions** +3. **Post-Hotfix-Aktionen** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
- > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Überwachen Sie die Leistung**: Stellen Sie sicher, dass der übertragene Subgraph korrekt indiziert wird und der Hotfix das Problem behebt. + - **Veröffentlichen Sie ohne Grafting erneut**: Sobald der Subgraph stabil ist, können Sie eine neue Version des Subgraphen ohne Grafting für die langfristige Wartung bereitstellen. + > Hinweis: Es wird nicht empfohlen, sich auf unbegrenzte Zeit aufs Grafting zu verlassen, da dies künftige Aktualisierungen und Wartungsarbeiten erschweren kann. + - **Aktualisieren Sie Referenzen**: Leiten Sie alle Dienste oder Anwendungen um, damit sie den neuen, nicht übertragenen Subgraphen verwenden. -4. **Important Considerations** - - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. +4. **Wichtige Hinweise** + - **Sorgfältige Blockauswahl**: Wählen Sie die Graft-Blocknummer sorgfältig aus, um Datenverluste zu vermeiden. + - **Tipp**: Verwenden Sie die Blocknummer des letzten korrekt verarbeiteten Ereignisses. + - **Verwenden Sie die Bereitstellungs-ID**: Stellen Sie sicher, dass Sie auf die Bereitstellungs-ID des Basis-Subgraphen verweisen, nicht auf die ID des Subgraphen. + - **Anmerkung**: Die Bereitstellungs-ID ist der eindeutige Bezeichner für eine bestimmte Subgraph-Bereitstellung. + - **Funktionserklärung**: Vergessen Sie nicht, Grafting im Subgraphenmanifest unter `features` zu deklarieren.
-## Example: Deploying a Hotfix with Grafting +## Beispiel: Bereitstellen eines Hotfixes mit Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Angenommen, Sie haben einen Subgraphen, der einen Smart Contract verfolgt, der aufgrund eines kritischen Fehlers nicht mehr indiziert wird. Hier erfahren Sie, wie Sie mithilfe von Grafting einen Hotfix bereitstellen können. -1. **Failed Subgraph Manifest (subgraph.yaml)** +1. **Fehlgeschlagenes Subgraph-Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -88,9 +88,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing file: ./src/old-lock.ts ``` -2. **New Grafted Subgraph Manifest (subgraph.yaml)** +2.
**Neues grafted Subgraph-Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,71 +117,71 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph - block: 6000000 # Last successfully indexed block + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph + block: 6000000 # Letzter erfolgreich indizierter Block ``` -**Explanation:** +**Erläuterung:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. -- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. -- **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. - - **block**: Block number where grafting should begin. +- **Aktualisierung der Datenquelle**: Der neue Subgraph zeigt auf 0xNewContractAddress, bei dem es sich um eine korrigierte Version des Smart Contracts handeln könnte. +- **Startblock**: Wird auf einen Block nach dem letzten erfolgreich indizierten Block gesetzt, um eine erneute Bearbeitung des Fehlers zu vermeiden. +- **Grafting-Konfiguration**: + - **base**: Bereitstellungs-ID des fehlgeschlagenen Subgraphen. + - **block**: Nummer des Blocks, in dem das Grafting beginnen soll. -3. **Deployment Steps** - - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). - - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- - **Deploy the Subgraph**: - - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - **Aktualisieren Sie den Code**: Implementieren Sie den Hotfix in Ihre Mapping-Skripte (z. B. handleWithdrawal). + - **Passen Sie das Manifest an**: Wie oben gezeigt, aktualisieren Sie die Datei `subgraph.yaml` mit den Grafting-Konfigurationen. + - **Stellen Sie den Subgraphen bereit**: + - Authentifizieren Sie sich mit der Graph CLI. + - Stellen Sie den neuen Subgraphen mit `graph deploy` bereit. -4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. - - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. +4. **Post-Bereitstellung** + - **Überprüfen Sie die Indizierung**: Prüfen Sie, ob der Subgraph vom Graft-Punkt aus korrekt indiziert ist. + - **Überwachen Sie Daten**: Stellen Sie sicher, dass neue Daten erfasst werden und der Hotfix wirksam ist. + - **Planen Sie die Wiederveröffentlichung**: Planen Sie die Bereitstellung einer nicht übertragenen Version für langfristige Stabilität. -## Warnings and Cautions +## Warnungen und Vorsichtshinweise -While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. +Obwohl Grafting ein leistungsfähiges Tool für die schnelle Bereitstellung von Hotfixes ist, gibt es bestimmte Szenarien, in denen es vermieden werden sollte, um die Datenintegrität zu wahren und eine optimale Leistung zu gewährleisten. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. 
Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. -- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Inkompatible Schemaänderungen**: Wenn Ihr Hotfix eine Änderung des Typs vorhandener Felder oder das Entfernen von Feldern aus Ihrem Schema erfordert, ist das Grafting nicht geeignet. Das Grafting erwartet, dass das Schema des neuen Subgraphen mit dem Schema des Basis-Subgraphen kompatibel ist. Inkompatible Änderungen können zu Dateninkonsistenzen und Fehlern führen, da die vorhandenen Daten nicht mit dem neuen Schema übereinstimmen. +- **Wesentliche Überarbeitungen der Mapping-Logik**: Wenn der Hotfix wesentliche Änderungen an der Mapping-Logik vornimmt, z. B. eine Änderung der Ereignisverarbeitung oder der Handler-Funktionen, funktioniert das Grafting möglicherweise nicht korrekt. Die neue Logik ist möglicherweise nicht mit den Daten kompatibel, die unter der alten Logik verarbeitet wurden, was zu falschen Daten oder einer fehlgeschlagenen Indizierung führt. +- **Bereitstellungen im Graph-Netzwerk**: Grafting wird nicht für Subgraphen empfohlen, die für das dezentrale Netzwerk (Mainnet) von The Graph bestimmt sind.
Es kann die Indizierung verkomplizieren und wird möglicherweise nicht von allen Indexierern vollständig unterstützt, was zu unerwartetem Verhalten oder erhöhten Kosten führen kann. Für Mainnet-Bereitstellungen ist es sicherer, den Subgraphen von Grund auf neu zu indizieren, um volle Kompatibilität und Zuverlässigkeit zu gewährleisten. -### Risk Management +### Risikomanagement -- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. -- **Testing**: Always test grafting in a development environment before deploying to production. +- **Datenintegrität**: Falsche Blocknummern können zu Datenverlust oder -duplizierung führen. +- **Testen**: Testen Sie das Grafting immer in einer Entwicklungsumgebung, bevor Sie es in der Produktion einsetzen. -## Conclusion +## Schlussfolgerung -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting ist eine effektive Strategie für die Bereitstellung von Hotfixes bei der Entwicklung von Subgraphen, die es Ihnen folgendes ermöglicht: -- **Quickly Recover** from critical errors without re-indexing. -- **Preserve Historical Data**, maintaining continuity for applications and users. -- **Ensure Service Availability** by minimizing downtime during critical fixes. +- **Schnelle Wiederherstellung** bei kritischen Fehlern ohne Neuindizierung. +- **Historische Daten aufbewahren**, um die Kontinuität für Anwendungen und Benutzer zu erhalten. +- **Sicherung der Serviceverfügbarkeit** durch Minimierung der Ausfallzeiten bei kritischen Reparaturen. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. +Es ist jedoch wichtig, das Grafting mit Bedacht einzusetzen und bewährte Verfahren zu befolgen, um die Risiken zu minimieren. 
Planen Sie nach der Stabilisierung Ihres Subgraphen mit dem Hotfix die Bereitstellung einer Version ohne Grafting, um die langfristige Wartbarkeit zu gewährleisten. ## Zusätzliche Ressourcen -- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting -- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. +- **[Grafting-Dokumentation](/subgraphs/cookbook/grafting/)**: Ersetzen eines Vertrags und Beibehaltung seiner Historie mit Grafting +- **[Verstehen der Bereitstellungs-IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Lernen Sie den Unterschied zwischen Bereitstellungs-ID und Subgraph-ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +Durch die Integration von Grafting in Ihren Subgraphen-Entwicklungs-Workflow können Sie Ihre Fähigkeit verbessern, schnell auf Probleme zu reagieren, und sicherstellen, dass Ihre Datendienste robust und zuverlässig bleiben. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3.
[Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..04ca2fd1e0db 100644 --- a/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/de/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,15 +1,15 @@ --- -title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +title: Best Practice 3 für Subgraphen - Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs +sidebarTitle: Unveränderliche Entitäten und Bytes als IDs --- ## TLDR -Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance. 
+Die Verwendung von unveränderlichen Entitäten und Bytes für IDs in unserer Datei `schema.graphql` [verbessert die Indizierungsgeschwindigkeit und die Abfrageleistung erheblich](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Immutable Entities +## Unveränderliche Entitäten -To make an entity immutable, we simply add `(immutable: true)` to an entity. +Um eine Entität unveränderlich zu machen, fügen wir einfach `(immutable: true)` zu einer Entität hinzu. ```graphql type Transfer @entity(immutable: true) { @@ -20,21 +20,21 @@ type Transfer @entity(immutable: true) { } ``` -By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. +Indem die Entität `Transfer` unveränderlich gemacht wird, ist Graph-Node in der Lage, die Entität effizienter zu verarbeiten, was die Indizierungsgeschwindigkeit und die Reaktionsfähigkeit bei Abfragen verbessert. -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Die Strukturen von unveränderlichen Entitäten werden sich in Zukunft nicht ändern. Eine ideale Entität, um eine unveränderliche Entität zu werden, wäre eine Entität, die direkt Onchain-Ereignisdaten protokolliert, z. B. ein `Transfer`-Ereignis, das als `Transfer`-Entität protokolliert wird. -### Under the hood +### Unter der Haube -Mutable entities have a 'block range' indicating their validity. Updating these entities requires the graph node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live and since they won't change, no checks or updates are required while writing, and no filtering is required during queries.
+Veränderliche Entitäten haben einen 'block range', der ihre Gültigkeit angibt. Bei der Aktualisierung dieser Entitäten muss der Graph-Knoten den Blockbereich früherer Versionen anpassen, was die Datenbankbelastung erhöht. Außerdem müssen Abfragen gefiltert werden, um nur aktive Entitäten zu finden. Unveränderliche Entitäten sind schneller, weil sie alle live sind und sich nicht ändern, so dass beim Schreiben keine Überprüfungen oder Aktualisierungen erforderlich sind und bei Abfragen keine Filterung erforderlich ist. -### When not to use Immutable Entities +### Wann man keine unveränderlichen Entitäten verwenden sollte -If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible. +Wenn Sie ein Feld wie `status` haben, das im Laufe der Zeit geändert werden muss, dann sollten Sie die Entität nicht unveränderlich machen. Ansonsten sollten Sie, wann immer möglich, unveränderliche Entitäten verwenden. -## Bytes as IDs +## Bytes als IDs -Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type. +Jede Entität benötigt eine ID. Im vorherigen Beispiel sehen wir, dass die ID bereits vom Typ Bytes ist. ```graphql type Transfer @entity(immutable: true) { @@ -45,19 +45,19 @@ type Transfer @entity(immutable: true) { } ``` -While other types for IDs are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs due to character strings taking twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account which is much more expensive than the bytewise comparison used to compare Byte strings. +Es sind zwar auch andere Typen für IDs möglich, z. B. 
String und Int8, es wird jedoch empfohlen, den Typ Bytes für alle IDs zu verwenden, da Zeichenketten doppelt so viel Platz wie Byte-Zeichenketten benötigen, um binäre Daten zu speichern, und Vergleiche von UTF-8-Zeichenketten das Gebietsschema berücksichtigen müssen, was sehr viel teurer ist als der byteweise Vergleich, der zum Vergleich von Byte-Zeichenketten verwendet wird. -### Reasons to Not Use Bytes as IDs +### Gründe, keine Bytes als IDs zu verwenden -1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. -3. Indexing and querying performance improvements are not desired. +1. Wenn Entitäts-IDs für den Menschen lesbar sein müssen, wie z. B. automatisch inkrementierte numerische IDs oder lesbare Zeichenketten, sollten Bytes für IDs nicht verwendet werden. +2. Wenn die Daten eines Subgraphen in ein anderes Datenmodell integriert werden, das keine Bytes als IDs verwendet, sollten Bytes als IDs nicht verwendet werden. +3. Verbesserungen der Indizierungs- und Abfrageleistung sind nicht erwünscht. -### Concatenating With Bytes as IDs +### Verkettung mit Bytes als IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +In vielen Subgraphen ist es gängige Praxis, zwei Eigenschaften eines Ereignisses durch String-Verkettung zu einer einzigen ID zu kombinieren, z. B. durch `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Da dies jedoch eine Zeichenkette zurückgibt, beeinträchtigt dies die Indizierung von Subgraphen und die Abfrageleistung erheblich.
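Der Unterschied lässt sich außerhalb von AssemblyScript konzeptionell so skizzieren (TypeScript-Skizze; `Uint8Array` steht hier stellvertretend für den graph-ts-Typ `Bytes`, und die Funktionsnamen sind frei gewählt, nicht Teil der graph-ts-API):

```typescript
// String-ID: lesbar, aber als UTF-8-Zeichenkette teurer zu speichern und zu vergleichen
function stringId(txHash: string, logIndex: number): string {
  return txHash + "-" + logIndex.toString();
}

// Bytes-ID: Transaktions-Hash plus 4-Byte-Index als kompakte Binärdarstellung
// (Annahme: Big-Endian; die tatsächliche Byte-Reihenfolge von concatI32() kann abweichen)
function bytesId(txHash: Uint8Array, logIndex: number): Uint8Array {
  const out = new Uint8Array(txHash.length + 4);
  out.set(txHash, 0); // Hash-Bytes an den Anfang kopieren
  new DataView(out.buffer).setInt32(txHash.length, logIndex, false); // Index anhängen
  return out;
}

// 32-Byte-Hash + 4 Byte Index ergeben eine ID fester Länge (36 Bytes)
const hash = new Uint8Array(32).fill(0xab);
console.log(stringId("0xab", 7)); // "0xab-7"
console.log(bytesId(hash, 7).length); // 36
```

Die Bytes-Variante liefert eine kompakte ID fester Länge, die byteweise verglichen werden kann; im Subgraphen selbst übernimmt `concatI32()` aus graph-ts diese Rolle.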
-Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. +Stattdessen sollten wir die Methode `concatI32()` zur Verkettung von Ereigniseigenschaften verwenden. Diese Strategie führt zu einer `Bytes`-ID, die viel leistungsfähiger ist. ```typescript export function handleTransfer(event: TransferEvent): void { @@ -74,11 +74,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -### Sorting With Bytes as IDs +### Sortieren mit Bytes als IDs -Sorting using Bytes as IDs is not optimal as seen in this example query and response. +Die Sortierung nach Bytes als IDs ist nicht optimal, wie in dieser Beispielabfrage und -antwort zu sehen ist. -Query: +Abfrage: ```graphql { @@ -91,7 +91,7 @@ Query: } ``` -Query response: +Antwort auf die Abfrage: ```json { @@ -120,9 +120,9 @@ Query response: } ``` -The IDs are returned as hex. +Die IDs werden als Hexadezimalzahlen zurückgegeben. -To improve sorting, we should create another field on the entity that is a BigInt. +Um die Sortierung zu verbessern, sollten wir ein weiteres Feld in der Entität erstellen, das ein BigInt ist. ```graphql type Transfer @entity { @@ -134,9 +134,9 @@ type Transfer @entity { } ``` -This will allow for sorting to be optimized sequentially. +Dadurch lässt sich die Sortierung sequenziell optimieren. -Query: +Abfrage: ```graphql { @@ -147,7 +147,7 @@ Query: } ``` -Query Response: +Antwort auf die Abfrage: ```json { @@ -170,22 +170,22 @@ Query Response: } ``` -## Conclusion +## Schlussfolgerung -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Es hat sich gezeigt, dass die Verwendung von unveränderlichen Entitäten und Bytes als IDs die Effizienz von Subgraphen deutlich verbessert.
Insbesondere haben Tests eine Steigerung der Abfrageleistung um bis zu 28 % und eine Beschleunigung der Indizierungsgeschwindigkeit um bis zu 48 % ergeben. -Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). +Lesen Sie mehr über die Verwendung von unveränderlichen Entitäten und Bytes als IDs in diesem Blogbeitrag von David Lutterkort, Software Engineer bei Edge & Node: [Zwei einfache Leistungsverbesserungen für Subgraphen](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6.
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/pruning.mdx b/website/src/pages/de/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..6688d9bdfabd 100644 --- a/website/src/pages/de/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/de/subgraphs/best-practices/pruning.mdx @@ -1,26 +1,26 @@ --- -title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +title: Best Practice 1 für Subgraphen - Verbessern Sie die Abfragegeschwindigkeit mit Subgraph Pruning +sidebarTitle: Pruning mit indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) entfernt archivierte Entitäten aus der Datenbank des Subgraphen bis zu einem bestimmten Block, und das Entfernen unbenutzter Entitäten aus der Datenbank eines Subgraphen verbessert die Abfrageleistung eines Subgraphen, oft dramatisch. Die Verwendung von `indexerHints` ist ein einfacher Weg, einen Subgraphen zu beschneiden. -## How to Prune a Subgraph With `indexerHints` +## Wie man einen Subgraphen mit `indexerHints` beschneidet -Add a section called `indexerHints` in the manifest. +Fügen Sie dem Manifest einen Abschnitt mit dem Namen `indexerHints` hinzu. -`indexerHints` has three `prune` options: +`indexerHints` hat drei Optionen für `prune`: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance.
This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. -- `prune: `: Sets a custom limit on the number of historical blocks to retain. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. +- `prune: auto`: Behält die minimal notwendige Historie, wie vom Indexierer festgelegt, bei und optimiert so die Abfrageleistung. Dies ist die allgemein empfohlene Einstellung und die Standardeinstellung für alle mit `graph-cli` >= 0.66.0 erstellten Subgraphen. +- `prune: `: Legt eine benutzerdefinierte Grenze für die Anzahl der zu speichernden historischen Blöcke fest. +- `prune: never`: Kein Pruning der historischen Daten; behält die gesamte Historie bei und ist der Standard, wenn es keinen `indexerHints`-Abschnitt gibt. Die Option `prune: never` sollte gewählt werden, wenn [Zeitreiseabfragen](/subgraphs/querying/graphql-api/#time-travel-queries) gewünscht sind. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +Wir können `indexerHints` zu unseren Subgraphen hinzufügen, indem wir unsere `subgraph.yaml` aktualisieren: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -31,26 +31,26 @@ dataSources: network: mainnet ``` -## Important Considerations +## Wichtige Überlegungen -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries.
Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- Wenn neben dem Pruning auch [Zeitreiseabfragen](/subgraphs/querying/graphql-api/#time-travel-queries) gewünscht werden, muss das Pruning genau durchgeführt werden, um die Funktionalität der Zeitreiseabfrage zu erhalten. Aus diesem Grund ist es im Allgemeinen nicht empfehlenswert, `indexerHints: prune: auto` mit Zeitreiseabfragen zu verwenden. Verwenden Sie stattdessen `indexerHints: prune: <>`, um genau auf eine Blockhöhe zu beschneiden, die die für Zeitreiseabfragen erforderlichen historischen Daten beibehält, oder verwenden Sie `prune: never`, um alle Daten zu erhalten. -- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- Es ist nicht möglich, [Grafting](/subgraphs/cookbook/grafting/) in einer Blockhöhe vorzunehmen, die beschnitten wurde. Wenn das Grafting routinemäßig durchgeführt wird und Pruning gewünscht ist, wird empfohlen, `indexerHints: prune: <>` zu verwenden, das eine bestimmte Anzahl von Blöcken (z. B. genug für sechs Monate) genau beibehält. -## Conclusion +## Schlussfolgerung -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Das Pruning unter Verwendung von `indexerHints` ist eine bewährte Methode für die Entwicklung von Subgraphen, die eine erhebliche Verbesserung der Abfrageleistung ermöglicht. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. 
[Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6.
[Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/best-practices/timeseries.mdx b/website/src/pages/de/subgraphs/best-practices/timeseries.mdx index 060540f991bf..9a49023d6f5c 100644 --- a/website/src/pages/de/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/de/subgraphs/best-practices/timeseries.mdx @@ -1,84 +1,88 @@ --- -title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +title: Best Practice 5 für Subgraphen - Vereinfachen und Optimieren mit Zeitreihen und Aggregationen +sidebarTitle: Zeitreihen und Aggregationen --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Die Nutzung der neuen Zeitreihen- und Aggregationsfunktion in Subgraphen kann sowohl die Indizierungsgeschwindigkeit als auch die Abfrageleistung erheblich verbessern. ## Überblick -Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. +Zeitreihen und Aggregationen reduzieren den Datenverarbeitungsaufwand und beschleunigen Abfragen, indem sie Aggregationsberechnungen in die Datenbank verlagern und den Mapping-Code vereinfachen. Dieser Ansatz ist besonders effektiv bei der Verarbeitung großer Mengen zeitbasierter Daten. -## Benefits of Timeseries and Aggregations +## Vorteile von Zeitreihen und Aggregationen -1. Improved Indexing Time +1. Verbesserte Indizierungszeit -- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. 
-- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. +- Weniger zu ladende Daten: Mappings verarbeiten weniger Daten, da die Rohdatenpunkte als unveränderliche Zeitreiheneinheiten gespeichert werden. +- Datenbank-verwaltete Aggregationen: Aggregationen werden automatisch von der Datenbank berechnet, wodurch sich die Arbeitsbelastung der Mappings verringert. -2. Simplified Mapping Code +2. Vereinfachter Mapping-Code -- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. -- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. +- Keine manuellen Berechnungen: Entwickler müssen keine komplexe Aggregationslogik mehr in Mappings schreiben. +- Geringere Komplexität: Vereinfacht die Codewartung und minimiert das Fehlerpotenzial. -3. Dramatically Faster Queries +3. Deutlich schnellere Abfragen -- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. -- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. +- Unveränderliche Daten: Alle Zeitreihendaten sind unveränderbar, was eine effiziente Speicherung und Abfrage ermöglicht. +- Effiziente Datentrennung: Die Aggregate werden getrennt von den Rohdaten der Zeitreihen gespeichert, so dass bei Abfragen deutlich weniger Daten verarbeitet werden müssen - oft um mehrere Größenordnungen weniger. -### Important Considerations +### Wichtige Überlegungen -- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. -- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. -- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. 
+- Unveränderliche Daten: Einmal geschriebene Zeitreihendaten können nicht mehr verändert werden, was die Datenintegrität gewährleistet und die Indizierung vereinfacht. +- Automatische ID- und Zeitstempel-Verwaltung: ID- und Zeitstempel-Felder werden automatisch von Graph-Node verwaltet, wodurch mögliche Fehler vermieden werden. +- Effiziente Datenspeicherung: Durch die Trennung von Rohdaten und Aggregaten wird die Speicherung optimiert, und Abfragen werden schneller ausgeführt. -## How to Implement Timeseries and Aggregations +## Implementierung von Zeitreihen und Aggregationen -### Defining Timeseries Entities +### Voraussetzungen -A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: +Sie benötigen `specVersion: 1.1.0` für diese Funktion. -- Immutable: Timeseries entities are always immutable. -- Mandatory Fields: - - `id`: Must be of type `Int8!` and is auto-incremented. - - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. +### Definition von Zeitreihenentitäten -Example: +Eine Zeitreihenentität stellt Rohdatenpunkte dar, die im Laufe der Zeit gesammelt wurden. Sie wird mit der Annotation `@entity(timeseries: true)` definiert. Zentrale Anforderungen: + +- Unveränderlich: Zeitreihenentitäten sind immer unveränderlich. +- Pflichtfelder: + - `id`: Muss vom Typ `Int8!` sein und wird automatisch inkrementiert. + - `timestamp`: Muss vom Typ `Timestamp!` sein und wird automatisch auf den Blockzeitstempel gesetzt. + +Beispiel: ```graphql type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` -### Defining Aggregation Entities +### Definition von Aggregationsentitäten -An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation.
Key components: +Eine Aggregationsentität berechnet aggregierte Werte aus einer Zeitreihenquelle. Sie wird mit der Annotation `@aggregation` definiert. Schlüsselkomponenten: -- Annotation Arguments: - - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). +- Annotationsargumente: + - `intervals`: Gibt Zeitintervalle an (z. B. `["hour", "day"]`). -Example: +Beispiel: ```graphql type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In diesem Beispiel aggregiert Stats das Feld `amount` von Data über stündliche und tägliche Intervalle und berechnet die Summe. -### Querying Aggregated Data +### Abfrage von aggregierten Daten -Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. +Aggregationen werden über Abfragefelder bereitgestellt, die das Filtern und Abrufen auf der Grundlage von Dimensionen und Zeitintervallen ermöglichen. -Example: +Beispiel:
-Example: +Beispiel: -### Timeseries Entity +### Zeitreihen-Entität ```graphql type TokenData @entity(timeseries: true) { @@ -116,7 +120,7 @@ type TokenData @entity(timeseries: true) { } ``` -### Aggregation Entity with Dimension +### Aggregationsentität mit Dimension ```graphql type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { @@ -129,15 +133,15 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { } ``` -- Dimension Field: token groups the data, so aggregates are computed per token. -- Aggregates: - - totalVolume: Sum of amount. - - priceUSD: Last recorded priceUSD. - - count: Cumulative count of records. +- Dimensionsfeld: Das Token gruppiert die Daten, so dass die Aggregate pro Token berechnet werden. +- Aggregate: + - totalVolume: Summe der Beträge. + - priceUSD: Letzter aufgezeichneter Preis in USD. + - count: Kumulative Anzahl der Datensätze. -### Aggregation Functions and Expressions +### Aggregationsfunktionen und Ausdrücke -Supported aggregation functions: +Unterstützte Aggregationsfunktionen: - sum - count @@ -146,50 +150,50 @@ Supported aggregation functions: - first - last -### The arg in @aggregate can be +### Das Argument in @aggregate kann sein -- A field name from the timeseries entity. -- An expression using fields and constants. +- Ein Feldname aus der Zeitreihenentität. +- Ein Ausdruck mit Feldern und Konstanten. 
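Ein Ausdrucks-Argument lässt sich direkt im Schema verwenden. Skizze auf Basis der oben gezeigten `TokenData`-Zeitreihe (Annahmen: `TokenData` enthält die Felder `priceUSD` und `amount` sowie eine Dimension `token: Bytes!`; der Entitätsname `TokenVolume` und das Aggregat `volumeUSD` sind frei gewählt):

```graphql
type TokenVolume @aggregation(intervals: ["hour", "day"], source: "TokenData") {
  id: Int8!
  timestamp: Timestamp!
  token: Bytes!
  # Ausdruck statt einfachem Feldnamen: Preis mal Menge je Datenpunkt, dann summiert
  volumeUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
}
```

Der Ausdruck wird pro Rohdatenpunkt ausgewertet, bevor die Aggregationsfunktion angewendet wird.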
-### Examples of Aggregation Expressions +### Beispiele für Aggregationsausdrücke -- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \_ amount") -- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") -- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") +- Summe Tokenwert: @aggregate(fn: "sum", arg: "priceUSD \_ amount") +- Größter positiver Betrag: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Bedingte Summe: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") -Supported operators and functions include basic arithmetic (+, -, \_, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. +Zu den unterstützten Operatoren und Funktionen gehören grundlegende arithmetische Operatoren (+, -, \_, /), Vergleichsoperatoren, logische Operatoren (and, or, not) und SQL-Funktionen wie greatest, least, coalesce usw. -### Query Parameters +### Abfrage-Parameter -- interval: Specifies the time interval (e.g., "hour"). -- where: Filters based on dimensions and timestamp ranges. -- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). +- interval: Gibt das Zeitintervall an (z. B. "hour"). +- where: Filter auf der Grundlage von Dimensionen und Zeitstempelbereichen. +- timestamp_gte / timestamp_lt: Filter für Start- und Endzeiten (Mikrosekunden seit Epoche). -### Notes +### Anmerkungen -- Sorting: Results are automatically sorted by timestamp and id in descending order. -- Current Data: An optional current argument can include the current, partially filled interval. +- Sortieren: Die Ergebnisse werden automatisch nach Zeitstempel und ID in absteigender Reihenfolge sortiert. +- Aktuelle Daten: Ein optionales Argument current kann das aktuelle, teilweise gefüllte Intervall einbeziehen. 
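Eine Beispielabfrage, die diese Parameter kombiniert, könnte etwa so aussehen. (Der Sammelfeldname `tokenStats_collection` folgt der üblichen graph-node-Namenskonvention für Aggregationen und ist hier ebenso wie die Token-Adresse eine Annahme zur Veranschaulichung; die Feldnamen stammen aus dem TokenStats-Beispiel oben.)

```graphql
{
  tokenStats_collection(
    interval: "hour"
    where: {
      token: "0x1234..." # hypothetische Token-Adresse, nur zur Illustration
      timestamp_gte: 1704067200000000 # Mikrosekunden seit Epoche
    }
  ) {
    id
    timestamp
    token
    totalVolume
    priceUSD
    count
  }
}
```

Ohne das Argument `where` liefert die Abfrage alle Intervalle, absteigend nach Zeitstempel sortiert.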
-### Conclusion +### Schlussfolgerung -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Die Implementierung von Zeitreihen und Aggregationen in Subgraphen ist ein bewährtes Verfahren für Projekte, die mit zeitbasierten Daten arbeiten. Dieser Ansatz: -- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. -- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. -- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. +- Verbessert die Leistung: Beschleunigt die Indizierung und Abfrage durch Reduzierung des Datenverarbeitungs-Overheads. +- Vereinfacht die Entwicklung: Manuelle Aggregationslogik in Mappings ist nicht mehr erforderlich. +- Skaliert effizient: Verarbeitet große Datenmengen, ohne Kompromisse bei Geschwindigkeit und Reaktionsfähigkeit einzugehen. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +Durch die Übernahme dieses Musters können Entwickler effizientere und besser skalierbare Subgraphen erstellen und den Endbenutzern einen schnelleren und zuverlässigeren Datenzugriff bieten. Um mehr über die Implementierung von Zeitreihen und Aggregationen zu erfahren, lesen Sie die [Readme-Datei zu Zeitreihen und Aggregationen](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) und ziehen Sie in Erwägung, mit dieser Funktion in Ihren Subgraphen zu experimentieren. -## Subgraph Best Practices 1-6 +## Best Practices 1-6 für Subgraphen -1. 
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Verbesserung der Abfragegeschwindigkeit mit Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Verbesserung der Indizierung und der Reaktionsfähigkeit bei Abfragen durch Verwendung von @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Verbesserung der Indizierungs- und Abfrageleistung durch Verwendung unveränderlicher Entitäten und Bytes als IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Verbesserung der Indizierungsgeschwindigkeit durch Vermeidung von `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Vereinfachen und Optimieren mit Zeitreihen und Aggregationen](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Grafting für schnelle Hotfix-Bereitstellung verwenden](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/de/subgraphs/billing.mdx b/website/src/pages/de/subgraphs/billing.mdx index 7014ebf64d61..2fed1d944f78 100644 --- a/website/src/pages/de/subgraphs/billing.mdx +++ b/website/src/pages/de/subgraphs/billing.mdx @@ -1,22 +1,24 @@ --- -title: Billing +title: Abrechnung --- -## Querying Plans +## Abfrage-Pläne Es gibt zwei Pläne für die Abfrage von Subgraphen in The Graph Network. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. 
This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Kostenloser Plan**: Der Free Plan beinhaltet 100.000 kostenlose monatliche Abfragen mit vollem Zugriff auf die Subgraph Studio Testumgebung. Dieser Plan ist für Hobbyisten, Hackathon-Teilnehmer und diejenigen mit Nebenprojekten gedacht, die The Graph ausprobieren möchten, bevor sie ihre Dapp skalieren. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Wachstumsplan (Growth Plan)**: Der Growth Plan beinhaltet alles, was im Free Plan enthalten ist, wobei alle Abfragen nach 100.000 monatlichen Abfragen eine Zahlung mit GRT oder Kreditkarte erfordern. Der Growth Plan ist flexibel genug, um Teams abzudecken, die Dapps für eine Vielzahl von Anwendungsfällen etabliert haben. + +Erfahren Sie mehr über die Preisgestaltung [hier](https://thegraph.com/studio-pricing/). ## Abfrage Zahlungen mit Kreditkarte - Um die Abrechnung mit Kredit-/Debitkarten einzurichten, müssen die Benutzer Subgraph Studio (https://thegraph.com/studio/) aufrufen - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). - 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". + 1. Rufen Sie die Seite [Subgraph Studio Billing](https://thegraph.com/studio/subgraphs/billing/) auf. + 2. Klicken Sie oben rechts auf der Seite auf die Schaltfläche „Wallet verbinden“. Sie werden zur Wallet-Auswahlseite weitergeleitet. Wählen Sie Ihr Wallet aus und klicken Sie auf „Verbinden“. 3. 
Wählen Sie „ Upgrade Plan“, wenn Sie vom Free Plan upgraden oder wählen Sie „Manage Plan“, wenn Sie GRT bereits in der Vergangenheit zu Ihrem Abrechnungssaldo hinzugefügt haben. Als Nächstes können Sie die Anzahl der Abfragen schätzen, um einen Kostenvoranschlag zu erhalten, dieser Schritt ist jedoch nicht erforderlich. 4. Um eine Zahlung per Kreditkarte zu wählen, wählen Sie „Kreditkarte“ als Zahlungsmethode und geben Sie Ihre Kreditkartendaten ein. Diejenigen, die Stripe bereits verwendet haben, können die Funktion „Link“ verwenden, um ihre Daten automatisch auszufüllen. - Die Rechnungen werden am Ende eines jeden Monats erstellt. Für alle Abfragen, die über das kostenlose Kontingent hinausgehen, muss eine aktive Kreditkarte hinterlegt sein. @@ -25,9 +27,9 @@ Es gibt zwei Pläne für die Abfrage von Subgraphen in The Graph Network. Subgraph-Nutzer können The Graph Token (oder GRT) verwenden, um für Abfragen im The Graph Network zu bezahlen. Mit GRT werden Rechnungen am Ende eines jeden Monats bearbeitet und erfordern ein ausreichendes Guthaben an GRT, um Abfragen über die Free-Plan-Quote von 100.000 monatlichen Abfragen hinaus durchzuführen. Sie müssen die von Ihren API-Schlüsseln generierten Gebühren bezahlen. Mit dem Abrechnungsvertrag können Sie: -- Add and withdraw GRT from your account balance. -- Keep track of your balances based on how much GRT you have added to your account balance, how much you have removed, and your invoices. -- Automatically pay invoices based on query fees generated, as long as there is enough GRT in your account balance. +- GRT zu Ihrem Rechnungsguthaben hinzufügen oder abziehen. +- Ihre Salden und Ihre Rechnungen im Auge behalten, basierend darauf, wie viel GRT Sie Ihrem Abrechnungsguthaben hinzugefügt und wie viel Sie entfernt haben. +- Rechnungen automatisch auf der Grundlage der generierten Abfragegebühren bezahlen, solange Ihr Rechnungssaldo über genügend GRT verfügt. 
### GRT auf Arbitrum oder Ethereum @@ -45,17 +47,17 @@ Um für Abfragen zu bezahlen, brauchen Sie GRT auf Arbitrum. Hier sind ein paar - Alternativ können Sie GRT auch direkt auf Arbitrum über einen dezentralen Handelsplatz erwerben. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> In diesem Abschnitt wird davon ausgegangen, dass Sie bereits GRT in Ihrem Geldbeutel haben und auf Arbitrum sind. Wenn Sie keine GRT haben, können Sie lernen, wie man GRT [hier](#getting-grt) bekommt. Sobald Sie GRT überbrücken, können Sie es zu Ihrem Rechnungssaldo hinzufügen. ### Hinzufügen von GRT mit einer Wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. +2. Klicken Sie oben rechts auf der Seite auf die Schaltfläche „Wallet verbinden“. Sie werden zur Wallet-Auswahlseite weitergeleitet. Wählen Sie Ihr Wallet aus und klicken Sie auf „Verbinden“. 3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. 4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Vorschläge für die Anzahl der Abfragen, die Sie verwenden können, finden Sie auf unserer Seite **Häufig gestellte Fragen**. 5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. 6. 
Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. @@ -68,20 +70,20 @@ Sobald Sie GRT überbrücken, können Sie es zu Ihrem Rechnungssaldo hinzufügen ### GRT über eine Wallet abheben -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. 2. Klicken Sie auf die Schaltfläche „Connect Wallet“ in der oberen rechten Ecke der Seite. Wählen Sie Ihre Wallet aus und klicken Sie auf „Verbinden“. 3. Klicken Sie auf die Schaltfläche „Verwalten“ in der oberen rechten Ecke der Seite. Wählen Sie „GRT abheben“. Ein Seitenfenster wird angezeigt. 4. Geben Sie den Betrag der GRT ein, den Sie abheben möchten. 5. Klicken Sie auf „GRT abheben“, um die GRT von Ihrem Kontostand abzuheben. Unterschreiben Sie die zugehörige Transaktion in Ihrer Wallet. Dies kostet Gas. Die GRT werden an Ihre Arbitrum Wallet gesendet. 6. Sobald die Transaktion bestätigt ist, werden die GRT von Ihrem Kontostand in Ihrem Arbitrum Wallet abgezogen. -### Adding GRT using a multisig wallet +### Hinzufügen von GRT mit einer Multisig-Wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. Rufen Sie die [Subgraph Studio Abrechnungsseite](https://thegraph.com/studio/subgraphs/billing/) auf. +2. Klicken Sie auf die Schaltfläche „Connect Wallet“ in der oberen rechten Ecke der Seite. 
Wählen Sie Ihre Wallet aus und klicken Sie auf „Verbinden“. Wenn Sie [Gnosis-Safe](https://gnosis-safe.io/) verwenden, können Sie sowohl Ihre Multisig- als auch Ihre signierende Wallet verbinden. Signieren Sie dann die zugehörige Nachricht. Dies kostet kein Gas. 3. Wählen Sie die Schaltfläche „ Manage “ in der oberen rechten Ecke. Erstmalige Nutzer sehen die Option „Upgrade auf den Wachstumsplan“, während wiederkehrende Nutzer auf „Von der Wallet einzahlen“ klicken. 4. Verwenden Sie den Slider, um die Anzahl der Abfragen zu schätzen, die Sie monatlich erwarten. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Vorschläge für die Anzahl der Abfragen, die Sie verwenden können, finden Sie auf unserer Seite **Häufig gestellte Fragen**. 5. Wählen Sie „Kryptowährung“. GRT ist derzeit die einzige Kryptowährung, die im The Graph Network akzeptiert wird. 6. Wählen Sie die Anzahl der Monate, die Sie im Voraus bezahlen möchten. - Die Zahlung im Voraus verpflichtet Sie nicht zu einer zukünftigen Nutzung. Ihnen wird nur das berechnet, was Sie verbrauchen, und Sie können Ihr Guthaben jederzeit abheben. @@ -99,7 +101,7 @@ In diesem Abschnitt erfahren Sie, wie Sie GRT dazu bringen können, die Abfrageg Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie GRT. 
@@ -107,19 +109,19 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. 6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Einkauf. Überprüfen Sie Ihren Einkauf und klicken Sie auf „GRT kaufen“. 8. Bestätigen Sie Ihren Kauf. Bestätigen Sie Ihren Kauf und Sie haben GRT erfolgreich gekauft. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können das GRT von Ihrem Konto auf Ihre Wallet wie z. B. [MetaMask](https://metamask.io/) übertragen. - Um GRT auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite. - Klicken Sie auf die Schaltfläche „Senden“ neben dem GRT Konto. - Geben Sie den Betrag an GRT ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -Bitte beachten Sie, dass Coinbase Sie bei größeren Kaufbeträgen möglicherweise 7-10 Tage warten lässt, bevor Sie den vollen Betrag in eine Krypto-Wallet überweisen. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Sie können mehr über den Erwerb von GRT auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Gehen Sie zu [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. 
Sobald Sie Ihre Identität überprüft haben, können Sie GRT kaufen. Dazu klicken Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage. 4. Sie werden zu einer Seite weitergeleitet, auf der Sie die Währung auswählen können, die Sie kaufen möchten. Wählen Sie GRT. @@ -127,27 +129,27 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Binance. 6. Wählen Sie die Menge an GRT, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „GRT kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie werden Ihr GRT in Ihrer Binance Spot Wallet sehen können. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Sie können das GRT von Ihrem Konto auf Ihre Wallet wie [MetaMask](https://metamask.io/) abheben. + - [Um das GRT auf Ihr Wallet abzuheben](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570), fügen Sie die Adresse Ihres Wallets der Whitelist für Abhebungen hinzu. - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf Abheben und wählen Sie GRT. - Geben Sie den GRT-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, die auf der Whitelist steht. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Sie können mehr über den Erwerb von GRT auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. ### Uniswap So können Sie GRT auf Uniswap kaufen. -1. 
Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Gehen Sie zu [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) und verbinden Sie Ihre Wallet. 2. Wählen Sie den Token, von dem Sie tauschen möchten. Wählen Sie ETH. 3. Wählen Sie den Token, in den Sie tauschen möchten. Wählen Sie GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Vergewissern Sie sich, dass Sie gegen den richtigen Token tauschen. Die GRT Smart Contract Adresse auf Arbitrum One ist: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Geben Sie den Betrag an ETH ein, den Sie tauschen möchten. 5. Klicken Sie auf „Swap“. 6. Bestätigen Sie die Transaktion in Ihrer Wallet und warten Sie auf die Abwicklung der Transaktion. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Sie können mehr über den Erwerb von GRT auf Uniswap [hier](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-) erfahren. ## Ether erhalten @@ -157,7 +159,7 @@ In diesem Abschnitt erfahren Sie, wie Sie Ether (ETH) erhalten können, um Trans Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Gehen Sie zu [Coinbase](https://www.coinbase.com/) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. 
Sobald Sie Ihre Identität bestätigt haben, können Sie ETH kaufen, indem Sie auf die Schaltfläche „Kaufen/Verkaufen“ oben rechts auf der Seite klicken. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH. @@ -165,20 +167,20 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von GRT auf Coinbase. 6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie haben erfolgreich ETH gekauft. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können die ETH von Ihrem Coinbase-Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) übertragen. - Um die ETH auf Ihre Wallet zu übertragen, klicken Sie auf die Schaltfläche „Konten“ oben rechts auf der Seite. - Klicken Sie auf die Schaltfläche „Senden“ neben dem ETH-Konto. - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Wallet-Adresse, an die Sie ihn senden möchten. - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Sie können mehr über den Erwerb von ETH auf Coinbase [hier](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency) erfahren. ### Binance Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Gehen Sie zu [Binance](https://www.binance.com/en) und erstellen Sie ein Konto. 2. Sobald Sie ein Konto erstellt haben, müssen Sie Ihre Identität durch ein Verfahren verifizieren, das als KYC (oder Know Your Customer) bekannt ist. 
Dies ist ein Standardverfahren für alle zentralisierten oder verwahrten Krypto-Börsen. 3. Sobald Sie Ihre Identität verifiziert haben, kaufen Sie ETH, indem Sie auf die Schaltfläche „Jetzt kaufen“ auf dem Banner der Homepage klicken. 4. Wählen Sie die Währung, die Sie kaufen möchten. Wählen Sie ETH. @@ -186,14 +188,14 @@ Dies ist eine Schritt-für-Schritt-Anleitung für den Kauf von ETH auf Binance. 6. Geben Sie die Menge an ETH ein, die Sie kaufen möchten. 7. Überprüfen Sie Ihren Kauf und klicken Sie auf „ETH kaufen“. 8. Bestätigen Sie Ihren Kauf und Sie werden Ihre ETH in Ihrer Binance Spot Wallet sehen. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Sie können die ETH von Ihrem Konto auf Ihr Wallet wie [MetaMask](https://metamask.io/) abheben. - Um die ETH auf Ihre Wallet abzuheben, fügen Sie die Adresse Ihrer Wallet zur Abhebungs-Whitelist hinzu. - Klicken Sie auf die Schaltfläche „Wallet“, klicken Sie auf „withdraw“ und wählen Sie ETH. - Geben Sie den ETH-Betrag ein, den Sie senden möchten, und die Adresse der Wallet, die auf der Whitelist steht, an die Sie den Betrag senden möchten. - Stellen Sie sicher, dass Sie an Ihre Ethereum Wallet Adresse auf Arbitrum One senden. - Klicken Sie auf „Weiter“ und bestätigen Sie Ihre Transaktion. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Sie können mehr über den Erwerb von ETH auf Binance [hier](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582) erfahren. ## FAQs zur Rechnungsstellung @@ -203,11 +205,11 @@ Sie müssen nicht im Voraus wissen, wie viele Abfragen Sie benötigen werden. Ih Wir empfehlen Ihnen, die Anzahl der Abfragen, die Sie benötigen, zu überschlagen, damit Sie Ihr Guthaben nicht häufig aufstocken müssen. 
Eine gute Schätzung für kleine bis mittelgroße Anwendungen ist, mit 1 Mio. bis 2 Mio. Abfragen pro Monat zu beginnen und die Nutzung in den ersten Wochen genau zu überwachen. Bei größeren Anwendungen ist es sinnvoll, die Anzahl der täglichen Besuche auf Ihrer Website mit der Anzahl der Abfragen zu multiplizieren, die Ihre aktivste Seite beim Öffnen auslöst. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Natürlich können sich sowohl neue als auch bestehende Nutzer an das BD-Team von Edge & Node wenden, um mehr über die voraussichtliche Nutzung zu erfahren. ### Kann ich GRT von meinem Rechnungssaldo abheben? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Ja, Sie können jederzeit GRT, die nicht bereits für Abfragen verwendet wurden, von Ihrem Abrechnungskonto abheben. Der Abrechnungsvertrag ist nur dafür gedacht, GRT aus dem Ethereum-Mainnet in das Arbitrum-Netzwerk zu übertragen. Wenn Sie Ihre GRT von Arbitrum zurück ins Ethereum Mainnet transferieren möchten, müssen Sie die [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) verwenden. ### Was passiert, wenn mein Guthaben aufgebraucht ist? Werde ich eine Warnung erhalten? 
diff --git a/website/src/pages/de/subgraphs/cookbook/arweave.mdx b/website/src/pages/de/subgraphs/cookbook/arweave.mdx index 02dd4f8398fc..975f84b7a277 100644 --- a/website/src/pages/de/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/de/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. 
Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraf-Manifest-Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema-Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript-Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Beispiele von Subgrafen -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/de/subgraphs/cookbook/enums.mdx b/website/src/pages/de/subgraphs/cookbook/enums.mdx index 0b2fe58b4e34..911f6f54a340 100644 --- a/website/src/pages/de/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/de/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
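As a sketch of the rule above, that only predefined enum values may be assigned via their string representation, the following TypeScript models a hypothetical `OwnershipStage` enum with a guarded setter. The entity shape and all names are illustrative, not `graph-ts` API.

```typescript
// Hypothetical mirror of a schema enum such as:
//   enum OwnershipStage { OriginalOwner SecondOwner ThirdOwner }
const OWNERSHIP_STAGES = ["OriginalOwner", "SecondOwner", "ThirdOwner"] as const;
type OwnershipStage = (typeof OWNERSHIP_STAGES)[number];

interface TokenEntity {
  id: string;
  ownershipStage: OwnershipStage;
}

// The enum field is set using the value's string representation; anything
// outside the predefined set is rejected, mimicking the validation applied
// when an entity is saved.
function setOwnershipStage(token: TokenEntity, value: string): TokenEntity {
  if (!(OWNERSHIP_STAGES as readonly string[]).includes(value)) {
    throw new Error(`"${value}" is not a valid OwnershipStage`);
  }
  return { ...token, ownershipStage: value as OwnershipStage };
}
```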
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/de/subgraphs/cookbook/grafting.mdx b/website/src/pages/de/subgraphs/cookbook/grafting.mdx index ee92710b3059..72b0f391fb28 100644 --- a/website/src/pages/de/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/de/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will cover a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraf-Manifest-Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
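The copy-then-continue behavior described above can be modeled in a few lines of TypeScript. This is an illustrative simulation of graft semantics, not Graph Node code.

```typescript
// Entities from the base Subgraph at block numbers up to and including the
// graft block are copied; the new Subgraph's own indexing takes over after it.
interface Entity {
  id: string;
  block: number;
}

function graft(base: Entity[], graftBlock: number, fresh: Entity[]): Entity[] {
  const copied = base.filter((e) => e.block <= graftBlock); // inherited history
  const indexed = fresh.filter((e) => e.block > graftBlock); // newly indexed data
  return [...copied, ...indexed];
}
```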
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Zusätzliche Ressourcen diff --git a/website/src/pages/de/subgraphs/cookbook/near.mdx b/website/src/pages/de/subgraphs/cookbook/near.mdx index d748e4787563..09e60f03dba0 100644 --- a/website/src/pages/de/subgraphs/cookbook/near.mdx +++ b/website/src/pages/de/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. 
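The two NEAR handler kinds described earlier can be sketched as a small dispatcher. The shapes here are hypothetical; real handlers receive `near.Block` and `near.ReceiptWithOutcome` in AssemblyScript.

```typescript
// Block handlers run on every new block; receipt handlers run only for
// receipts executed at the account a data source watches (source.account).
interface NearReceipt {
  receiverId: string;
}
interface NearBlock {
  height: number;
  receipts: NearReceipt[];
}

function dispatch(
  block: NearBlock,
  watchedAccount: string,
  onBlock: (b: NearBlock) => void,
  onReceipt: (r: NearReceipt) => void,
): void {
  onBlock(block); // fires for every block
  for (const receipt of block.receipts) {
    if (receipt.receiverId === watchedAccount) onReceipt(receipt); // account filter
  }
}
```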
-There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraf-Manifest-Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema-Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. 
This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript-Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Beispiele von Subgrafen -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. 
We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/de/subgraphs/cookbook/polymarket.mdx b/website/src/pages/de/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/de/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/de/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. 
+The visual query editor helps you test sample queries from your Subgraph.

You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.

@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on

## Polymarket's GraphQL Schema

-The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).

### Polymarket Subgraph Endpoint

@@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra

1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
2. Go to https://thegraph.com/studio/apikeys/ to create an API key

-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.

100k queries per month are free which is perfect for your side project!

@@ -143,6 +143,6 @@ axios(graphQLRequest)

### Additional resources

-For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).

-To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
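The API-key flow described in this file — create a key in Subgraph Studio, then query the Subgraph's gateway endpoint — can be sketched as a small request builder. This is an illustrative sketch only: `buildSubgraphRequest` and the placeholder values are assumptions, not part of the Polymarket docs; the `_meta` query is a generic query that any Graph Node-served Subgraph answers.

```typescript
// Minimal sketch of building an authenticated Subgraph request.
// The helper name and placeholder values are illustrative assumptions;
// the gateway URL shape follows the docs' examples.
interface SubgraphRequest {
  url: string
  options: { method: string; headers: Record<string, string>; body: string }
}

function buildSubgraphRequest(apiKey: string, deploymentId: string, query: string): SubgraphRequest {
  return {
    url: `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${deploymentId}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

// Example usage with placeholder values; pass `url` and `options` to fetch or axios.
const request = buildSubgraphRequest(
  '<your-api-key>',
  'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp',
  '{ _meta { block { number } } }',
)
```

The same builder works for any Subgraph on the network — only the deployment ID and query change.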
diff --git a/website/src/pages/de/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/de/subgraphs/cookbook/secure-api-keys-nextjs.mdx index 4122439152b8..ae47c7e66060 100644 --- a/website/src/pages/de/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/de/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Überblick -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
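The server-side pattern just described can be sketched as follows. In a real Next.js server component the key would come from a server-only environment variable (e.g. `process.env.GRAPH_API_KEY`); here it is a parameter so the shape of the pattern stays testable. All names are illustrative assumptions, not the cookbook's actual code.

```typescript
// Sketch of keeping the API key server-side: the authenticated endpoint is
// built and used on the server, and only query *results* cross the
// server/client boundary. Names here are illustrative.
function buildServerSideQuery(apiKey: string, subgraphId: string, query: string) {
  return {
    // Built and used on the server; never send this URL to the browser.
    endpoint: `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    payload: JSON.stringify({ query }),
  }
}

// Only the query result should be serialized into client props.
function toClientProps<T>(data: T): { data: T } {
  return { data }
}
```

The point of the split is that nothing returned by `toClientProps` ever contains the endpoint that embeds the key.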
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) @@ -118,6 +118,6 @@ Start our Next.js application using `npm run dev`. Verify that the server compon ![Server-side rendering](/img/api-key-server-side-rendering.png) -### Conclusion +### Schlussfolgerung By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. diff --git a/website/src/pages/de/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/de/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..48f902788f70 --- /dev/null +++ b/website/src/pages/de/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Überblick + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. 
+ +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Los geht’s + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Besonderheiten + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. 
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from three source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance.
+
+## Zusätzliche Ressourcen
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/de/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/de/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..2d0c8b31f05f
--- /dev/null
+++ b/website/src/pages/de/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Einführung
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. 
**Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Los geht’s + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance.
+
+## Zusätzliche Ressourcen
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/de/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/de/subgraphs/cookbook/subgraph-debug-forking.mdx
index 6610f19da66d..91aa7484d2ec 100644
--- a/website/src/pages/de/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/de/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@
title: Quick and Easy Subgraph Debugging Using Forks
---

-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between making quick changes for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!

## Ok, what is it?

-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).

-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.

## What?! How?

-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.

## Please, show me some code!

-To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.

Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:

@@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
 }
 ```

-Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.

The usual way to attempt a fix is:

1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Wait for it to sync-up.
4. If it breaks again go back to 1, otherwise: Hooray!
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/de/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/de/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/de/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/de/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.

-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/de/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/de/subgraphs/cookbook/transfer-to-the-graph.mdx index a97a3c618c03..19320be3d304 100644 --- a/website/src/pages/de/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/de/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Gehen Sie zu [Subgraph Studio] (https://thegraph.com/studio/) und verbinden Sie Ihre Wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Verwendung von [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network

@@ -70,17 +70,17 @@ graph deploy --ipfs-hash

### Query Your Subgraph

-> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.

-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.

#### Beispiel

-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

![Query URL](/img/cryptopunks-screenshot-transfer.png)

-The query URL for this subgraph is:
+The query URL for this Subgraph is:

```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK

@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the

### Monitor Subgraph Status

-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
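The gateway query URL pattern shown above can be assembled programmatically. This is a minimal sketch assuming the `gateway-arbitrum` endpoint format from the example; `gatewayQueryUrl` is a hypothetical helper for illustration, not part of any Graph SDK:

```typescript
// Hypothetical helper: build a Graph gateway query URL from an API key and a
// Subgraph deployment ID, following the URL shape shown in the example above.
function gatewayQueryUrl(apiKey: string, subgraphId: string): string {
  return `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

// Substituting the placeholder API key and the CryptoPunks Subgraph ID:
const url = gatewayQueryUrl(
  'your-own-api-key',
  'HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK',
)
```

A GraphQL query POSTed to this URL (with a real API key) is served by the decentralized network's Indexers.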
### Zusätzliche Ressourcen

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).

diff --git a/website/src/pages/de/subgraphs/developing/_meta-titles.json b/website/src/pages/de/subgraphs/developing/_meta-titles.json
index 01a91b09ed77..7035d7a7491b 100644
--- a/website/src/pages/de/subgraphs/developing/_meta-titles.json
+++ b/website/src/pages/de/subgraphs/developing/_meta-titles.json
@@ -1,6 +1,6 @@
{
-  "creating": "Creating",
-  "deploying": "Deploying",
-  "publishing": "Publishing",
-  "managing": "Managing"
+  "creating": "Erstellen",
+  "deploying": "Bereitstellung",
+  "publishing": "Veröffentlichung",
+  "managing": "Verwaltung"
}

diff --git a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
index 1a8debdf98c5..38b0aead992e 100644
--- a/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/advanced.mdx
@@ -1,43 +1,43 @@
---
-title: Advanced Subgraph Features
+title: Erweiterte Subgraph-Funktionen
---

## Überblick

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Fügen Sie fortgeschrittene Subgraph-Funktionen hinzu und implementieren Sie sie, um Ihre Subgraphen zu verbessern.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Ab `specVersion` `0.0.4` müssen Subgraph-Funktionen explizit im Abschnitt `features` auf der obersten Ebene der Manifestdatei unter Verwendung ihres `camelCase`-Namens deklariert werden, wie in der folgenden Tabelle aufgeführt:

-| Feature | Name |
-| ---------------------------------------------------- | ---------------- |
-| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
-| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
+| Funktion | Name |
+| ----------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
+| [Volltextsuche](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |

-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+Wenn ein Subgraph beispielsweise die Funktionen **Volltextsuche** und **Nicht-fatale Fehler** verwendet, sollte das Feld `features` im Manifest lauten:

```yaml
-specVersion: 0.0.4
-description: Gravatar for Ethereum
-features:
+specVersion: 1.3.0
+description: Gravatar für Ethereum
+features:
  - fullTextSearch
  - nonFatalErrors
-dataSources: ...
+dataSources: ...
```

-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Beachten Sie, dass die Verwendung einer Funktion ohne deren Deklaration zu einem **Validierungsfehler** bei der Bereitstellung des Subgraphen führt, aber keine Fehler auftreten, wenn eine Funktion deklariert, aber nicht verwendet wird.
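The declaration rule above can be sketched as a small check: a feature that is used but not declared fails validation, while a declared-but-unused feature is harmless. This is an illustrative model only, not Graph Node's actual validation code; `undeclaredFeatures` is a hypothetical name:

```typescript
// Illustrative sketch of the manifest feature-declaration rule described above.
// Returns the features that are used without being declared (a validation error).
function undeclaredFeatures(declared: string[], used: string[]): string[] {
  return used.filter((feature) => !declared.includes(feature))
}

// Manifest declares only fullTextSearch, but the Subgraph also uses nonFatalErrors:
const missing = undeclaredFeatures(['fullTextSearch'], ['fullTextSearch', 'nonFatalErrors'])

// Declaring a feature that is never used produces no error:
const unusedOk = undeclaredFeatures(['grafting'], [])
```

Here `missing` would contain `nonFatalErrors`, mirroring the validation error the note describes, while `unusedOk` is empty.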
-## Timeseries and Aggregations
+## Subgraph Best Practice 5: Timeseries and Aggregations

-Prerequisites:
+Voraussetzungen:

-- Subgraph specVersion must be ≥1.1.0.
+- Subgraph specVersion muss ≥1.1.0 sein.

-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Zeitreihen und Aggregationen ermöglichen es Ihrem Subgraph, Statistiken wie den täglichen Durchschnittspreis, stündliche Gesamttransfers und mehr zu verfolgen.

-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+Mit dieser Funktion werden zwei neue Typen von Subgraph-Entitäten eingeführt. Zeitreihen-Entitäten zeichnen Datenpunkte mit Zeitstempeln auf. Aggregations-Entitäten führen vordeklarierte Berechnungen an den Zeitreihen-Datenpunkten auf stündlicher oder täglicher Basis durch und speichern dann die Ergebnisse für den einfachen Zugriff über GraphQL.

-### Example Schema
+### Beispiel-Schema

```graphql
type Data @entity(timeseries: true) {

@@ -46,226 +46,226 @@ type Data @entity(timeseries: true) {
  price: BigDecimal!
}

-type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
+type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
-  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```

-### How to Define Timeseries and Aggregations
+### Definition von Zeitreihen und Aggregationen

-Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must:
+Zeitreihen-Entitäten werden mit `@entity(timeseries: true)` im GraphQL-Schema definiert.
Jede Zeitreihen-Entität muss:

-- have a unique ID of the int8 type
-- have a timestamp of the Timestamp type
-- include data that will be used for calculation by aggregation entities.
+- eine eindeutige ID vom Typ Int8 haben
+- einen Zeitstempel vom Typ Timestamp haben
+- Daten enthalten, die von den Aggregations-Entitäten für die Berechnung verwendet werden.

-These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities.
+Diese Zeitreihen-Entitäten können in regulären Trigger-Handlern gespeichert werden und dienen als „Rohdaten“ für die Aggregations-Entitäten.

Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last).

-Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.
+Die Aggregations-Entitäten werden automatisch auf der Grundlage der angegebenen Quelle am Ende des jeweiligen Intervalls berechnet.

-#### Available Aggregation Intervals
+#### Verfügbare Aggregationsintervalle

-- `hour`: sets the timeseries period every hour, on the hour.
-- `day`: sets the timeseries period every day, starting and ending at 00:00.
+- `hour`: setzt den Zeitraum der Zeitreihe stündlich, zur vollen Stunde.
+- `day`: legt den Zeitraum der Zeitreihe für jeden Tag fest, beginnend und endend um 00:00 Uhr.

-#### Available Aggregation Functions
+#### Verfügbare Aggregationsfunktionen

-- `sum`: Total of all values.
-- `count`: Number of values.
-- `min`: Minimum value.
-- `max`: Maximum value.
-- `first`: First value in the period.
-- `last`: Last value in the period.
+- `sum`: Summe aller Werte.
+- `count`: Anzahl der Werte.
+- `min`: Minimaler Wert.
+- `max`: Maximaler Wert.
+- "erster": Erster Wert in der Periode. +- Letzter Wert: Letzter Wert in der Periode. -#### Example Aggregations Query +#### Beispiel-Aggregationsabfrage ```graphql { - stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + stats(interval: „hour“, where: { timestamp_gt: 1704085200 }) { id - timestamp - sum + Zeitstempel + Summe } } ``` -[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. +[Lesen Sie mehr](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) über Zeitreihen und Aggregationen. ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexierungsfehler bei bereits synchronisierten Subgraphen führen standardmäßig dazu, dass der Subgraph fehlschlägt und die Synchronisierung beendet wird. Subgraphen können alternativ so konfiguriert werden, dass die Synchronisierung bei Fehlern fortgesetzt wird, indem die vom Handler, der den Fehler verursacht hat, vorgenommenen Änderungen ignoriert werden. Dies gibt den Autoren von Untergraphen Zeit, ihre Subgraphen zu korrigieren, während die Abfragen weiterhin gegen den letzten Block ausgeführt werden, obwohl die Ergebnisse aufgrund des Fehlers, der den Fehler verursacht hat, inkonsistent sein könnten. Beachten Sie, dass einige Fehler immer noch fatal sind. Um nicht fatal zu sein, muss der Fehler als deterministisch bekannt sein. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Hinweis:** Das The Graph Network unterstützt noch keine nicht-fatalen Fehler, und Entwickler sollten keine Subgraphen, die diese Funktionalität nutzen, über das Studio im Netzwerk bereitstellen.

-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+Zur Aktivierung von nicht-fatalen Fehlern muss das folgende Funktions-Flag im Manifest des Subgraphen gesetzt werden:

```yaml
-specVersion: 0.0.4
-description: Gravatar for Ethereum
-features:
+specVersion: 1.3.0
+description: Gravatar für Ethereum
+features:
  - nonFatalErrors
  ...
```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+Die Abfrage muss sich außerdem über das Argument `subgraphError` für die Abfrage von Daten mit potenziellen Inkonsistenzen entscheiden.
Es wird auch empfohlen, `_meta` abzufragen, um zu prüfen, ob der Subgraph Fehler übersprungen hat, wie in diesem Beispiel:

```graphql
foos(first: 100, subgraphError: allow) {
-  id
+  id
}

_meta {
-  hasIndexingErrors
+  hasIndexingErrors
}
```

-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+Wenn der Subgraph auf einen Fehler stößt, gibt diese Abfrage sowohl die Daten als auch einen GraphQL-Fehler mit der Meldung `"indexing_error"` zurück, wie in dieser Beispielantwort:

```graphql
-"data": {
-  "foos": [
-    {
-      "id": "0xdead"
-    }
-  ],
-  "_meta": {
-    "hasIndexingErrors": true
-  }
+"data": {
+  "foos": [
+    {
+      "id": "0xdead"
+    }
+  ],
+  "_meta": {
+    "hasIndexingErrors": true
+  }
},
"errors": [
-  {
-    "message": "indexing_error"
-  }
+  {
+    "message": "indexing_error"
+  }
]
```

-## IPFS/Arweave File Data Sources
+## IPFS/Arweave-Dateidatenquellen

-File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+Dateidatenquellen sind eine neue Subgraph-Funktionalität für den Zugriff auf Off-Chain-Daten während der Indizierung auf eine robuste, erweiterbare Weise. Dateidatenquellen unterstützen das Abrufen von Dateien aus IPFS und aus Arweave.

-> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.
+> Damit wird auch die Grundlage für die deterministische Indizierung von Off-Chain-Daten sowie für die potenzielle Einführung beliebiger HTTP-Datenquellen geschaffen.

### Überblick

-Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier.
These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found.
+Anstatt die Dateien während der Ausführung des Handlers „in line“ zu holen, werden Vorlagen eingeführt, die als neue Datenquellen für eine bestimmte Dateikennung erzeugt werden können. Diese neuen Datenquellen rufen die Dateien ab, versuchen es bei Misserfolg erneut und führen einen speziellen Handler aus, wenn die Datei gefunden wird.

-This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources.
+Dies ist vergleichbar mit den [bestehenden Datenquellen-Vorlagen](/developing/creating-a-subgraph/#data-source-templates), die zur dynamischen Erstellung neuer kettenbasierter Datenquellen verwendet werden.

-> This replaces the existing `ipfs.cat` API
+> Dies ersetzt die bestehende `ipfs.cat`-API.

-### Upgrade guide
+### Upgrade-Leitfaden

-#### Update `graph-ts` and `graph-cli`
+#### Aktualisierung von `graph-ts` und `graph-cli`

-File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1
+Dateidatenquellen erfordern graph-ts >=0.29.0 und graph-cli >=0.33.1

-#### Add a new entity type which will be updated when files are found
+#### Hinzufügen eines neuen Entitätstyps, der aktualisiert wird, wenn Dateien gefunden werden

-File data sources cannot access or update chain-based entities, but must update file specific entities.
+Dateidatenquellen können nicht auf kettenbasierte Entitäten zugreifen oder diese aktualisieren, sondern müssen dateispezifische Entitäten aktualisieren.

-This may mean splitting out fields from existing entities into separate entities, linked together.
+Dies kann bedeuten, dass Felder aus bestehenden Entitäten in separate, miteinander verknüpfte Entitäten aufgeteilt werden.

-Original combined entity:
+Ursprüngliche kombinierte Entität:

```graphql
type Token @entity {
-  id: ID!
-  tokenID: BigInt!
-  tokenURI: String!
-  externalURL: String!
-  ipfsURI: String!
-  image: String!
-  name: String!
-  description: String!
-  type: String!
-  updatedAtTimestamp: BigInt
-  owner: User!
+  id: ID!
+  tokenID: BigInt!
+  tokenURI: String!
+  externalURL: String!
+  ipfsURI: String!
+  image: String!
+  name: String!
+  description: String!
+  type: String!
+  updatedAtTimestamp: BigInt
+  owner: User!
}
```

-New, split entity:
+Neue, aufgeteilte Entität:

```graphql
type Token @entity {
-  id: ID!
-  tokenID: BigInt!
-  tokenURI: String!
-  ipfsURI: TokenMetadata
-  updatedAtTimestamp: BigInt
-  owner: String!
+  id: ID!
+  tokenID: BigInt!
+  tokenURI: String!
+  ipfsURI: TokenMetadata
+  updatedAtTimestamp: BigInt
+  owner: String!
}

type TokenMetadata @entity {
-  id: ID!
-  image: String!
-  externalURL: String!
-  name: String!
-  description: String!
+  id: ID!
+  image: String!
+  externalURL: String!
+  name: String!
+  description: String!
}
```

-If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities!
+Wenn die Beziehung zwischen der übergeordneten Entität und der resultierenden Dateidatenquellen-Entität 1:1 ist, besteht das einfachste Muster darin, die übergeordnete Entität mit der resultierenden Datei-Entität zu verknüpfen, indem die IPFS-CID als Lookup verwendet wird. Kontaktieren Sie uns auf Discord, wenn Sie Schwierigkeiten bei der Modellierung Ihrer neuen dateibasierten Entitäten haben!

-> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities.
+> Sie können [verschachtelte Filter](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) verwenden, um übergeordnete Entitäten auf der Grundlage dieser verschachtelten Entitäten zu filtern.

-#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave`
+#### Hinzufügen einer neuen Vorlagen-Datenquelle mit `kind: file/ipfs` oder `kind: file/arweave`

-This is the data source which will be spawned when a file of interest is identified.
+Dies ist die Datenquelle, die erzeugt wird, wenn eine Datei von Interesse identifiziert wird.

```yaml
-templates:
-  - name: TokenMetadata
-    kind: file/ipfs
+templates:
+  - name: TokenMetadata
+    kind: file/ipfs
    mapping:
-      apiVersion: 0.0.7
-      language: wasm/assemblyscript
-      file: ./src/mapping.ts
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mapping.ts
      handler: handleMetadata
-      entities:
-        - TokenMetadata
+      entities:
+        - TokenMetadata
      abis:
        - name: Token
-          file: ./abis/Token.json
+          file: ./abis/Token.json
```

-> Currently `abis` are required, though it is not possible to call contracts from within file data sources
+> Derzeit sind `abis` erforderlich, obwohl es nicht möglich ist, Verträge aus Dateidatenquellen heraus aufzurufen.

-The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details.
+Die Dateidatenquelle muss alle Entitätstypen, mit denen sie interagieren wird, ausdrücklich unter `entities` angeben. Siehe [Einschränkungen](#limitations) für weitere Details.

-#### Create a new handler to process files
+#### Erstellen Sie einen neuen Handler zur Verarbeitung von Dateien

-This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)).
+Dieser Handler sollte einen `Bytes`-Parameter akzeptieren, der den Inhalt der Datei darstellt, sobald diese gefunden wurde; anschließend kann er verarbeitet werden. Oft handelt es sich dabei um eine JSON-Datei, die mit `graph-ts`-Helfern verarbeitet werden kann ([Dokumentation](/subgraphs/developing/creating/graph-ts/api/#json-api)).

-The CID of the file as a readable string can be accessed via the `dataSource` as follows:
+Auf die CID der Datei als lesbare Zeichenkette kann über die `dataSource` wie folgt zugegriffen werden:

```typescript
const cid = dataSource.stringParam()
```

-Example handler:
+Beispiel-Handler:

```typescript
-import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
-import { TokenMetadata } from '../generated/schema'
+import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
+import { TokenMetadata } from '../generated/schema'

export function handleMetadata(content: Bytes): void {
-  let tokenMetadata = new TokenMetadata(dataSource.stringParam())
+  let tokenMetadata = new TokenMetadata(dataSource.stringParam())
  const value = json.fromBytes(content).toObject()
  if (value) {
-    const image = value.get('image')
+    const image = value.get('image')
    const name = value.get('name')
-    const description = value.get('description')
-    const externalURL = value.get('external_url')
+    const description = value.get('description')
+    const externalURL = value.get('external_url')
    if (name && image && description && externalURL) {
-      tokenMetadata.name = name.toString()
-      tokenMetadata.image = image.toString()
+      tokenMetadata.name = name.toString()
+      tokenMetadata.image = image.toString()
      tokenMetadata.externalURL = externalURL.toString()
-      tokenMetadata.description = description.toString()
+      tokenMetadata.description = description.toString()
    }

    tokenMetadata.save()
@@ -273,24 +273,24 @@ export function handleMetadata(content: Bytes): void {
}
```

-#### Spawn file data sources when required
+#### Dateidatenquellen bei Bedarf erzeugen

-You can now create file data sources during execution of chain-based handlers:
+Sie können jetzt Dateidatenquellen während der Ausführung kettenbasierter Handler erstellen:

-- Import the template from the auto-generated `templates`
-- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave
+- Importieren Sie die Vorlage aus den automatisch generierten `templates`
+- Rufen Sie `TemplateName.create(cid: string)` innerhalb eines Mappings auf, wobei die CID ein gültiger Inhaltsbezeichner für IPFS oder Arweave ist

-For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`).
+Für IPFS unterstützt Graph Node [v0- und v1-Inhaltsbezeichner](https://docs.ipfs.tech/concepts/content-addressing/) sowie Inhaltsbezeichner mit Verzeichnissen (z. B. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`).

-For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing).
+Für Arweave kann Graph Node ab Version 0.33.0 Dateien, die auf Arweave gespeichert sind, basierend auf ihrer [Transaktions-ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) von einem Arweave-Gateway ([Beispieldatei](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)) abrufen. Arweave unterstützt Transaktionen, die über Irys (früher Bundlr) hochgeladen werden, und Graph Node kann auch Dateien auf der Grundlage von [Irys-Manifesten](https://docs.irys.xyz/overview/gateways#indexing) abrufen.

-Example:
+Beispiel:

```typescript
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+// Dieser Beispielcode ist für einen Crypto-Coven-Subgraph. Der obige IPFS-Hash ist ein Verzeichnis mit Token-Metadaten für alle Crypto-Coven-NFTs.

export function handleTransfer(event: TransferEvent): void {
  let token = Token.load(event.params.tokenId.toString())
@@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void {
    token.tokenURI = '/' + event.params.tokenId.toString() + '.json'
    const tokenIpfsHash = ipfshash + token.tokenURI
-    //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json"
+    // Dies erstellt einen Pfad zu den Metadaten für ein einzelnes Crypto-Coven-NFT. Es verknüpft das Verzeichnis mit "/" + Dateiname + ".json".

    token.ipfsURI = tokenIpfsHash

@@ -313,251 +313,251 @@ export function handleTransfer(event: TransferEvent): void {
  }
}
```

-This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed.
+Dadurch wird eine neue Dateidatenquelle erstellt, die den konfigurierten IPFS- oder Arweave-Endpunkt des Graph Node abfragt und es erneut versucht, wenn die Datei nicht gefunden wird. Wenn die Datei gefunden wird, wird der Dateidatenquellen-Handler ausgeführt.

-This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
+In diesem Beispiel wird die CID als Lookup zwischen der übergeordneten `Token`-Entität und der daraus resultierenden `TokenMetadata`-Entität verwendet.

-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Früher hätte ein Subgraph-Entwickler an dieser Stelle `ipfs.cat(CID)` aufgerufen, um die Datei zu holen

-Congratulations, you are using file data sources!
+Herzlichen Glückwunsch, Sie verwenden Dateidatenquellen!

-#### Deploying your subgraphs
+#### Bereitstellen Ihrer Subgraphen

-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+Sie können Ihren Subgraphen jetzt mit `build` und `deploy` auf jedem Graph Node >=v0.30.0-rc.0 erstellen und bereitstellen.

-#### Limitations
+#### Einschränkungen

-File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
+Dateidatenquellen-Handler und -Entitäten sind von anderen Subgraph-Entitäten isoliert, wodurch sichergestellt wird, dass sie bei ihrer Ausführung deterministisch sind und keine Kontamination kettenbasierter Datenquellen erfolgt. Im Einzelnen:

-- Entities created by File Data Sources are immutable, and cannot be updated
-- File Data Source handlers cannot access entities from other file data sources
-- Entities associated with File Data Sources cannot be accessed by chain-based handlers
+- Von Dateidatenquellen erstellte Entitäten sind unveränderlich und können nicht aktualisiert werden.
+- Dateidatenquellen-Handler können nicht auf Entitäten aus anderen Dateidatenquellen zugreifen +- Auf Entitäten, die mit Dateidatenquellen verknüpft sind, kann von kettenbasierten Handlern nicht zugegriffen werden -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> Während diese Einschränkung für die meisten Anwendungsfälle nicht problematisch sein sollte, kann sie für einige Fälle zu mehr Komplexität führen. Bitte kontaktieren Sie uns über Discord, wenn Sie Probleme bei der Modellierung Ihrer dateibasierten Daten in einem Subgraphen haben! -Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. +Außerdem ist es nicht möglich, Datenquellen aus einer Dateidatenquelle zu erstellen, sei es eine Onchain-Datenquelle oder eine andere Dateidatenquelle. Diese Einschränkung kann in Zukunft aufgehoben werden. -#### Best practices +#### Bewährte Praktiken -If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID. +Wenn Sie NFT-Metadaten mit entsprechenden Token verknüpfen, verwenden Sie den IPFS-Hash der Metadaten, um eine Metadaten-Entität von der Token-Entität zu referenzieren. Speichern Sie die Metadaten-Entität unter Verwendung des IPFS-Hashs als ID. -You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. 
+Sie können beim Erstellen von Dateidatenquellen einen [DataSource-Kontext](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) verwenden, um zusätzliche Informationen zu übergeben, die dem Dateidatenquellen-Handler zur Verfügung stehen.

-If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.
+Wenn Sie Entitäten haben, die mehrfach aktualisiert werden, erstellen Sie eindeutige dateibasierte Entitäten unter Verwendung des IPFS-Hashs und der Entitäts-ID, und verweisen Sie auf sie mit einem abgeleiteten Feld in der kettenbasierten Entität.

-> We are working to improve the above recommendation, so queries only return the "most recent" version
+> Wir arbeiten daran, die obige Empfehlung zu verbessern, so dass Abfragen nur die „aktuellste“ Version zurückgeben

-#### Known issues
+#### Bekannte Probleme

-File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI.
+Dateidatenquellen erfordern derzeit ABIs, auch wenn die ABIs nicht verwendet werden ([Issue](https://github.com/graphprotocol/graph-cli/issues/961)). Die Umgehung besteht darin, ein beliebiges ABI hinzuzufügen.

-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file.
+Handler für Dateidatenquellen dürfen nicht in Dateien stehen, die `eth_call`-Vertragsbindungen importieren; andernfalls schlagen sie mit "unknown import: `ethereum::ethereum.call` has not been defined" fehl ([Issue](https://github.com/graphprotocol/graph-node/issues/4309)). Die Umgehung besteht darin, Dateidatenquellen-Handler in einer eigenen Datei zu erstellen.
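The best practice above — keying repeatedly refreshed file-based entities by IPFS hash plus entity ID — can be sketched as follows. The helper name and the `hash-id` scheme are illustrative assumptions for this sketch, not an API of `graph-ts`:

```typescript
// Hypothetical helper: derive a unique, immutable ID for a file-based entity
// from the metadata's IPFS hash and the parent entity's ID, so that each
// metadata refresh (a new hash) yields a distinct file-based entity.
function fileEntityId(ipfsHash: string, entityId: string): string {
  return `${ipfsHash}-${entityId}`
}

// Token #1 pointing at the metadata directory hash used in the example above:
const id = fileEntityId('QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm', '1')
```

The chain-based entity would then reference these file-based entities via a derived field, as the recommendation describes.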
#### Beispiele

-[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
+[Crypto-Coven-Subgraph-Migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)

-#### References
+#### Referenzen

-[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)
+[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)

-## Indexed Argument Filters / Topic Filters
+## Indizierte Argumentfilter / Themenfilter

-> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
+> **Benötigt**: [SpecVersion](#specversion-releases) >= `1.2.0`

-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Themenfilter, auch bekannt als Filter für indizierte Argumente, sind eine leistungsstarke Funktion in Subgraphen, die es Benutzern ermöglicht, Blockchain-Ereignisse auf der Grundlage der Werte ihrer indizierten Argumente präzise zu filtern.

-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- Diese Filter helfen dabei, bestimmte Ereignisse von Interesse aus dem riesigen Strom von Ereignissen auf der Blockchain zu isolieren, so dass Subgraphen effizienter arbeiten können, indem sie sich nur auf relevante Daten konzentrieren.

-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- Dies ist nützlich, um persönliche Subgraphen zu erstellen, die bestimmte Adressen und ihre Interaktionen mit verschiedenen Smart Contracts auf der Blockchain verfolgen.
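The matching behaviour these topic filters provide can be sketched as a small function: values within one topic are alternatives (OR), while all configured topics must match simultaneously (AND). This is an illustrative model under those assumptions, not graph-node's implementation:

```typescript
// A filter per topic position; undefined means "no filter on this topic".
type TopicFilter = string[] | undefined

// An event matches when, for every filtered topic position, its value is one
// of the allowed values (OR within a topic, AND across topics).
function matchesTopicFilters(eventTopics: string[], filters: TopicFilter[]): boolean {
  return filters.every(
    (allowed, i) => allowed === undefined || allowed.includes(eventTopics[i]),
  )
}

// Filters analogous to a manifest with topic1: two values, topic2: one value:
const filters: TopicFilter[] = [['0xValue1', '0xValue2'], ['0xAddress1'], undefined]

const hit = matchesTopicFilters(['0xValue2', '0xAddress1', '0xAnything'], filters)
const miss = matchesTopicFilters(['0xValue3', '0xAddress1', '0xAnything'], filters)
```

Here `hit` is true (each filtered topic matches one allowed value) and `miss` is false (the first topic's value is not in its allowed list).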
-### How Topic Filters Work +### Wie Themen-Filter funktionieren -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +Wenn ein Smart Contract ein Ereignis auslöst, können alle Argumente, die als indiziert markiert sind, als Filter im Manifest eines Subgraphen verwendet werden. Dies ermöglicht es dem Subgraph, selektiv auf Ereignisse zu warten, die diesen indizierten Argumenten entsprechen. -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- Das erste indizierte Argument des Ereignisses entspricht `topic1`, das zweite `topic2` und so weiter bis `topic3`, da die Ethereum Virtual Machine (EVM) bis zu drei indizierte Argumente pro Ereignis erlaubt. ```solidity -// SPDX-License-Identifier: MIT +// SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Token { - // Event declaration with indexed parameters for addresses - event Transfer(address indexed from, address indexed to, uint256 value); - - // Function to simulate transferring tokens - function transfer(address to, uint256 value) public { - // Emitting the Transfer event with from, to, and value - emit Transfer(msg.sender, to, value); - } + // Ereignisdeklaration mit indizierten Parametern für Adressen + event Transfer(address indexed from, address indexed to, uint256 value); + + // Funktion zur Simulation der Übertragung von Token + function transfer(address to, uint256 value) public { + // Senden des Transfer-Ereignisses mit from, to und value + emit Transfer(msg.sender, to, value); + } } ``` -In this example: +In diesem Beispiel: -- The `Transfer` event is used to log transactions of tokens between addresses.
-- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. -- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. +- Das `Transfer`-Ereignis wird verwendet, um Token-Transaktionen zwischen Adressen zu protokollieren. +- Die Parameter `from` und `to` sind indiziert, so dass Ereignis-Listener Transfers mit bestimmten Adressen filtern und überwachen können. +- Die Funktion `transfer` ist eine einfache Darstellung einer Token-Transfer-Aktion, die bei jedem Aufruf das `Transfer`-Ereignis auslöst. -#### Configuration in Subgraphs +#### Konfiguration in Subgraphen -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Themenfilter werden direkt in der Event-Handler-Konfiguration im Subgraph-Manifest definiert. Hier sehen Sie, wie sie konfiguriert werden: ```yaml eventHandlers: - - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) handler: handleSomeEvent topic1: ['0xValue1', '0xValue2'] topic2: ['0xAddress1', '0xAddress2'] topic3: ['0xValue3'] ``` -In this setup: +In diesem Setup: -- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. -- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. +- `topic1` entspricht dem ersten indizierten Argument des Ereignisses, `topic2` dem zweiten und `topic3` dem dritten. +- Jedes Thema kann einen oder mehrere Werte haben, und ein Ereignis wird nur verarbeitet, wenn es einem der Werte in jedem angegebenen Thema entspricht. -#### Filter Logic +#### Filter-Logik -- Within a Single Topic: The logic functions as an OR condition.
The event will be processed if it matches any one of the listed values in a given topic. -- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. +- Innerhalb eines einzelnen Themas: Die Logik funktioniert wie eine ODER-Bedingung. Das Ereignis wird verarbeitet, wenn es mit einem der aufgeführten Werte in einem bestimmten Thema übereinstimmt. +- Zwischen verschiedenen Themen: Die Logik funktioniert wie eine UND-Bedingung. Ein Ereignis muss alle angegebenen Bedingungen über verschiedene Themen hinweg erfüllen, um den zugehörigen Handler auszulösen. -#### Example 1: Tracking Direct Transfers from Address A to Address B +#### Beispiel 1: Verfolgung von direkten Transfers von Adresse A nach Adresse B ```yaml eventHandlers: - - event: Transfer(indexed address,indexed address,uint256) + - event: Transfer(indexed address,indexed address,uint256) handler: handleDirectedTransfer - topic1: ['0xAddressA'] # Sender Address - topic2: ['0xAddressB'] # Receiver Address + topic1: ['0xAddressA'] # Absenderadresse + topic2: ['0xAddressB'] # Empfängeradresse ``` -In this configuration: +In dieser Konfiguration: -- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- `topic1` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressA` der Absender ist. +- `topic2` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressB` der Empfänger ist. +- Der Subgraph indiziert nur Transaktionen, die direkt von `0xAddressA` nach `0xAddressB` erfolgen.
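Die Filterlogik aus Beispiel 1 (ODER innerhalb eines Themas, UND zwischen den Themen) lässt sich unabhängig von graph-node als kleine Skizze in einfachem TypeScript nachbilden; Funktions- und Typnamen sind frei gewählt:

```typescript
// Skizze der Topic-Filter-Logik: ein Ereignis passiert den Filter nur,
// wenn es in jedem angegebenen Thema (UND) mindestens einem der
// aufgelisteten Werte (ODER) entspricht.
type TopicFilter = { [topic: string]: string[] }

function matchesTopicFilter(eventTopics: { [topic: string]: string }, filter: TopicFilter): boolean {
  for (const [topic, allowed] of Object.entries(filter)) {
    const value = eventTopics[topic]
    // ODER innerhalb eines Themas: ein Treffer in der Werteliste genügt
    if (value === undefined || !allowed.includes(value)) {
      // UND zwischen den Themen: jedes Thema muss passen
      return false
    }
  }
  return true
}

// Beispiel 1: nur Transfers von 0xAddressA nach 0xAddressB
const filter: TopicFilter = { topic1: ['0xAddressA'], topic2: ['0xAddressB'] }
const hit = matchesTopicFilter({ topic1: '0xAddressA', topic2: '0xAddressB' }, filter)
const miss = matchesTopicFilter({ topic1: '0xAddressB', topic2: '0xAddressA' }, filter)
```

Die Skizze zeigt nur das Prinzip; die eigentliche Filterung übernimmt graph-node anhand der Themen im Manifest.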
-#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses +#### Beispiel 2: Verfolgung von Transaktionen in beiden Richtungen zwischen zwei oder mehr Adressen ```yaml eventHandlers: - - event: Transfer(indexed address,indexed address,uint256) - handler: handleTransferToOrFrom - topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address - topic2: ['0xAddressB', '0xAddressC'] # Receiver Address + - event: Transfer(indexed address,indexed address,uint256) + handler: handleTransferToOrFrom + topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Absenderadresse + topic2: ['0xAddressB', '0xAddressC'] # Empfängeradresse ``` -In this configuration: +In dieser Konfiguration: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- `topic1` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressA`, `0xAddressB` oder `0xAddressC` der Absender ist. +- `topic2` ist so konfiguriert, dass `Transfer`-Ereignisse gefiltert werden, bei denen `0xAddressB` oder `0xAddressC` der Empfänger ist. +- Der Subgraph indiziert Transaktionen, die in beiden Richtungen zwischen mehreren Adressen stattfinden, und ermöglicht so eine umfassende Überwachung von Interaktionen, die alle Adressen betreffen. -## Declared eth_call +## Deklarierte eth_call -> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. +> Hinweis: Dies ist eine experimentelle Funktion, die derzeit noch nicht in einer stabilen Graph Node-Version verfügbar ist.
Sie können sie nur in Subgraph Studio oder Ihrem selbst gehosteten Knoten verwenden. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Deklarative `eth_calls` sind eine wertvolle Subgraph-Funktion, die es erlaubt, `eth_calls` im Voraus auszuführen, so dass `graph-node` sie parallel ausführen kann. -This feature does the following: +Diese Funktion bewirkt Folgendes: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. -- Allows faster data fetching, resulting in quicker query responses and a better user experience. -- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. +- Erhebliche Verbesserung der Leistung beim Abrufen von Daten aus der Ethereum-Blockchain durch Reduzierung der Gesamtzeit für mehrere Aufrufe und Optimierung der Gesamteffizienz des Subgraphen. +- Ermöglicht einen schnelleren Datenabruf, was zu schnelleren Abfrageantworten und einer besseren Benutzerfreundlichkeit führt. +- Reduziert die Wartezeiten für Anwendungen, die Daten aus mehreren Ethereum-Aufrufen aggregieren müssen, und macht den Datenabrufprozess effizienter. -### Key Concepts +### Schlüsselkonzepte -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. -- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. -- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). +- Deklarative `eth_calls`: Ethereum-Aufrufe, die so definiert sind, dass sie parallel und nicht sequentiell ausgeführt werden.
+- Parallele Ausführung: Anstatt auf das Ende eines Aufrufs zu warten, bevor der nächste gestartet wird, können mehrere Aufrufe gleichzeitig gestartet werden. +- Zeiteffizienz: Die Gesamtzeit, die für alle Aufrufe benötigt wird, ändert sich von der Summe der einzelnen Aufrufzeiten (sequentiell) zur Zeit des längsten Aufrufs (parallel). -#### Scenario without Declarative `eth_calls` +#### Szenario ohne deklarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Stellen Sie sich vor, Sie haben einen Subgraph, der drei Ethereum-Aufrufe tätigen muss, um Daten über die Transaktionen, den Kontostand und den Token-Besitz eines Nutzers abzurufen. -Traditionally, these calls might be made sequentially: +Traditionell würden diese Aufrufe nacheinander erfolgen: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Aufruf 1 (Transaktionen): Dauert 3 Sekunden +2. Aufruf 2 (Kontostand): Dauert 2 Sekunden +3. Aufruf 3 (Token-Besitz): Dauert 4 Sekunden -Total time taken = 3 + 2 + 4 = 9 seconds +Gesamte benötigte Zeit = 3 + 2 + 4 = 9 Sekunden -#### Scenario with Declarative `eth_calls` +#### Szenario mit deklarativen `eth_calls` -With this feature, you can declare these calls to be executed in parallel: +Mit dieser Funktion können Sie deklarieren, dass diese Aufrufe parallel ausgeführt werden sollen: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Aufruf 1 (Transaktionen): Dauert 3 Sekunden +2. Aufruf 2 (Kontostand): Dauert 2 Sekunden +3. Aufruf 3 (Token-Besitz): Dauert 4 Sekunden -Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. +Da diese Aufrufe parallel ausgeführt werden, entspricht die Gesamtzeit der Zeit, die der längste Aufruf benötigt.
-Total time taken = max (3, 2, 4) = 4 seconds +Insgesamt benötigte Zeit = max (3, 2, 4) = 4 Sekunden -#### How it Works +#### So funktioniert's -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Deklarative Definition: Im Subgraph-Manifest deklarieren Sie die Ethereum-Aufrufe in einer Weise, die angibt, dass sie parallel ausgeführt werden können. +2. Parallele Ausführungsmaschine: Die Ausführungsmaschine des Graph Node erkennt diese Deklarationen und führt die Aufrufe gleichzeitig aus. +3. Ergebnis-Aggregation: Sobald alle Aufrufe abgeschlossen sind, werden die Ergebnisse aggregiert und vom Subgraphen für die weitere Verarbeitung verwendet. -#### Example Configuration in Subgraph Manifest +#### Beispielkonfiguration im Subgraph Manifest -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +Deklarierte `eth_calls` können auf die `event.address` des zugrunde liegenden Ereignisses sowie auf alle `event.params` zugreifen. -`Subgraph.yaml` using `event.address`: +`Subgraph.yaml` unter Verwendung von `event.address`: ```yaml eventHandlers: -event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) +event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24) handler: handleSwap calls: global0X128: Pool[event.address].feeGrowthGlobal0X128() global1X128: Pool[event.address].feeGrowthGlobal1X128() ``` -Details for the example above: +Details für das obige Beispiel: -- `global0X128` is the declared `eth_call`. -The text (`global0X128`) is the label for this `eth_call` which is used when logging errors.
-- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` ist der deklarierte `eth_call`. +- Der Text (`global0X128`) ist die Bezeichnung für diesen `eth_call`, die bei der Fehlerprotokollierung verwendet wird. +- Der Text (`Pool[event.address].feeGrowthGlobal0X128()`) ist der eigentliche `eth_call`, der in Form von `Contract[address].function(arguments)` ausgeführt wird. +- `address` und `arguments` können durch Variablen ersetzt werden, die bei der Ausführung des Handlers verfügbar sind. -`Subgraph.yaml` using `event.params` +`Subgraph.yaml` unter Verwendung von `event.params` ```yaml -calls: - - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() +calls: + - ERC20DecimalsToken0: ERC20[event.params.token0].decimals() ``` -### Grafting onto Existing Subgraphs +### Grafting auf bestehende Subgraphen -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). +> **Hinweis:** Es wird nicht empfohlen, beim ersten Upgrade auf The Graph Network das Grafting zu verwenden. Erfahren Sie mehr [hier](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_.
Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +Wenn ein Subgraph zum ersten Mal eingesetzt wird, beginnt er mit der Indizierung von Ereignissen am Entstehungsblock der entsprechenden Kette (oder am `startBlock`, der mit jeder Datenquelle definiert ist). Unter bestimmten Umständen ist es von Vorteil, die Daten eines bestehenden Subgraphen wiederzuverwenden und die Indizierung an einem viel späteren Block zu beginnen. Diese Art der Indizierung wird _Grafting_ genannt. Grafting ist z.B. während der Entwicklung nützlich, um einfache Fehler in den Mappings schnell zu beheben oder um einen bestehenden Subgraph nach einem Fehler vorübergehend wieder zum Laufen zu bringen. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +Ein Subgraph wird auf einen Basis-Subgraph gepfropft, wenn das Subgraph-Manifest in `subgraph.yaml` einen `graft`-Block auf der obersten Ebene enthält: ```yaml -description: ... +description: ... graft: - base: Qm... # Subgraph ID of base subgraph - block: 7345624 # Block number + base: Qm... # Subgraph-ID des Basis-Subgraphen + block: 7345624 # Blocknummer ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+Wenn ein Subgraph, dessen Manifest einen `graft`-Block enthält, bereitgestellt wird, kopiert Graph Node die Daten des `base`-Subgraphen bis einschließlich des angegebenen `block` und fährt dann mit der Indizierung des neuen Subgraphen ab diesem Block fort. Der Basis-Subgraph muss auf der Ziel-Graph-Node-Instanz existieren und mindestens bis zum angegebenen Block indexiert sein. Aufgrund dieser Einschränkung sollte Grafting nur während der Entwicklung oder in Notfällen verwendet werden, um die Erstellung eines äquivalenten, nicht gepfropften Subgraphen zu beschleunigen. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Da beim Grafting die Basisdaten kopiert und nicht indiziert werden, ist es viel schneller, den Subgraphen auf den gewünschten Block zu bringen, als wenn er von Grund auf neu indiziert wird, obwohl die anfängliche Datenkopie bei sehr großen Subgraphen immer noch mehrere Stunden dauern kann. Während der Initialisierung des gepfropften Subgraphen protokolliert der Graph Node Informationen über die bereits kopierten Entitätstypen. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +Der aufgepfropfte Subgraf kann ein GraphQL-Schema verwenden, das nicht identisch mit dem des Basis-Subgrafen ist, sondern lediglich mit diesem kompatibel ist.
Es muss ein eigenständig gültiges Subgrafen-Schema sein, darf aber auf folgende Weise vom Schema des Basis-Subgrafen abweichen: -- It adds or removes entity types -- It removes attributes from entity types -- It adds nullable attributes to entity types -- It turns non-nullable attributes into nullable attributes -- It adds values to enums -- It adds or removes interfaces -- It changes for which entity types an interface is implemented +- Es fügt Entitätstypen hinzu oder entfernt sie +- Es entfernt Attribute von Entitätstypen +- Es fügt Entitätstypen nullfähige Attribute hinzu +- Es wandelt Nicht-Nullable-Attribute in Nullable-Attribute um +- Es fügt Aufzählungen Werte hinzu +- Es fügt Interfaces hinzu oder entfernt sie +- Es ändert, für welche Entitätstypen ein Interface implementiert wird -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` muss unter `features` im Subgraph-Manifest deklariert werden. diff --git a/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx index 4354181a33df..e0b1bfea4e2d 100644 --- a/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -39,17 +39,17 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. +Der zweite Handler versucht, den vorhandenen `Gravatar` aus dem Graph-Node-Speicher zu laden. Wenn er noch nicht vorhanden ist, wird er bei Bedarf erstellt. Die Entität wird dann aktualisiert, um den neuen Ereignisparametern zu entsprechen, bevor sie mit `gravatar.save()` zurück in den Speicher geschrieben wird. ### Recommended IDs for Creating New Entities -It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. +Es wird dringend empfohlen, `Bytes` als Typ für `id`-Felder zu verwenden und `String` nur für Attribute zu verwenden, die wirklich menschenlesbaren Text enthalten, wie den Namen eines Tokens. Im Folgenden sind einige empfohlene `id`-Werte aufgeführt, die bei der Erstellung neuer Entitäten zu berücksichtigen sind.
- `transfer.id = event.transaction.hash` - `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` -- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like +- Bei Entitäten, die aggregierte Daten speichern, z. B. tägliche Handelsvolumina, enthält die `id` in der Regel die Tagesnummer. Hier ist die Verwendung von `Bytes` als `id` von Vorteil. Die Bestimmung der `id` würde wie folgt aussehen ```typescript let dayID = event.block.timestamp.toI32() / 86400 @@ -66,13 +66,13 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity. -If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value. +Wird für ein Feld in der neuen Entität mit der gleichen ID absichtlich ein Nullwert gesetzt, wird die bestehende Entität mit dem Nullwert aktualisiert. If no value is set for a field in the new entity with the same ID, the field will result in null as well. ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
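Die weiter oben empfohlenen ID-Muster (Transaktions-Hash plus Log-Index, Tagesnummer für aggregierte Entitäten) lassen sich in einfachem TypeScript skizzieren; in graph-ts käme stattdessen `event.transaction.hash.concatI32(event.logIndex.toI32())` mit `Bytes` zum Einsatz, die folgenden Funktionsnamen sind frei gewählt:

```typescript
// Tages-ID für aggregierte Entitäten: Unix-Zeitstempel (Sekunden) / 86400
function dayId(timestampSeconds: number): number {
  return Math.floor(timestampSeconds / 86400)
}

// Eindeutige Ereignis-ID: Transaktions-Hash plus Log-Index (hexadezimal),
// eine hypothetische Nachbildung des concatI32-Musters als String
function eventId(txHash: string, logIndex: number): string {
  return txHash + '-' + logIndex.toString(16)
}

const day = dayId(1700000000)
const id = eventId('0xabc123', 7)
```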
diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..9dace9f39aaf 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,101 +1,107 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Geringfügige Änderungen + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Danke [@isum](https://github.com/isum)! - feat: Unterstützung für YAML-Parsing in Mappings hinzufügen + ## 0.37.0 -### Minor Changes +### Geringfügige Änderungen - [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) - Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + Danke [@YaroShkvorets](https://github.com/YaroShkvorets)! - Alle Abhängigkeiten aktualisieren ## 0.36.0 -### Minor Changes +### Geringfügige Änderungen - [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and - associated types. + Danke [@incrypto32](https://github.com/incrypto32)! - Hinzufügen von Unterstützung für Subgraph-Datenquellen und + zugehörige Typen. ## 0.35.1 -### Patch Changes +### Patch-Änderungen - [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) - Thanks [@incrypto32](https://github.com/incrypto32)!
- Update return type for ethereum.hasCode + Danke [@incrypto32](https://github.com/incrypto32)! - Rückgabetyp für ethereum.hasCode aktualisieren ## 0.35.0 -### Minor Changes +### Geringfügige Änderungen - [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + Danke [@incrypto32](https://github.com/incrypto32)! - Unterstützung für eth.hasCode-Methode hinzufügen ## 0.34.0 -### Minor Changes +### Geringfügige Änderungen - [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL - `Timestamp` scalar as `i64` (AssemblyScript) + Danke [@dotansimha](https://github.com/dotansimha)! - Unterstützung für den GraphQL-Skalar + `Timestamp` als `i64` (AssemblyScript) hinzugefügt ## 0.33.0 -### Minor Changes +### Geringfügige Änderungen - [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) - Thanks [@incrypto32](https://github.com/incrypto32)! - Added getBalance call to ethereum API + Danke [@incrypto32](https://github.com/incrypto32)! - getBalance-Aufruf zur Ethereum-API hinzugefügt ## 0.32.0 -### Minor Changes +### Geringfügige Änderungen - [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) - Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + Danke [@xJonathanLEI](https://github.com/xJonathanLEI)!
- Starknet-Datentypen hinzufügen ## 0.31.0 -### Minor Changes +### Geringfügige Änderungen - [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) - Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + Danke [@incrypto32](https://github.com/incrypto32)! - Host-Funktion `loadRelated` exportieren - [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` - scalar as `i64` (AssemblyScript) + Danke [@dotansimha](https://github.com/dotansimha)! - Unterstützung für den GraphQL-Skalar `Int8` + als `i64` (AssemblyScript) hinzugefügt ## 0.30.0 -### Minor Changes +### Geringfügige Änderungen - [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) - Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 - Address + Danke [@saihaj](https://github.com/saihaj)! - Einführung eines neuen Ethereum-Dienstprogramms, um eine + CREATE2-Adresse zu erhalten - [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8) - Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function + Danke [@saihaj](https://github.com/saihaj)!
- die Host-Funktion `get_in_block` verfügbar machen ## 0.29.3 -### Patch Changes +### Patch-Änderungen - [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93) - Thanks [@saihaj](https://github.com/saihaj)! - fix publihsed contents + Danke [@saihaj](https://github.com/saihaj)! - Publizierte Inhalte korrigieren ## 0.29.2 -### Patch Changes +### Patch-Änderungen - [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c) - Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages + Danke [@saihaj](https://github.com/saihaj)! - Readme mit Paketen veröffentlichen diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..bf79b8c8eb78 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/README.md @@ -1,30 +1,30 @@ -# The Graph TypeScript Library (graph-ts) +# The Graph-TypeScript-Bibliothek (graph-ts) [![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) [![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) -TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to +TypeScript/AssemblyScript-Bibliothek zum Schreiben von Subgraph-Mappings für den Einsatz auf [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Verwendung -For a detailed guide on how to create a subgraph, please see the +Eine detaillierte Anleitung zur Erstellung eines Subgraphen finden Sie in den [Graph CLI docs](https://github.com/graphprotocol/graph-cli).
-One step of creating the subgraph is writing mappings that will process blockchain events and will -write entities into the store. These mappings are written in TypeScript/AssemblyScript. +Ein Schritt bei der Erstellung des Subgraphen ist das Schreiben von Mappings, die Blockchain-Ereignisse verarbeiten und +Entitäten in den Speicher schreiben. Diese Mappings werden in TypeScript/AssemblyScript geschrieben. -The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart -contracts, data on IPFS, cryptographic functions and more. To use it, all you have to do is add a -dependency on it: +Die Bibliothek `graph-ts` bietet APIs für den Zugriff auf den Graph Node-Speicher, Blockchain-Daten, Smart +Contracts, Daten auf IPFS, kryptographische Funktionen und mehr. Um sie zu verwenden, müssen Sie lediglich eine +Abhängigkeit von ihr hinzufügen: ```sh npm install --dev @graphprotocol/graph-ts # NPM yarn add --dev @graphprotocol/graph-ts # Yarn ``` -After that, you can import the `store` API and other features from this library in your mappings. A -few examples: +Danach können Sie die `store`-API und andere Funktionen aus dieser Bibliothek in Ihre Mappings importieren. Hier +einige Beispiele: ```typescript import { crypto, store } from '@graphprotocol/graph-ts' @@ -50,19 +50,19 @@ function handleNameRegistered(event: NameRegistered) { } ``` -## Helper Functions for AssemblyScript +## Hilfsfunktionen für AssemblyScript -Refer to the `helper-functions.ts` file in +Siehe die Datei `helper-functions.ts` in [this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts) -repository for a few common functions that help build on top of the AssemblyScript library, such as -byte array concatenation, among others. +Repository für einige allgemeine Funktionen, die helfen, auf der AssemblyScript-Bibliothek aufzubauen, wie +unter anderem die Byte-Array-Verkettung.
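As a rough illustration of what such a byte-array concatenation helper does, here is a plain-TypeScript sketch; `concatBytes` is an illustrative name, not the actual `helper-functions.ts` API, and the real helpers are written in AssemblyScript:

```typescript
// Plain-TypeScript sketch of a byte-array concatenation helper of the kind
// found in helper-functions.ts; names are illustrative, not the real API.
function concatBytes(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(a.length + b.length)
  out.set(a, 0) // copy the first array into the front of the result
  out.set(b, a.length) // append the second array right after it
  return out
}

const joined = concatBytes(new Uint8Array([0xde, 0xad]), new Uint8Array([0xbe, 0xef]))
console.log(Array.from(joined)) // [222, 173, 190, 239]
```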
## API -Documentation on the API can be found -[here](https://thegraph.com/docs/en/developer/assemblyscript-api/). +Die Dokumentation zur API finden Sie +[hier](https://thegraph.com/docs/en/developer/assemblyscript-api/). -For examples of `graph-ts` in use take a look at one of the following subgraphs: +Beispiele für die Verwendung von `graph-ts` finden Sie in einem der folgenden Subgraphen: - https://github.com/graphprotocol/ens-subgraph - https://github.com/graphprotocol/decentraland-subgraph @@ -71,15 +71,15 @@ For examples of `graph-ts` in use take a look at one of the following subgraphs: - https://github.com/graphprotocol/aragon-subgraph - https://github.com/graphprotocol/dharma-subgraph -## License +## Lizenz -Copyright © 2018 Graph Protocol, Inc. and contributors. +Copyright © 2018 Graph Protocol, Inc. und Mitwirkende. -The Graph TypeScript library is dual-licensed under the -[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the -[Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). +The Graph TypeScript-Bibliothek ist doppelt lizenziert unter der +[MIT-Lizenz](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) und der +[Apache-Lizenz, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE). -Unless required by applicable law or agreed to in writing, software distributed under the License is -distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -implied. See the License for the specific language governing permissions and limitations under the -License. +Sofern nicht durch geltendes Recht vorgeschrieben oder schriftlich vereinbart, wird die unter dieser Lizenz vertriebene Software +auf einer „AS IS“-Basis verteilt, OHNE GARANTIEN ODER BEDINGUNGEN JEGLICHER ART, weder ausdrücklich noch +stillschweigend. 
In der Lizenz finden Sie die spezifischen Bestimmungen zu den Rechten und Beschränkungen unter der +Lizenz. diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json index a6ca184af501..60d143e1f518 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Einführung", - "api": "API Reference", - "common-issues": "Common Issues" + "api": "API-Referenz", + "common-issues": "Häufige Probleme" } diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx index 6106b8cdf0dc..f6df454bbca8 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,49 +2,49 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Hinweis: Wenn Sie einen Subgraph vor der Version `graph-cli`/`graph-ts` `0.22.0` erstellt haben, dann verwenden Sie eine ältere Version von AssemblyScript. Es wird empfohlen, den [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/) zu lesen. -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Erfahren Sie, welche eingebauten APIs beim Schreiben von Subgraph-Mappings verwendet werden können.
Es gibt zwei Arten von APIs, die standardmäßig verfügbar sind: -- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Die [Graph-TypeScript-Bibliothek](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code, der von `graph codegen` aus Subgraph-Dateien erzeugt wird -You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). +Sie können auch andere Bibliotheken als Abhängigkeiten hinzufügen, solange sie mit [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) kompatibel sind. -Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). +Da die Mappings in AssemblyScript geschrieben werden, ist es nützlich, die Sprach- und Standardbibliotheksfunktionen aus dem [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) zu überprüfen. -## API Reference +## API-Referenz -The `@graphprotocol/graph-ts` library provides the following APIs: +Die Bibliothek `@graphprotocol/graph-ts` bietet die folgenden APIs: -- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. -- A `store` API to load and save entities from and to the Graph Node store. -- A `log` API to log messages to the Graph Node output and Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. -- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. +- Eine `ethereum`-API für die Arbeit mit Ethereum-Smart Contracts, Ereignissen, Blöcken, Transaktionen und Ethereum-Werten.
+- Eine `store`-API zum Laden und Speichern von Entitäten aus und in den Graph Node-Speicher. +- Eine `log`-API zur Protokollierung von Meldungen an die Graph Node-Ausgabe und den Graph Explorer. +- Eine `ipfs`-API zum Laden von Dateien aus dem IPFS. +- Eine `json`-API zum Parsen von JSON-Daten. +- Eine `crypto`-API zur Verwendung kryptographischer Funktionen. +- Low-Level-Primitive zur Übersetzung zwischen verschiedenen Typsystemen wie Ethereum, JSON, GraphQL und AssemblyScript. ### Versionen -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +Die `apiVersion` im Subgraph-Manifest gibt die Mapping-API-Version an, die von Graph Node für einen bestimmten Subgraph ausgeführt wird. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Hinweise zur Version | +| :-----: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Fügt neue Host-Funktionen hinzu [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Fügt eine Validierung für das Vorhandensein von Feldern im Schema beim Speichern einer Entität hinzu. | +| 0.0.7 | Klassen `TransactionReceipt` und `Log` zu den Ethereum-Typen hinzugefügt<br />Feld `Receipt` zum Ethereum Event Objekt hinzugefügt | +| 0.0.6 | Feld `nonce` zum Ethereum Transaction Objekt hinzugefügt<br />`baseFeePerGas` zum Ethereum Block Objekt hinzugefügt | +| 0.0.5 | AssemblyScript wurde auf Version 0.19.10 aktualisiert (dies beinhaltet einige Änderungen, siehe [`Migrationsanleitung`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` umbenannt in `ethereum.transaction.gasLimit` | +| 0.0.4 | Feld `functionSignature` zum Ethereum SmartContractCall Objekt hinzugefügt | +| 0.0.3 | Feld `from` zum Ethereum Call Objekt hinzugefügt<br />`ethereum.call.address` umbenannt in `ethereum.call.to` | +| 0.0.2 | Feld `input` zum Ethereum-Transaktionsobjekt hinzugefügt | -### Built-in Types +### Integrierte Typen -Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html). +Dokumentation zu den in AssemblyScript eingebauten Basistypen finden Sie im [AssemblyScript wiki](https://www.assemblyscript.org/types.html). -The following additional types are provided by `@graphprotocol/graph-ts`. +Die folgenden zusätzlichen Typen werden von `@graphprotocol/graph-ts` bereitgestellt. #### ByteArray @@ -52,25 +52,25 @@ The following additional types are provided by `@graphprotocol/graph-ts`. import { ByteArray } from '@graphprotocol/graph-ts' ``` -`ByteArray` represents an array of `u8`. +`ByteArray` stellt ein Array von `u8` dar. -_Construction_ +_Konstruktion_ -- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. -- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. +- `fromI32(x: i32): ByteArray` - Zerlegt `x` in Bytes. +- `fromHexString(hex: string): ByteArray` - Die Eingabelänge muss gerade sein. Das Voranstellen von `0x` ist optional. -_Type conversions_ +_Typumwandlungen_ -- `toHexString(): string` - Converts to a hex string prefixed with `0x`. -- `toString(): string` - Interprets the bytes as a UTF-8 string. -- `toBase58(): string` - Encodes the bytes into a base58 string. -- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. -- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow.
+- `toHexString(): string` - Konvertiert in eine hexadezimale Zeichenkette mit dem Präfix `0x`. +- `toString(): string` - Interpretiert die Bytes als UTF-8-String. +- `toBase58(): string` - Kodiert die Bytes in einen Base58-String. +- `toU32(): u32` - Interpretiert die Bytes als Little-Endian `u32`. Wirft im Falle eines Überlaufs. +- `toI32(): i32` - Interpretiert das Byte-Array als Little-Endian `i32`. Wirft im Falle eines Überlaufs. -_Operators_ +_Operatoren_ -- `equals(y: ByteArray): bool` – can be written as `x == y`. -- `concat(other: ByteArray) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by `other` +- `equals(y: ByteArray): bool` - kann als `x == y` geschrieben werden. +- `concat(other: ByteArray) : ByteArray` - gibt ein neues `ByteArray` zurück, das aus `this` besteht, direkt gefolgt von `other` - `concatI32(other: i32) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by the byte representation of `other` #### BigDecimal @@ -83,24 +83,24 @@ import { BigDecimal } from '@graphprotocol/graph-ts' > Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent. -_Construction_ +_Konstruktion_ - `constructor(bigInt: BigInt)` – creates a `BigDecimal` from an `BigInt`. - `static fromString(s: string): BigDecimal` – parses from a decimal string. -_Type conversions_ +_Typumwandlungen_ - `toString(): string` – prints to a decimal string. _Math_ - `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`.
-- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`. -- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`. -- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`. -- `equals(y: BigDecimal): bool` – can be written as `x == y`. -- `notEqual(y: BigDecimal): bool` – can be written as `x != y`. -- `lt(y: BigDecimal): bool` – can be written as `x < y`. +- `minus(y: BigDecimal): BigDecimal` - kann geschrieben werden als `x - y`. +- `times(y: BigDecimal): BigDecimal` - kann geschrieben werden als `x * y`. +- `div(y: BigDecimal): BigDecimal` - kann als `x / y` geschrieben werden. +- `equals(y: BigDecimal): bool` - kann geschrieben werden als `x == y`. +- `notEqual(y: BigDecimal): bool` - kann geschrieben werden als `x != y`. +- `lt(y: BigDecimal): bool` - kann geschrieben werden als `x < y`. - `le(y: BigDecimal): bool` – can be written as `x <= y`. - `gt(y: BigDecimal): bool` – can be written as `x > y`. - `ge(y: BigDecimal): bool` – can be written as `x >= y`. @@ -112,11 +112,11 @@ _Math_ import { BigInt } from '@graphprotocol/graph-ts' ``` -`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. +`BigInt` wird zur Darstellung großer Ganzzahlen verwendet. Dazu gehören Ethereum-Werte vom Typ `uint32` bis `uint256` und `int64` bis `int256`. Alles unter `uint32`, wie `int32`, `uint24` oder `int8` wird als `i32` dargestellt. The `BigInt` class has the following API: -_Construction_ +_Konstruktion_ - `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`. - `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer. If your input is big-endian, call `.reverse()` first. - _Type conversions_ + _Typumwandlungen_ - `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters.
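The little-endian interpretation described for `ByteArray.toU32()` and `BigInt.fromSignedBytes` above can be illustrated with a plain-TypeScript sketch; `leBytesToU32` is a hypothetical name, not part of graph-ts:

```typescript
// Illustrative sketch (not the actual graph-ts implementation) of the
// little-endian byte interpretation documented for `ByteArray.toU32()`.
function leBytesToU32(bytes: Uint8Array): number {
  if (bytes.length > 4) {
    throw new Error('overflow') // mirrors the documented "throws in case of overflow"
  }
  let value = 0
  for (let i = bytes.length - 1; i >= 0; i--) {
    value = value * 256 + bytes[i] // least-significant byte comes first
  }
  return value
}

console.log(leBytesToU32(new Uint8Array([0x01, 0x00]))) // 1
console.log(leBytesToU32(new Uint8Array([0x00, 0x01]))) // 256
```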
@@ -186,18 +186,18 @@ import { Bytes } from '@graphprotocol/graph-ts' The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: -_Construction_ +_Konstruktion_ - `fromHexString(hex: string) : Bytes` - Convert the string `hex` which must consist of an even number of hexadecimal digits to a `ByteArray`. The string `hex` can optionally start with `0x` - `fromI32(i: i32) : Bytes` - Convert `i` to an array of bytes -_Type conversions_ +_Typumwandlungen_ - `b.toHex()` – returns a hexadecimal string representing the bytes in the array - `b.toString()` – converts the bytes in the array to a string of unicode characters - `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes) -_Operators_ +_Operatoren_ - `b.concat(other: Bytes) : Bytes` - - return new `Bytes` consisting of `this` directly followed by `other` - `b.concatI32(other: i32) : ByteArray` - return new `Bytes` consisting of `this` directly follow by the byte representation of `other` @@ -223,31 +223,31 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. 
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. -#### Creating entities +#### Erstellen von Entitäten -The following is a common pattern for creating entities from Ethereum events. +Im Folgenden finden Sie ein gängiges Muster zum Erstellen von Entitäten aus Ethereum-Ereignissen. ```typescript -// Import the Transfer event class generated from the ERC20 ABI +// Importieren Sie die aus dem ERC20-ABI generierte Transfer-Ereignisklasse import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' -// Import the Transfer entity type generated from the GraphQL schema +// Importieren Sie den aus dem GraphQL-Schema generierten Transfer-Entitätstyp import { Transfer } from '../generated/schema' -// Transfer event handler +// Ereignishandler für Transfer export function handleTransfer(event: TransferEvent): void { - // Create a Transfer entity, using the transaction hash as the entity ID + // Erstellen Sie eine Transfer-Entität und verwenden Sie den Transaktions-Hash als Entitäts-ID let id = event.transaction.hash let transfer = new Transfer(id) - // Set properties on the entity, using the event parameters + // Legen Sie mithilfe der Ereignisparameter Eigenschaften für die Entität fest transfer.from = event.params.from transfer.to = event.params.to transfer.amount = event.params.amount - // Save the entity to the store + // Speichern Sie die Entität im Store transfer.save() } ``` @@ -258,50 +258,50 @@ Each entity must have a unique ID to avoid collisions with other entities. It is > Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
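The note above warns that a bare transaction hash can collide when one transaction emits several entity-creating events. A common workaround is to append the event's log index to the hash; the sketch below is plain TypeScript with illustrative names, not a graph-ts API:

```typescript
// Sketch of a collision-free entity ID: combine the transaction hash with
// the event's log index so several events in one transaction still get
// distinct IDs. Names are illustrative.
function makeEntityId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex.toString()}`
}

console.log(makeEntityId('0xabc123', 0)) // "0xabc123-0"
console.log(makeEntityId('0xabc123', 1)) // "0xabc123-1"
```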
-#### Loading entities from the store +#### Laden von Entitäten aus dem Store -If an entity already exists, it can be loaded from the store with the following: +Wenn eine Entität bereits vorhanden ist, kann sie wie folgt aus dem Store geladen werden: ```typescript -let id = event.transaction.hash // or however the ID is constructed +let id = event.transaction.hash // oder wie auch immer die ID konstruiert wird let transfer = Transfer.load(id) if (transfer == null) { transfer = new Transfer(id) } -// Use the Transfer entity as before +// Verwenden Sie die Transfer-Entität wie zuvor ``` As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. > Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. -#### Looking up entities created withing a block +#### Suchen nach Entitäten, die innerhalb eines Blocks erstellt wurden As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript -let id = event.transaction.hash // or however the ID is constructed +let id = event.transaction.hash // oder wie auch immer die ID konstruiert wird let transfer = Transfer.loadInBlock(id) if (transfer == null) { transfer = new Transfer(id) } -// Use the Transfer entity as before +// Verwenden Sie die Transfer-Entität wie zuvor ``` > Note: If there is no entity created in the given block, `loadInBlock` will return `null` even if there is an entity with the given ID in the store. -#### Looking up derived entities +#### Suchen nach abgeleiteten Entitäten As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.31.0 and `@graphprotocol/graph-cli` v0.51.0 the `loadRelated` method is available. -This enables loading derived entity fields from within an event handler. For example, given the following schema: +Dies ermöglicht das Laden abgeleiteter Entitätsfelder aus einem Event-Handler heraus. Zum Beispiel anhand des folgenden Schemas: ```graphql type Token @entity { @@ -320,18 +320,18 @@ The following code will load the `Token` entity that the `Holder` entity was der ```typescript let holder = Holder.load('test-id') -// Load the Token entities associated with a given holder +// Laden Sie die Token-Entitäten, die einem bestimmten Inhaber zugeordnet sind let tokens = holder.tokens.load() ``` -#### Updating existing entities +#### Aktualisieren vorhandener Entitäten -There are two ways to update an existing entity: +Es gibt zwei Möglichkeiten, eine vorhandene Entität zu aktualisieren: 1. Load the entity with e.g. 
`Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store. 2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it. -Changing properties is straight forward in most cases, thanks to the generated property setters: +Dank der generierten Eigenschaftssetzer ist das Ändern von Eigenschaften in den meisten Fällen unkompliziert: ```typescript let transfer = new Transfer(id) @@ -340,7 +340,7 @@ transfer.to = ... transfer.amount = ... ``` -It is also possible to unset properties with one of the following two instructions: +Es ist auch möglich, Eigenschaften mit einer der folgenden beiden Anweisungen zu deaktivieren: ```typescript transfer.from.unset() @@ -363,7 +363,7 @@ entity.numbers = numbers entity.save() ``` -#### Removing entities from the store +#### Entfernen von Entitäten aus dem Store There is currently no way to remove an entity via the generated types. Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`: @@ -376,15 +376,15 @@ store.remove('Transfer', id) ### Ethereum API -The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks and the encoding/decoding Ethereum data. +Die Ethereum API bietet Zugriff auf Smart Contracts, öffentliche Zustandsvariablen, Vertragsfunktionen, Ereignisse, Transaktionen, Blöcke und die Kodierung/Dekodierung von Ethereum-Daten. -#### Support for Ethereum Types +#### Unterstützung von Ethereum-Typen -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. 
For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -406,7 +406,7 @@ transfer.amount = event.params.amount transfer.save() ``` -#### Events and Block/Transaction Data +#### Ereignisse und Block-/Transaktionsdaten Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of. The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): @@ -481,23 +481,23 @@ class Log { } ``` -#### Access to Smart Contract State +#### Zugriff auf den Smart Contract-Status -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. -A common pattern is to access the contract from which an event originates. This is achieved with the following code: +Ein gängiges Muster ist der Zugriff auf den Vertrag, aus dem ein Ereignis hervorgeht. 
Dies wird mit dem folgenden Code erreicht: ```typescript -// Import the generated contract class and generated Transfer event class +// Importieren Sie die generierte Vertragsklasse und die generierte Transfer-Ereignisklasse import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract' -// Import the generated entity class +// Importieren Sie die generierte Entitätsklasse import { Transfer } from '../generated/schema' export function handleTransfer(event: TransferEvent) { - // Bind the contract to the address that emitted the event + // Binden Sie den Vertrag an die Adresse, die das Ereignis ausgegeben hat let contract = ERC20Contract.bind(event.address) - // Access state variables and functions by calling them + // Greifen Sie auf Zustandsvariablen und Funktionen zu, indem Sie sie aufrufen let erc20Symbol = contract.symbol() } ``` @@ -506,13 +506,13 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Jeder andere Vertrag, der Teil des Subgraphen ist, kann aus dem generierten Code importiert werden und an eine gültige Adresse gebunden werden. -#### Handling Reverted Calls +#### Behandlung rückgängig gemachter Aufrufe -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. +Wenn die Nur-Lese-Methoden Ihres Vertrags rückgängig gemacht werden können, sollten Sie dies durch den Aufruf der generierten Vertragsmethode mit dem Präfix `try_` behandeln. -- For example, the Gravity contract exposes the `gravatarToOwner` method.
This code would be able to handle a revert in that method: +- Der Gravity-Vertrag stellt zum Beispiel die Methode `gravatarToOwner` zur Verfügung. Dieser Code wäre in der Lage, einen Revert in dieser Methode zu behandeln: ```typescript let gravity = Gravity.bind(event.address) @@ -524,11 +524,11 @@ if (callResult.reverted) { } ``` -> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. +> Hinweis: Ein Graph-Knoten, der mit einem Geth- oder Infura-Client verbunden ist, erkennt möglicherweise nicht alle Reverts. Wenn Sie sich darauf verlassen, empfehlen wir die Verwendung eines Graph-Knotens, der mit einem Parity-Client verbunden ist. -#### Encoding/Decoding ABI +#### Kodierung/Dekodierung von ABI -Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. +Daten können mit den Funktionen `encode` und `decode` im Modul `ethereum` gemäß dem ABI-Kodierungsformat von Ethereum kodiert und dekodiert werden. ```typescript import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts' @@ -545,7 +545,7 @@ let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))! let decoded = ethereum.decode('(address,uint256)', encoded) ``` -For more information: +Weitere Informationen: - [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types) - Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi) @@ -576,13 +576,13 @@ let eoa = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045') let isContract = ethereum.hasCode(eoa).inner // returns false ``` -### Logging API +### Logging-API ```typescript import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels.
A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -598,9 +598,9 @@ The `log` API takes a format string and an array of string values.
It then repla log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) ``` -#### Logging one or more values +#### Protokollierung eines oder mehrerer Werte -##### Logging a single value +##### Protokollierung eines einzelnen Werts In the example below, the string value "A" is passed into an array to become`['A']` before being logged: @@ -608,25 +608,25 @@ In the example below, the string value "A" is passed into an array to become`['A let myValue = 'A' export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My value is: A" + // Zeigt an : "My value is: A" log.info('My value is: {}', [myValue]) } ``` -##### Logging a single entry from an existing array +##### Protokollieren eines einzelnen Eintrags aus einem vorhandenen Array -In the example below, only the first value of the argument array is logged, despite the array containing three values. +Im folgenden Beispiel wird nur der erste Wert des Argumentarrays protokolliert, obwohl das Array drei Werte enthält. ```typescript let myArray = ['A', 'B', 'C'] export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My value is: A" (Even though three values are passed to `log.info`) + // Zeigt an : "My value is: A" (Obwohl drei Werte an „log.info“ übergeben werden) log.info('My value is: {}', myArray) } ``` -#### Logging multiple entries from an existing array +#### Protokollierung mehrerer Einträge aus einem vorhandenen Array Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. 
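The left-to-right placeholder substitution described above can be modeled in a few lines of plain TypeScript. This is an illustrative sketch only — `formatLog` is not part of `graph-ts`; Graph Node performs this substitution internally:

```typescript
// Illustrative model of the `log` API's `{}` placeholder substitution:
// each `{}` consumes the next value from the array, in order.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  // Surplus placeholders (more `{}` than values) are left as-is.
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

formatLog('My value is: {}', ['A'])
// "My value is: A"

formatLog('My first value is: {}, second value is: {}, third value is: {}', ['A', 'B', 'C'])
// "My first value is: A, second value is: B, third value is: C"
```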
@@ -634,25 +634,25 @@ Each entry in the arguments array requires its own placeholder `{}` in the log m let myArray = ['A', 'B', 'C'] export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My first value is: A, second value is: B, third value is: C" + // Zeigt an : "My first value is: A, second value is: B, third value is: C" log.info('My first value is: {}, second value is: {}, third value is: {}', myArray) } ``` -##### Logging a specific entry from an existing array +##### Protokollieren eines bestimmten Eintrags aus einem vorhandenen Array -To display a specific value in the array, the indexed value must be provided. +Um einen bestimmten Wert im Array anzuzeigen, muss der indizierte Wert angegeben werden. ```typescript export function handleSomeEvent(event: SomeEvent): void { - // Displays : "My third value is C" + // Zeigt an : "My third value is C" log.info('My third value is: {}', [myArray[2]]) } ``` -##### Logging event information +##### Protokollierung von Ereignisinformationen -The example below logs the block number, block hash and transaction hash from an event: +Im folgenden Beispiel werden Blocknummer, Block-Hash und Transaktions-Hash eines Ereignisses protokolliert: ```typescript import { log } from '@graphprotocol/graph-ts' @@ -672,31 +672,31 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Smart Contracts verankern gelegentlich IPFS-Dateien in der Kette. Dadurch können Mappings die IPFS-Hashes aus dem Vertrag abrufen und die entsprechenden Dateien aus IPFS lesen. 
Die Dateidaten werden als `Bytes` zurückgegeben, was normalerweise eine weitere Verarbeitung erfordert, z. B. mit der `json`-API, die später auf dieser Seite dokumentiert wird. -Given an IPFS hash or path, reading a file from IPFS is done as follows: +Bei gegebenem IPFS-Hash oder -Pfad erfolgt das Lesen einer Datei aus IPFS wie folgt: ```typescript -// Put this inside an event handler in the mapping +// Fügen Sie dies in einen Event-Handler im Mapping ein let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D' let data = ipfs.cat(hash) -// Paths like `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile` -// that include files in directories are also supported +// Pfade wie `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile`, +// die Dateien in Verzeichnissen enthalten, werden ebenfalls unterstützt let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' let data = ipfs.cat(path) ``` -**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. +**Anmerkung:** `ipfs.cat` ist zur Zeit nicht deterministisch. Wenn die Datei nicht über das IPFS-Netzwerk abgerufen werden kann, bevor die Anfrage eine Zeitüberschreitung erreicht, wird `null` zurückgegeben. Aus diesem Grund lohnt es sich immer, das Ergebnis auf `null` zu überprüfen. -It is also possible to process larger files in a streaming fashion with `ipfs.map`. The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: +Es ist auch möglich, größere Dateien mit `ipfs.map` in einem Streaming-Verfahren zu verarbeiten.
Die Funktion erwartet den Hash oder Pfad für eine IPFS-Datei, den Namen eines Callbacks und Flags, um ihr Verhalten zu ändern: ```typescript import { JSONValue, Value } from '@graphprotocol/graph-ts' export function processItem(value: JSONValue, userData: Value): void { - // See the JSONValue documentation for details on dealing - // with JSON values + // Weitere Informationen zum Umgang + // mit JSON-Werten finden Sie in der JSONValue-Dokumentation let obj = value.toObject() let id = obj.get('id') let title = obj.get('title') @@ -705,23 +705,23 @@ export function processItem(value: JSONValue, userData: Value): void { return } - // Callbacks can also created entities + // Callbacks können auch Entitäten erstellen let newItem = new Item(id) newItem.title = title.toString() - newitem.parent = userData.toString() // Set parent to "parentId" + newItem.parent = userData.toString() // Übergeordnetes Element auf „parentId“ setzen newitem.save() } -// Put this inside an event handler in the mapping +// Fügen Sie dies in einen Event-Handler im Mapping ein ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) -// Alternatively, use `ipfs.mapJSON` +// Alternativ verwenden Sie `ipfs.mapJSON`. ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) ``` -The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. +Das einzige derzeit unterstützte Flag ist `json`, das an `ipfs.map` übergeben werden muss.
Mit dem `json`-Flag muss die IPFS-Datei aus einer Reihe von JSON-Werten bestehen, ein Wert pro Zeile. Der Aufruf von `ipfs.map` liest jede Zeile der Datei, deserialisiert sie in einen `JSONValue` und ruft den Callback für jeden dieser Werte auf. Der Callback kann dann Entity-Operationen verwenden, um Daten aus dem `JSONValue` zu speichern. Entity-Änderungen werden nur gespeichert, wenn der Handler, der `ipfs.map` aufgerufen hat, erfolgreich beendet ist; in der Zwischenzeit werden sie im Speicher gehalten, und die Größe der Datei, die `ipfs.map` verarbeiten kann, ist daher begrenzt. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +Bei Erfolg gibt `ipfs.map` `void` zurück. Wenn ein Aufruf des Callbacks einen Fehler verursacht, wird der Handler, der `ipfs.map` aufgerufen hat, abgebrochen und der Subgraph wird als fehlgeschlagen markiert. ### Crypto API @@ -729,9 +729,9 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes import { crypto } from '@graphprotocol/graph-ts' ``` -The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: +Die `crypto`-API stellt kryptographische Funktionen für die Verwendung in Mappings zur Verfügung. Momentan gibt es nur eine: -- `crypto.keccak256(input: ByteArray): ByteArray` +- `crypto.keccak256(input: ByteArray): ByteArray` ### JSON API @@ -746,7 +746,7 @@ JSON data can be parsed using the `json` API: - `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` - `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed -The `JSONValue` class provides a way to pull values out of an arbitrary JSON document.
Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: +Die Klasse `JSONValue` bietet eine Möglichkeit, Werte aus einem beliebigen JSON-Dokument zu ziehen. Da JSON-Werte Boolesche Werte, Zahlen, Arrays und mehr sein können, verfügt `JSONValue` über die Eigenschaft `kind`, um den Typ eines Wertes zu überprüfen: ```typescript let value = json.fromBytes(...) @@ -768,9 +768,9 @@ When the type of a value is certain, it can be converted to a [built-in type](#b - `value.toString(): string` - `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) -### Type Conversions Reference +### Referenz zu Typkonvertierungen -| Source(s) | Destination | Conversion function | +| Quelle(n) | Ziel | Konvertierungsfunktion | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | String | s.toHexString() | @@ -809,15 +809,15 @@ When the type of a value is certain, it can be converted to a [built-in type](#b | String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | | String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | -### Data Source Metadata +### Metadaten der Datenquelle -You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace: +Sie können die Vertragsadresse, das Netzwerk und den Kontext der Datenquelle, die den Handler aufgerufen hat, durch den Namespace `dataSource` überprüfen: - `dataSource.address(): Address` - `dataSource.network(): string` - `dataSource.context(): DataSourceContext` -### Entity and DataSourceContext +### Entität und DataSourceContext The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields: @@ -834,9 +834,9 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to - `getBoolean(key: string): boolean` - `getBigDecimal(key: string):
BigDecimal` -### DataSourceContext in Manifest +### DataSourceContext im Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +Dieser Kontext ist dann in Ihren Subgraph-Zuordnungsdateien zugänglich und ermöglicht dynamischere und konfigurierbarere Subgraphen. diff --git a/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..b5930f918532 100644 --- a/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -1,8 +1,8 @@ --- -title: Common AssemblyScript Issues +title: Häufige Probleme mit AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +Es gibt bestimmte [AssemblyScript](https://github.com/AssemblyScript/assemblyscript)-Probleme, die bei der Entwicklung von Subgraphen häufig auftreten.
Sie sind unterschiedlich schwer zu beheben, aber es kann hilfreich sein, sie zu kennen. Im Folgenden finden Sie eine nicht erschöpfende Liste dieser Probleme: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). +- `Private`-Klassenvariablen werden in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features) nicht erzwungen. Es gibt keine Möglichkeit, Klassenvariablen davor zu schützen, dass sie direkt vom Klassenobjekt aus geändert werden. +- Der Geltungsbereich wird nicht in [Schließungsfunktionen](https://www.assemblyscript.org/status.html#on-closures) vererbt, d.h. außerhalb von Schließungsfunktionen deklarierte Variablen können nicht verwendet werden. Erläuterung in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx index f9d419ffe1ce..bb9fe36ade05 100644 --- a/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/install-the-cli.mdx @@ -1,22 +1,22 @@ --- -title: Install the Graph CLI +title: Installieren der Graph-CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers.
To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Überblick -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Erste Schritte -### Install the Graph CLI +### Installieren der Graph-CLI The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
Führen Sie einen der folgenden Befehle auf Ihrem lokalen Computer aus: -#### Using [npm](https://www.npmjs.com/) +#### Verwendung von [npm](https://www.npmjs.com/) ```bash npm install -g @graphprotocol/graph-cli@latest @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. -## Create a Subgraph +## Erstellen eines Subgraphen -### From an Existing Contract +### Aus einem bestehenden Vertrag -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -47,39 +47,39 @@ graph init \ - The command tries to retrieve the contract ABI from Etherscan. - - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + - Die Graph CLI stützt sich auf einen öffentlichen RPC-Endpunkt. Gelegentliche Fehler sind zwar zu erwarten, aber durch Wiederholungen lässt sich dieses Problem in der Regel beheben. Bei anhaltenden Fehlern sollten Sie eine lokale ABI verwenden. - If any of the optional arguments are missing, it guides you through an interactive form.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. -### From an Example Subgraph +### Aus einem Beispiel-Subgraphen -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- Der Subgraph behandelt diese Ereignisse, indem er `Gravatar`-Entitäten in den Graph Node Store schreibt und sicherstellt, dass diese entsprechend den Ereignissen aktualisiert werden. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` sind Schlüsselkomponenten von Subgraphen. Sie definieren die Datenquellen, die der Subgraph indiziert und verarbeitet. Eine `dataSource` gibt an, welcher Smart Contract überwacht werden soll, welche Ereignisse verarbeitet werden sollen und wie sie zu behandeln sind.
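Zur Veranschaulichung eine skizzierte `dataSource`-Definition, wie sie im Manifest (`subgraph.yaml`) stehen könnte — Vertragsname, Adresse und Handler sind hier nur Platzhalter nach dem Muster des Gravity-Beispiels:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      # Platzhalter-Adresse; durch den eigenen Vertrag ersetzen
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
      abi: Gravity
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      eventHandlers:
        # Welche Ereignisse verarbeitet werden und welcher Handler sie behandelt
        - event: NewGravatar(uint256,address,string,string)
          handler: handleNewGravatar
      file: ./src/mapping.ts
```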
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Neuere Versionen der Graph CLI unterstützen das Hinzufügen neuer `dataSources` zu einem bestehenden Subgraphen durch den Befehl `graph add`: ```sh graph add
[] -Options: +Optionen: - --abi Path to the contract ABI (default: download from Etherscan) - --contract-name Name of the contract (default: Contract) - --merge-entities Whether to merge entities with the same name (default: false) - --network-file Networks config file path (default: "./networks.json") + --abi Pfad zur Vertrags-ABI (default: download from Etherscan) + --contract-name Name des Vertrags (default: Contract) + --merge-entities Ob Entitäten mit demselben Namen zusammengeführt werden sollen (default: false) + --network-file Pfad der Netzwerkkonfigurationsdatei (default: "./networks.json") ``` #### Besonderheiten @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs.
| -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx index 7f0283d91f62..99513f73b1d8 100644 --- a/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/ql-schema.mdx @@ -4,39 +4,39 @@ title: The Graph QL Schema ## Überblick -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. 
-> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Hinweis: Wenn Sie noch nie ein GraphQL-Schema geschrieben haben, empfehlen wir Ihnen, sich diese Einführung in das GraphQL-Typsystem anzusehen. Die Referenzdokumentation für GraphQL-Schemata finden Sie im Abschnitt [GraphQL API](/subgraphs/querying/graphql-api/). ### Defining Entities Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. -- It may be useful to imagine entities as "objects containing data", rather than as events or functions. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Alle Abfragen werden gegen das im Subgraph-Schema definierte Datenmodell durchgeführt. Daher sollte sich der Entwurf des Subgraph-Schemas an den Abfragen orientieren, die Ihre Anwendung durchführen muss. +- Es kann sinnvoll sein, sich Entitäten als „Objekte, die Daten enthalten“, vorzustellen und nicht als Ereignisse oder Funktionen. +- Sie definieren Entitätstypen in „schema.graphql“, und Graph Node generiert Top-Level-Felder zur Abfrage einzelner Instanzen und Sammlungen dieses Entitätstyps. - Each type that should be an entity is required to be annotated with an `@entity` directive. - By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. 
- - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. - - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. + - Die Veränderbarkeit hat ihren Preis. Daher wird empfohlen, Entitätstypen, die niemals verändert werden, wie z. B. solche, die Daten enthalten, die wortwörtlich aus der Kette extrahiert wurden, mit `@entity(immutable: true)` als unveränderlich zu kennzeichnen. + - Wenn Änderungen in demselben Block stattfinden, in dem die Entität erstellt wurde, können Mappings Änderungen an unveränderlichen Entitäten vornehmen. Unveränderliche Entitäten sind viel schneller zu schreiben und abzufragen, so dass sie, wann immer möglich, verwendet werden sollten. #### Good Example -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. +Die folgende `Gravatar`-Entität ist um ein Gravatar-Objekt herum aufgebaut und ist ein gutes Beispiel dafür, wie eine Entität definiert werden könnte. ```graphql -type Gravatar @entity(immutable: true) { +type Gravatar @entity(immutable: true) { id: Bytes! - owner: Bytes + owner: Bytes displayName: String imageUrl: String - accepted: Boolean + accepted: Boolean } ``` #### Bad Example -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +Die folgenden Beispiele `GravatarAccepted` und `GravatarDeclined` basieren auf Ereignissen. Es wird nicht empfohlen, Ereignisse oder Funktionsaufrufe 1:1 auf Entitäten abzubilden.
```graphql type GravatarAccepted @entity { @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Type | Beschreibung | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Beschreibung | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. 
| +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. 
In general, storing arrays of entities should be avoided as much as is practical. #### Beispiel @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx index dbffb92cfc5e..a2f39804cff0 100644 --- a/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starten Ihres Subgraphen ## Überblick -The Graph beherbergt Tausende von Subgraphen, die bereits für Abfragen zur Verfügung stehen. Schauen Sie also in [The Graph Explorer] (https://thegraph.com/explorer) nach und finden Sie einen, der Ihren Anforderungen entspricht. 
+The Graph hosts thousands of Subgraphs that are already available for querying, so check out [The Graph Explorer](https://thegraph.com/explorer) and find one that already fits your needs.

-Wenn Sie einen [Subgraphen](/subgraphs/developing/subgraphs/) erstellen, erstellen Sie eine benutzerdefinierte offene API, die Daten aus einer Blockchain extrahiert, verarbeitet, speichert und über GraphQL einfach abfragen lässt.
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.

-Die Entwicklung von Subgraphen reicht von einfachen Gerüst-Subgraphen bis hin zu fortgeschrittenen, speziell zugeschnittenen Subgraphen.
+Subgraph development ranges from simple scaffold Subgraphs to advanced, custom-tailored Subgraphs.

### Start des Erstellens

Starten Sie den Prozess und erstellen Sie einen Subgraphen, der Ihren Anforderungen entspricht:

1. [Installieren der CLI](/subgraphs/developing/creating/install-the-cli/) - Richten Sie Ihre Infrastruktur ein
-2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Verstehen der wichtigsten Komponenten eines Subgraphen
+2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand the key component of a Subgraph
3. [Das GraphQL-Schema](/subgraphs/developing/creating/ql-schema/) - Schreiben Sie Ihr Schema
4. [Schreiben von AssemblyScript-Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Schreiben Sie Ihre Mappings
5. [Erweiterte Funktionen](/subgraphs/developing/creating/advanced/) - Passen Sie Ihren Subgraph mit erweiterten Funktionen an

Erkunden Sie zusätzliche [Ressourcen für APIs](/subgraphs/developing/creating/graph-ts/README/) und führen Sie lokale Tests mit [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) durch.
+
+| Version | Release notes |
+| :-----: | --- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports the [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature for pruning Subgraphs |
+| 0.0.9 | Supports the `endBlock` feature |
+| 0.0.8 | Added support for polling [block handlers](/developing/creating-a-subgraph/#polling-filter) and [initialisation handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [file data sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports the fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx
index a3959f1f4d57..a1e209266146 100644
--- a/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -16,7 +16,7 @@ Die **Subgraph-Definition** besteht aus den folgenden Dateien:

### Subgraph-Fähigkeiten

-A single subgraph can:
+A single Subgraph can:

- Index data from multiple smart contracts (but not multiple networks).

@@ -24,48 +24,48 @@ A single subgraph can:

- Add an entry for each contract that requires indexing to the `dataSources` array.

-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is:
+For the example Subgraph listed above, `subgraph.yaml` is:

```yaml
-specVersion: 0.0.4
-description: Gravatar for Ethereum
-repository: https://github.com/graphprotocol/graph-tooling
+specVersion: 1.3.0
+description: Gravatar for Ethereum
+repository: https://github.com/graphprotocol/graph-tooling
 schema:
-  file: ./schema.graphql
+  file: ./schema.graphql
indexerHints:
  prune: auto
dataSources:
-  - kind: ethereum/contract
-    name: Gravity
-    network: mainnet
-    source:
-      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
-      abi: Gravity
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      abi: Gravity
    startBlock: 6175244
    endBlock: 7175245
-    context:
+    context:
      foo:
        type: Bool
-        data: true
+        data: true
      bar:
        type: String
-        data: 'bar'
+        data: 'bar'
    mapping:
-      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      entities:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
        - Gravatar
      abis:
-        - name: Gravity
-          file: ./abis/Gravity.json
-      eventHandlers:
-        - event: NewGravatar(uint256,address,string,string)
-          handler: handleNewGravatar
-        - event: UpdatedGravatar(uint256,address,string,string)
-          handler: handleUpdatedGravatar
+        - name: Gravity
+          file: ./abis/Gravity.json
+      eventHandlers:
+        - event: NewGravatar(uint256,address,string,string)
+          handler: handleNewGravatar
+        - event: UpdatedGravatar(uint256,address,string,string)
+          handler: handleUpdatedGravatar
      callHandlers:
        - function: createGravatar(string,string)
          handler: handleCreateGravatar
@@ -73,53 +73,53 @@ dataSources:
        - handler: handleBlock
        - handler: handleBlockWithCall
          filter:
-            kind: call
-      file: ./src/mapping.ts
+            kind: call
+      file: ./src/mapping.ts
```

## Subgraph Entries

-> Important Note: Be sure you populate your subgraph manifest with all handlers and
[entities](/subgraphs/developing/creating/ql-schema/).
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).

-The important entries to update for the manifest are:
+The important entries to update for the manifest are:

-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features and releases.

-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.

-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.

- `features`: a list of all used [feature](#experimental-features) names.

-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section.

-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use.
The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.

- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.

- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.

-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.

- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file.

- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.

-- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.

-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs of function calls into entities in the store.

-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.

-A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+A single Subgraph can index data from multiple smart contracts. Add an entry to the `dataSources` array for each contract from which data needs to be indexed.
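A multi-contract manifest following the entries described above might be sketched as below. This is a hypothetical illustration, not the Gravatar example from this page: the contract names, addresses, handler names, and file paths are invented.

```yaml
# Hypothetical sketch: one Subgraph indexing two contracts, each with its own
# entry in the `dataSources` array. Both must be on the same network.
dataSources:
  - kind: ethereum/contract
    name: TokenA
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000001'
      abi: Token
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: Token
          file: ./abis/Token.json
      eventHandlers:
        - event: Transfer(address,address,uint256)
          handler: handleTransferA
      file: ./src/mapping-a.ts
  - kind: ethereum/contract
    name: TokenB
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000002'
      abi: Token
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: Token
          file: ./abis/Token.json
      eventHandlers:
        - event: Transfer(address,address,uint256)
          handler: handleTransferB
      file: ./src/mapping-b.ts
```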
-## Event Handlers
+## Event Handlers

-Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic.

### Defining an Event Handler

-An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.

```yaml
dataSources:
@@ -131,29 +131,29 @@ dataSources:
      abi: Gravity
    mapping:
      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      entities:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
        - Gravatar
-        - Transaction
+        - Transaction
      abis:
-        - name: Gravity
-          file: ./abis/Gravity.json
+        - name: Gravity
+          file: ./abis/Gravity.json
      eventHandlers:
-        - event: Approval(address,address,uint256)
-          handler: handleApproval
-        - event: Transfer(address,address,uint256)
+        - event: Approval(address,address,uint256)
+          handler: handleApproval
+        - event: Transfer(address,address,uint256)
          handler: handleTransfer
-          topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic.
+          topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic.
```

## Call Handlers

-While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers that reference the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler receives an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.

-Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
+Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself, or when it is marked as external in Solidity and called as part of another function in the same contract.

-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
### Defining a Call Handler @@ -161,24 +161,24 @@ To define a call handler in your manifest, simply add a `callHandlers` array und ```yaml dataSources: - - kind: ethereum/contract - name: Gravity - network: mainnet - source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + - art: ethereum/contract + Name: Gravity + Netzwerk: Hauptnetz + Quelle: + Adresse: '0x731a10897d267e19b34503ad902d0a29173ba4b1' abi: Gravity mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - entities: + Art: Ethereum/Ereignisse + apiVersion: 0.0.9 + Sprache: wasm/assemblyscript + Entitäten: - Gravatar - - Transaction + - transaktion abis: - - name: Gravity - file: ./abis/Gravity.json + - Name: Gravity + Datei: ./abis/Gravity.json callHandlers: - - function: createGravatar(string,string) + - Funktion: createGravatar(string,string) handler: handleCreateGravatar ``` @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. 
To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.

### Supported Filters

@@ -218,33 +218,33 @@ filter:

_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.

The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
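The gating behaviour of the `call` filter can be sketched in plain TypeScript. This is an illustrative model only, not Graph Node's actual API: the types and function are simplified stand-ins for the internal check that decides whether a `kind: call` filtered block handler runs.

```typescript
// Illustrative sketch: a `call`-filtered block handler runs only when the
// block contains at least one call to the data source contract.
interface Call {
  to: string
}
interface Block {
  number: number
  calls: Call[]
}

const DATA_SOURCE = '0x731a10897d267e19b34503ad902d0a29173ba4b1'

// True when the block contains a call to the data source address,
// i.e. when a handler with `filter: { kind: call }` would be invoked.
function shouldRunCallFilteredHandler(block: Block, dataSource: string): boolean {
  return block.calls.some((c) => c.to.toLowerCase() === dataSource.toLowerCase())
}

const withCall: Block = { number: 1, calls: [{ to: DATA_SOURCE }] }
const withoutCall: Block = { number: 2, calls: [{ to: '0x0000000000000000000000000000000000000000' }] }

console.log(shouldRunCallFilteredHandler(withCall, DATA_SOURCE)) // true
console.log(shouldRunCallFilteredHandler(withoutCall, DATA_SOURCE)) // false
```

Without any filter, the handler would run for both blocks; the filter is what restricts execution to blocks touching the contract.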
```yaml
dataSources:
-  - kind: ethereum/contract
-    name: Gravity
-    network: dev
-    source:
-      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+  - kind: ethereum/contract
+    name: Gravity
+    network: dev
+    source:
+      address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
      abi: Gravity
    mapping:
-      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      entities:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
        - Gravatar
-        - Transaction
+        - Transaction
      abis:
-        - name: Gravity
-          file: ./abis/Gravity.json
-      blockHandlers:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      blockHandlers:
        - handler: handleBlock
        - handler: handleBlockWithCallToContract
          filter:
-            kind: call
+            kind: call
```

#### Polling Filter

@@ -261,7 +261,7 @@ blockHandlers:
  every: 10
```

-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.

#### Once Filter

@@ -276,7 +276,7 @@ blockHandlers:
  kind: once
```

-The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
```ts
export function handleOnce(block: ethereum.Block): void {
@@ -288,14 +288,14 @@ export function handleOnce(block: ethereum.Block): void {

### Mapping Function

-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts, and create or update entities.

```typescript
-import { ethereum } from '@graphprotocol/graph-ts'
+import { ethereum } from '@graphprotocol/graph-ts'

-export function handleBlock(block: ethereum.Block): void {
+export function handleBlock(block: ethereum.Block): void {
  let id = block.hash
-  let entity = new Block(id)
+  let entity = new Block(id)
  entity.save()
}
```

@@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de

Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them.

-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
+To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.

```yaml
eventHandlers:
@@ -330,7 +330,7 @@ Inside the handler function, the receipt can be accessed in the `Event.receipt` 

## Order of Triggering Handlers

-The triggers for a data source within a block are ordered using the following process:
+The triggers for a data source within a block are ordered using the following process:

1. Event and call triggers are first ordered by transaction index within the block.
2.
Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest.
@@ -352,24 +352,24 @@ First, you define a regular data source for the main contract. The snippet below

```yaml
dataSources:
-  - kind: ethereum/contract
-    name: Factory
-    network: mainnet
-    source:
-      address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95'
+  - kind: ethereum/contract
+    name: Factory
+    network: mainnet
+    source:
+      address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95'
      abi: Factory
    mapping:
-      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      file: ./src/mappings/factory.ts
-      entities:
-        - Directory
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mappings/factory.ts
+      entities:
+        - Directory
      abis:
        - name: Factory
-          file: ./abis/factory.json
-      eventHandlers:
-        - event: NewExchange(address,address)
+          file: ./abis/factory.json
+      eventHandlers:
+        - event: NewExchange(address,address)
          handler: handleNewExchange
```

@@ -379,34 +379,34 @@ Then, you add _data source templates_ to the manifest. These are identical to re

```yaml
dataSources:
-  - kind: ethereum/contract
+  - kind: ethereum/contract
    name: Factory
-    # ... other source fields for the main contract ...
-templates:
+    # ... other source fields for the main contract ...
+templates:
  - name: Exchange
-    kind: ethereum/contract
-    network: mainnet
-    source:
+    kind: ethereum/contract
+    network: mainnet
+    source:
      abi: Exchange
-    mapping:
-      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      file: ./src/mappings/exchange.ts
-      entities:
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mappings/exchange.ts
+      entities:
        - Exchange
      abis:
-        - name: Exchange
-          file: ./abis/exchange.json
-      eventHandlers:
-        - event: TokenPurchase(address,uint256,uint256)
-          handler: handleTokenPurchase
-        - event: EthPurchase(address,uint256,uint256)
-          handler: handleEthPurchase
-        - event: AddLiquidity(address,uint256,uint256)
-          handler: handleAddLiquidity
-        - event: RemoveLiquidity(address,uint256,uint256)
-          handler: handleRemoveLiquidity
+        - name: Exchange
+          file: ./abis/exchange.json
+      eventHandlers:
+        - event: TokenPurchase(address,uint256,uint256)
+          handler: handleTokenPurchase
+        - event: EthPurchase(address,uint256,uint256)
+          handler: handleEthPurchase
+        - event: AddLiquidity(address,uint256,uint256)
+          handler: handleAddLiquidity
+        - event: RemoveLiquidity(address,uint256,uint256)
+          handler: handleRemoveLiquidity
```

### Instantiating a Data Source Template

@@ -454,30 +454,30 @@ There are setters and getters like `setString` and `getString` for all value typ

## Start Blocks

-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing.
Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.

```yaml
dataSources:
-  - kind: ethereum/contract
-    name: ExampleSource
-    network: mainnet
-    source:
-      address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95'
-      abi: ExampleContract
+  - kind: ethereum/contract
+    name: ExampleSource
+    network: mainnet
+    source:
+      address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95'
+      abi: ExampleContract
    startBlock: 6627917
    mapping:
      kind: ethereum/events
-      apiVersion: 0.0.6
-      language: wasm/assemblyscript
-      file: ./src/mappings/factory.ts
-      entities:
-        - User
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mappings/factory.ts
+      entities:
+        - User
      abis:
-        - name: ExampleContract
-          file: ./abis/ExampleContract.json
-      eventHandlers:
-        - event: NewEvent(address,address)
-          handler: handleNewEvent
+        - name: ExampleContract
+          file: ./abis/ExampleContract.json
+      eventHandlers:
+        - event: NewEvent(address,address)
+          handler: handleNewEvent
```

> **Note:** The contract creation block can be quickly looked up on Etherscan:

@@ -488,13 +488,13 @@ dataSources:

## Indexer Hints

-The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+The `indexerHints` setting in a Subgraph's manifest provides directives for Indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
> This feature is available from `specVersion: 1.0.0`

### Prune

-`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include:

1. `"never"`: No pruning of historical data; retains the entire history.
2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
@@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde
  prune: auto
```

-> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities.

History as of a given block is required for:

-- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
-- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
-- Rewinding the subgraph back to that block
+- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history
+- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block
+- Rewinding the Subgraph back to that block

If historical data as of the block has been pruned, the above capabilities will not be available.

> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
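The effect of the three `prune` settings can be sketched in plain TypeScript. This is an illustrative model, not Graph Node's API: `earliestQueryableBlock` and `minRetained` are invented names, with `minRetained` standing in for the indexer-chosen minimum used by `"auto"`.

```typescript
// Illustrative sketch: earliest block whose historical entity state is still
// queryable under each `indexerHints.prune` setting.
type Prune = 'never' | 'auto' | number

function earliestQueryableBlock(head: number, prune: Prune, minRetained: number): number {
  if (prune === 'never') return 0 // full history kept
  if (prune === 'auto') return Math.max(0, head - minRetained) // indexer minimum
  return Math.max(0, head - prune) // keep only the last `prune` blocks
}

console.log(earliestQueryableBlock(1_000_000, 'never', 100)) // 0
console.log(earliestQueryableBlock(1_000_000, 'auto', 100)) // 999900
console.log(earliestQueryableBlock(1_000_000, 5000, 100)) // 995000
```

Time travel queries, grafting, and rewinds targeting a block earlier than this cutoff would fail, which is why history-dependent Subgraphs use a numeric setting or `never`.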
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings:

 To retain a specific amount of historical data:

@@ -532,3 +532,18 @@ To preserve the complete history of entity states:
 indexerHints:
   prune: never
 ```
+
+## SpecVersion Releases
+
+| Version | Release notes |
+| :-----: | ------------- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports the [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports the `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports the fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
diff --git a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
index 52f7cc2134b8..f8733d6ef561 100644
--- a/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/de/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,21 +2,21 @@
 title: Unit Testing Framework
 ---

-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.

 ## Benefits of Using Matchstick

-- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It's written in Rust and optimized for high performance.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.

 ## Getting Started

 ### Install Dependencies

 In order to use the test helper methods and run tests, you need to install the following dependencies:

```sh
yarn add --dev matchstick-as
```

 ### Install PostgreSQL

@@ -47,7 +47,7 @@ Installation command (depends on your distro):

```sh
sudo apt install postgresql
```

 ### Using WSL (Windows Subsystem for Linux)

 You can use Matchstick on WSL both with the Docker approach and with the binary approach. As WSL can be a bit tricky, here are a few tips in case you run into issues like

@@ -61,13 +61,13 @@
 or

```
/node_modules/gluegun/build/index.js:13 throw up;
```

 Please make sure you're on a newer version of Node.js; graph-cli no longer supports **v10.19.0**, and that is still the default version for new Ubuntu images on WSL. For instance, Matchstick is confirmed to work on WSL with **v18.1.0**; you can switch to it either via **nvm** or by updating your global Node.js. Don't forget to delete `node_modules` and run `npm install` again after updating Node.js! Then, make sure you have **libpq** installed; you can do that by running

```
sudo apt-get install libpq-dev
```

 And finally, do not use `graph test` (which uses your global installation of graph-cli and currently appears to be broken on WSL); instead, use `yarn test` or `npm run test` (which uses the local, project-level instance of graph-cli and works like a charm). For that you will of course need a `"test"` script in your `package.json` file, which can be something as simple as

```json
{
@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra

 ### Using Matchstick

-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] <datasource>` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).

 ### CLI Options

@@ -109,7 +109,7 @@
 This will run only that specific test file:

```sh
graph test path/to/file.test.ts
```

 **Options:**

```sh
-c, --coverage                Run the tests in coverage mode
-h, --help                    Show usage information
-l, --logs                    Logs to the console information about the OS, CPU model and download url (debugging purposes)
-r, --recompile               Forces tests to be recompiled
-v, --version <tag>           Choose the version of the rust binary that you want to be downloaded/used
```

 ### Docker

 From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. Alternatively, you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually.
❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI).

@@ -145,17 +145,17 @@ libsFolder: path/to/libs
 manifestPath: path/to/subgraph.yaml

-### Demo-Subgraph
+### Demo Subgraph

 You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)

 ### Video Tutorials

-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)

 ## Test Structure

-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_

 ### describe()

@@ -165,14 +165,14 @@
 - _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_

 Example:

```typescript
import { describe, test } from "matchstick-as/assembly/index"
import { handleNewGravatar } from "../../src/gravity"

describe("handleNewGravatar()", () => {
  test("Should create a new Gravatar entity", () => {
    ...
  })
})
```

@@ -181,18 +181,18 @@ describe("handleNewGravatar()", () => {

 Nested `describe()` example:

```typescript
import { describe, test } from "matchstick-as/assembly/index"
import { handleUpdatedGravatar } from "../../src/gravity"

describe("handleUpdatedGravatar()", () => {
  describe("When entity exists", () => {
    test("updates the entity", () => {
      ...
    })
  })

  describe("When entity does not exists", () => {
    test("it creates a new entity", () => {
      ...
    })
  })
})
```

@@ -205,7 +205,7 @@ describe("handleUpdatedGravatar()", () => {

 `test(name: String, () =>, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently.

 Example:

```typescript
import { describe, test } from 'matchstick-as/assembly/index'
```

@@ -294,7 +294,7 @@
 Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block.

 Example:

 Code inside `afterAll` will execute once after _all_ tests in the file.

@@ -416,7 +416,7 @@ describe('handleUpdatedGravatars', () => {

 ### afterEach()

 Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block.

 Examples:

@@ -652,17 +652,17 @@ test('Next test', () => {
 })
 ```

 That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks. The rest of it is pretty straightforward - here's what happens:

-- We're setting up our initial state and adding one custom Gravatar entity;
-- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function;
-- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events;
-- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that gets added when the handler function is called;
-- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want.
+- We're setting up our initial state and adding one custom Gravatar entity;
+- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function;
+- We're calling our handler methods for those events - `handleNewGravatars()` - and passing in the list of our custom events;
+- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as the two Gravatar entities that get added when the handler function is called;
+- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want.

 There we go - we've created our first test! 👏

-Now in order to run our tests you simply need to run the following in your subgraph root folder:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:

 `graph test Gravity`

@@ -674,7 +674,7 @@ And if all goes well you should be greeted with the following:

 ### Hydrating the store with a certain state

 Users are able to hydrate the store with a known set of entities. Here's an example to initialise the store with a Gravatar entity:

```typescript
let gravatar = new Gravatar('entryId')
@@ -683,7 +683,7 @@
gravatar.save()
```

 ### Calling a mapping function with an event

 A user can create a custom event and pass it to a mapping function that is bound to the store:

```typescript
import { store } from 'matchstick-as/assembly/store'
@@ -728,12 +728,12 @@ import { addMetadata, assert, createMockedFunction, clearStore, test } from 'mat
import { Gravity } from '../../generated/Gravity/Gravity'
import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts'

let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
let expectedResult = Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947')
let bigIntParam = BigInt.fromString('1234')
createMockedFunction(contractAddress, 'gravatarToOwner', 'gravatarToOwner(uint256):(address)')
  .withArgs([ethereum.Value.fromSignedBigInt(bigIntParam)])
  .returns([ethereum.Value.fromAddress(Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947'))])

let gravity = Gravity.bind(contractAddress)
let result = gravity.gravatarToOwner(bigIntParam)

assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result))
```

-As demonstrated, in order to mock a contract call and hardcore a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value.
+As demonstrated, in order to mock a contract call and hardcode a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value.

 Users can also mock function reverts:

@@ -754,19 +754,19 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

 ### Mocking IPFS files (from matchstick 0.4.1)

 Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments: the first one is the IPFS file hash/path and the second one is the path to a local file.
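Conceptually, the mocking helpers above share one idea: register a canned result under a lookup key, and have the runtime return it instead of doing real work. A rough plain-TypeScript sketch of such a registry for contract calls, keyed by address, signature, and arguments (the helper names are hypothetical, not Matchstick's implementation):

```typescript
// Registry of mocked contract calls, keyed by address + signature + encoded args.
const mocks = new Map<string, string[]>();

function mockKey(address: string, signature: string, args: string[]): string {
  // Addresses are lower-cased so lookups are case-insensitive, mirroring how
  // decoded `Address` values compare equal regardless of checksum casing.
  return `${address.toLowerCase()}|${signature}|${args.join(",")}`;
}

// Register a return value, analogous to createMockedFunction(...).withArgs(...).returns(...).
function registerMock(address: string, signature: string, args: string[], returns: string[]): void {
  mocks.set(mockKey(address, signature, args), returns);
}

// Resolve a call against the registered mocks, as a bound contract stub would.
function callContract(address: string, signature: string, args: string[]): string[] {
  const result = mocks.get(mockKey(address, signature, args));
  if (result === undefined) {
    throw new Error(`No mock registered for ${signature}`);
  }
  return result;
}

registerMock(
  "0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7",
  "gravatarToOwner(uint256):(address)",
  ["1234"],
  ["0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947"],
);
// Looking up the same address/signature/args yields the registered owner address.
console.log(callContract("0x89205a3a3b2a69de6dbf7f01ed13b2108b2c43e7", "gravatarToOwner(uint256):(address)", ["1234"]));
```

An unregistered combination throws, which is also how a missing mock surfaces in a real test run: the call fails rather than silently returning a default.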
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for Matchstick to detect it, like the `processGravatar()` function in the test example below:

 `.test.ts` file:

```typescript
import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'

// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'

test('ipfs.cat', () => {
  mockIpfsFile('ipfsCatfileHash', 'tests/ipfs/cat.json')
@@ -798,46 +798,46 @@ test('ipfs.map', () => {

 `utils.ts` file:

```typescript
import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts"
import { Gravatar } from "../../generated/schema"

...

// ipfs.map callback
export function processGravatar(value: JSONValue, userData: Value): void {
  // See the JSONValue documentation for details on dealing
  // with JSON values
  let obj = value.toObject()
  let id = obj.get('id')

  if (!id) {
    return
  }

  // Callbacks can also create entities
  let gravatar = new Gravatar(id.toString())
  gravatar.displayName = userData.toString() + id.toString()
  gravatar.save()
}

// function that calls ipfs.cat
export function gravatarFromIpfs(): void {
  let rawData = ipfs.cat("ipfsCatfileHash")

  if (!rawData) {
    return
  }

  let jsonData = json.fromBytes(rawData as Bytes).toObject()

  let id = jsonData.get('id')
  let url = jsonData.get("imageUrl")

  if (!id || !url) {
    return
  }

  let gravatar = new Gravatar(id.toString())
  gravatar.imageUrl = url.toString()
  gravatar.save()
}
```

@@ -896,7 +896,7 @@ import { logStore } from 'matchstick-as/assembly/store'

logStore()
```

 As of version 0.6.0, `logStore` no longer prints derived fields; instead, users can use the new `logEntity` function. Of course, `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities.
```
import { logEntity } from 'matchstick-as/assembly/store'
@@ -919,30 +919,30 @@ test(
 )
 ```

 If the test is marked with shouldFail = true but DOES NOT fail, that will show up as an error in the logs and the test block will fail. Also, if it's marked with shouldFail = false (the default state), the test executor will crash.

 ### Logging

 Having custom logs in the unit tests is exactly the same as logging in the mappings. The difference is that the log object needs to be imported from matchstick-as rather than graph-ts. Here's a simple example with all non-critical log types:

```typescript
import { test } from "matchstick-as/assembly/index";
import { log } from "matchstick-as/assembly/log";

test("Success", () => {
  log.success("Success!", []);
});
test("Error", () => {
  log.error("Error :( ", []);
});
test("Debug", () => {
  log.debug("Debugging...", []);
});
test("Info", () => {
  log.info("Info!", []);
});
test("Warning", () => {
  log.warning("Warning!", []);
});
```

@@ -954,7 +954,7 @@ test('Blow everything up', () => {
 })
```

-Logging critical errors will stop the execution of the tests and blow everything up. After all - we want to make sure you're code doesn't have critical logs in deployment, and you should notice right away if that were to happen.
+Logging critical errors will stop the execution of the tests and blow everything up. After all - we want to make sure your code doesn't have critical logs in deployment, and you should notice right away if that were to happen.

 ### Testing derived fields

@@ -1044,56 +1044,56 @@ Testing dynamic data sources can be be done by mocking the return value of the `

 Example below:

 First we have the following event handler (which has been intentionally repurposed to showcase datasource mocking):

```typescript
export function handleApproveTokenDestinations(event: ApproveTokenDestinations): void {
  let tokenLockWallet = TokenLockWallet.load(dataSource.address().toHexString())!
  if (dataSource.network() == 'rinkeby') {
    tokenLockWallet.tokenDestinationsApproved = true
  }
  let context = dataSource.context()
  if (context.get('contextVal')!.toI32() > 0) {
    tokenLockWallet.setBigInt('tokensReleased', BigInt.fromI32(context.get('contextVal')!.toI32()))
  }
  tokenLockWallet.save()
}
```

 And then we have the test using one of the methods in the dataSourceMock namespace to set a new return value for all of the dataSource functions:

```typescript
import { assert, test, newMockEvent, dataSourceMock } from 'matchstick-as/assembly/index'
import { BigInt, DataSourceContext, Value } from '@graphprotocol/graph-ts'

import { handleApproveTokenDestinations } from '../../src/token-lock-wallet'
import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet'
import { TokenLockWallet } from '../../generated/schema'

test('Data source simple mocking example', () => {
  let addressString = '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A'
  let address = Address.fromString(addressString)

  let wallet = new TokenLockWallet(address.toHexString())
  wallet.save()
  let context = new DataSourceContext()
  context.set('contextVal', Value.fromI32(325))
  dataSourceMock.setReturnValues(addressString, 'rinkeby', context)
  let event = changetype<ApproveTokenDestinations>(newMockEvent())

  assert.assertTrue(!wallet.tokenDestinationsApproved)

  handleApproveTokenDestinations(event)

  wallet = TokenLockWallet.load(address.toHexString())!
- assert.assertTrue(wallet.tokenDestinationsApproved) + Wallet.load(address.toHexString())! + assert. ssertTrue(wallet.tokenDestinationsApproved) assert.bigIntEquals(wallet.tokensReleased, BigInt.fromI32(325)) dataSourceMock.resetValues() }) ``` -Notice that dataSourceMock.resetValues() is called at the end. That's because the values are remembered when they are changed and need to be reset if you want to go back to the default values. +Beachten Sie, dass dataSourceMock.resetValues() am Ende aufgerufen wird. Das liegt daran, dass die Werte gespeichert werden, wenn sie geändert werden, und dass sie zurückgesetzt werden müssen, wenn Sie zu den Standardwerten zurückkehren möchten. ### Testing dynamic data source creation @@ -1101,57 +1101,57 @@ As of version `0.6.0`, it is possible to test if a new data source has been crea - `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template - `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes +- logDataSources(templateName)\` gibt alle Datenquellen der angegebenen Vorlage zu Debugging-Zwecken auf der Konsole aus - `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes #### Testing `ethereum/contract` templates ```typescript -test('ethereum/contract dataSource creation example', () => { - // Assert there are no dataSources created from GraphTokenLockWallet template - assert.dataSourceCount('GraphTokenLockWallet', 0) +test('ethereum/contract dataSource creation example', () =&gt; { + // Assert, dass keine dataSources aus der GraphTokenLockWallet-Vorlage erstellt wurden + assert. 
dataSourceCount('GraphTokenLockWallet', 0) - // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A - GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) + // Erstellen einer neuen GraphTokenLockWallet-Datenquelle mit der Adresse 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) - // Assert the dataSource has been created - assert.dataSourceCount('GraphTokenLockWallet', 1) + // Assert, dass die Datenquelle erstellt wurde + assert.dataSourceCount('GraphTokenLockWallet', 1) - // Add a second dataSource with context - let context = new DataSourceContext() - context.set('contextVal', Value.fromI32(325)) + // Eine zweite Datenquelle mit Kontext hinzufügen + let context = new DataSourceContext() + context.set('contextVal', Value.fromI32(325)) - GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) + GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) - // Assert there are now 2 dataSources - assert.dataSourceCount('GraphTokenLockWallet', 2) + // Assert, dass es jetzt 2 Datenquellen gibt + assert.dataSourceCount('GraphTokenLockWallet', 2) - // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created - // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists - assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) + // Assert, dass eine Datenquelle mit der Adresse "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" erstellt wurde + // Beachten Sie, dass der Typ `Address` bei der Dekodierung in Kleinbuchstaben umgewandelt wird, so dass Sie die Adresse in Kleinbuchstaben übergeben müssen, wenn Sie 
behaupten, dass sie existiert + assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) - logDataSources('GraphTokenLockWallet') + logDataSources('GraphTokenLockWallet') }) ``` ##### Example `logDataSource` output ```bash -🛠 { +🛠️ { "0xa16081f360e3847006db660bae1c6d1b2e17ec2a": { "kind": "ethereum/contract", "name": "GraphTokenLockWallet", - "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a", + "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a", "context": null }, "0xa16081f360e3847006db660bae1c6d1b2e17ec2b": { "kind": "ethereum/contract", "name": "GraphTokenLockWallet", - "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b", + "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b", "context": { "contextVal": { "type": "Int", - "data": 325 + "data": 325 } } } @@ -1160,45 +1160,45 @@ test('ethereum/contract dataSource creation example', () => { #### Testing `file/ipfs` templates -Similarly to contract dynamic data sources, users can test test file data sources and their handlers +Ähnlich wie bei dynamischen Vertragsdatenquellen können Benutzer auch Dateidatenquellen und deren Handler testen ##### Example `subgraph.yaml` ```yaml ...
-templates: - - kind: file/ipfs - name: GraphTokenLockMetadata - network: mainnet - mapping: - kind: ethereum/events - apiVersion: 0.0.6 - language: wasm/assemblyscript - file: ./src/token-lock-wallet.ts +templates: + - kind: file/ipfs + name: GraphTokenLockMetadata + network: mainnet + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/token-lock-wallet.ts handler: handleMetadata - entities: + entities: - TokenLockMetadata abis: - name: GraphTokenLockWallet - file: ./abis/GraphTokenLockWallet.json + file: ./abis/GraphTokenLockWallet.json ``` ##### Example `schema.graphql` ```graphql """ -Token Lock Wallets which hold locked GRT +Token-Sperr-Wallets, die gesperrte GRT halten """ -type TokenLockMetadata @entity { - "The address of the token lock wallet" +type TokenLockMetadata @entity { + "Die Adresse des Token-Sperr-Wallets" id: ID! - "Start time of the release schedule" + "Startzeit des Release-Zeitplans" startTime: BigInt! - "End time of the release schedule" - endTime: BigInt! - "Number of periods between start time and end time" - periods: BigInt! - "Time when the releases start" + "Endzeit des Release-Zeitplans" + endTime: BigInt! + "Anzahl der Perioden zwischen Startzeit und Endzeit" + periods: BigInt! + "Zeitpunkt, zu dem die Freigaben beginnen" releaseStartTime: BigInt!
} ``` @@ -1218,27 +1218,27 @@ type TokenLockMetadata @entity { ```typescript export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() returns the File DataSource CID - // stringParam() will be mocked in the handler test - // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files - let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) - const value = json.fromBytes(content).toObject() - - if (value) { - const startTime = value.get('startTime') - const endTime = value.get('endTime') - const periods = value.get('periods') - const releaseStartTime = value.get('releaseStartTime') - - if (startTime && endTime && periods && releaseStartTime) { - tokenMetadata.startTime = startTime.toBigInt() - tokenMetadata.endTime = endTime.toBigInt() - tokenMetadata.periods = periods.toBigInt() - tokenMetadata.releaseStartTime = releaseStartTime.toBigInt() - } - - tokenMetadata.save() - } + // dataSource.stringParams() gibt die CID der File DataSource zurück + // stringParam() wird im Handler-Test gemockt + // für weitere Informationen https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files + let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + + if (value) { + const startTime = value.get('startTime') + const endTime = value.get('endTime') + const periods = value.get('periods') + const releaseStartTime = value.get('releaseStartTime') + + if (startTime && endTime && periods && releaseStartTime) { + tokenMetadata.startTime = startTime.toBigInt() + tokenMetadata.endTime = endTime.toBigInt() + tokenMetadata.periods = periods.toBigInt() + tokenMetadata.releaseStartTime = releaseStartTime.toBigInt() + } + + tokenMetadata.save() + } } ``` @@ -1249,57 +1249,57 @@ import { assert, test, dataSourceMock, readFile } from 'matchstick-as' import { Address, BigInt, Bytes, DataSourceContext, ipfs, json, store, Value } from '@graphprotocol/graph-ts' import { handleMetadata } from '../../src/token-lock-wallet' -import { TokenLockMetadata } from '../../generated/schema' +import { TokenLockMetadata } from '../../generated/schema' import { GraphTokenLockMetadata } from '../../generated/templates' -test('file/ipfs dataSource creation example', () => { - // Generate the dataSource CID from the ipfsHash + ipfs path file - // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json - const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' - const CID = `${ipfshash}/example.json` - - // Create a new dataSource using the generated CID - GraphTokenLockMetadata.create(CID) - - // Assert the dataSource has been created - assert.dataSourceCount('GraphTokenLockMetadata', 1) - assert.dataSourceExists('GraphTokenLockMetadata', CID) - logDataSources('GraphTokenLockMetadata') - - // Now we have to mock the dataSource metadata and specifically dataSource.stringParam() - // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as - // First we will reset the values and then use dataSourceMock.setAddress() to set the CID - dataSourceMock.resetValues() - dataSourceMock.setAddress(CID) - - // Now we need to generate the Bytes to pass to the dataSource handler - // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes - const content = readFile(`path/to/metadata.json`) - handleMetadata(content) - - // Now we will test if a TokenLockMetadata was
created - const metadata = TokenLockMetadata.load(CID) - - assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1)) - assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1)) +test('file/ipfs dataSource creation example', () => { + // Generieren Sie die dataSource CID aus der ipfsHash + ipfs Pfaddatei + // Zum Beispiel QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json + const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' + const CID = `${ipfshash}/example.json` + + // Erstellen einer neuen dataSource mit der generierten CID + GraphTokenLockMetadata.create(CID) + + // Assert, dass die dataSource erstellt wurde + assert.dataSourceCount('GraphTokenLockMetadata', 1) + assert.dataSourceExists('GraphTokenLockMetadata', CID) + logDataSources('GraphTokenLockMetadata') + + // Nun müssen wir die dataSource-Metadaten und insbesondere dataSource.stringParam() mocken + // dataSource.stringParams verwendet eigentlich den Wert von dataSource.address(), also werden wir die Adresse mit dataSourceMock von matchstick-as nachbilden + // Zuerst werden wir die Werte zurücksetzen und dann dataSourceMock.setAddress() verwenden, um die CID zu setzen + dataSourceMock.resetValues() + dataSourceMock.setAddress(CID) + + // Nun müssen wir die Bytes generieren, um sie an den dataSource-Handler zu übergeben + // Für diesen Fall haben wir eine neue Funktion readFile eingeführt, die ein lokales json liest und den Inhalt als Bytes zurückgibt + const content = readFile(`path/to/metadata.json`) + handleMetadata(content) + + // Nun testen wir, ob ein TokenLockMetadata erstellt wurde + const metadata = TokenLockMetadata.load(CID) + + assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) + assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1)) + assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1)) + assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1)) }) ``` ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Mit **Matchstick** können Subgraph-Entwickler ein Skript ausführen, das die Testabdeckung der geschriebenen Unit-Tests berechnet. -The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. +Das Testabdeckungswerkzeug nimmt die kompilierten `wasm`-Testbinärdateien und konvertiert sie in `wat`-Dateien, die dann leicht inspiziert werden können, um zu sehen, ob die in `subgraph.yaml` definierten Handler aufgerufen wurden oder nicht. Da die Codeabdeckung (und das Testen als Ganzes) in AssemblyScript und WebAssembly noch in den Kinderschuhen steckt, kann **Matchstick** nicht auf Zweigabdeckung prüfen. Stattdessen verlassen wir uns auf die Annahme, dass, wenn ein bestimmter Handler aufgerufen wurde, das Ereignis/die Funktion dafür korrekt gemockt wurde. -### Prerequisites +### Voraussetzungen -To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: +Um die Testabdeckungsfunktion von **Matchstick** nutzen zu können, müssen Sie einige Dinge vorbereiten: #### Export your handlers -In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**.
So for instance in our example, in our gravity.test.ts file we have the following handler being imported: +Damit **Matchstick** prüfen kann, welche Handler ausgeführt werden, müssen diese Handler aus der **Testdatei** exportiert werden. In unserem Beispiel haben wir also in der Datei gravity.test.ts den folgenden Handler importiert: ```typescript import { handleNewGravatar } from '../../src/gravity' @@ -1311,7 +1311,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### Verwendung Once that's all set up, to run the test coverage tool, simply run: @@ -1328,7 +1328,7 @@ You could also add a custom `coverage` command to your `package.json` file, like }, ``` -That will execute the coverage tool and you should see something like this in the terminal: +Dadurch wird das Coverage-Tool ausgeführt, und Sie sollten in etwa Folgendes im Terminal sehen: ```sh $ graph test -c @@ -1375,9 +1375,9 @@ The log output includes the test run duration. Here's an example: ## Common compiler errors -> Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined +> Kritisch: WasmInstance konnte nicht aus einem gültigen Modul mit Kontext erstellt werden: unknown import: wasi_snapshot_preview1::fd_write wurde nicht definiert -This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) +Dies bedeutet, dass Sie `console.log` in Ihrem Code verwendet haben, was von AssemblyScript nicht unterstützt wird. Bitte verwenden Sie die [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?.
> @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Zusätzliche Ressourcen -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme). ## Feedback diff --git a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx index 7bc4c42301c5..6db33ed6bf1e 100644 --- a/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/de/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,10 +1,11 @@ --- title: Bereitstellen eines Subgraphen in mehreren Netzen +sidebarTitle: Bereitstellung für mehrere Netzwerke --- Auf dieser Seite wird erklärt, wie man einen Subgraphen in mehreren Netzwerken bereitstellt. Um einen Subgraphen bereitzustellen, müssen Sie zunächst die [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) installieren. Wenn Sie noch keinen Subgraphen erstellt haben, lesen Sie [Erstellen eines Subgraphen](/developing/creating-a-subgraph/). -## Breitstellen des Subgraphen in mehreren Netzen +## Bereitstellen des Subgraphen in mehreren Netzwerken In manchen Fällen möchten Sie denselben Subgraph in mehreren Netzen bereitstellen, ohne den gesamten Code zu duplizieren. Die größte Herausforderung dabei ist, dass die Vertragsadressen in diesen Netzen unterschiedlich sind.
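Eine gängige Lösung dafür ist eine Netzwerk-Konfigurationsdatei, die für jedes Zielnetzwerk die Adresse (und optional den Startblock) jeder Datenquelle angibt; neuere Versionen der Graph CLI können eine solche Datei (üblicherweise `networks.json`) beim Build einlesen. Die folgende Skizze ist nur ein Beispiel und keine verbindliche Vorgabe: Der Datenquellenname `Gravity` und die Mainnet-Adresse stammen aus dem Gravatar-Beispiel der Dokumentation, die Sepolia-Adresse ist ein reiner Platzhalter.

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC",
      "startBlock": 6175244
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 0
    }
  }
}
```

Mit einer solchen Datei kann ein Befehl wie `graph build --network sepolia` (sofern Ihre CLI-Version die Option `--network` unterstützt) die Adressen im Manifest vor dem Build austauschen, ohne dass die `subgraph.yaml` dupliziert werden muss.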
diff --git a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx index b559bcdff049..4f784b4304b8 100644 --- a/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/de/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -10,18 +10,18 @@ Erfahren Sie, wie Sie Ihren Subgraphen in Subgraph Studio bereitstellen können. In [Subgraph Studio](https://thegraph.com/studio/) können Sie Folgendes tun: -- Eine Liste der von Ihnen erstellten Subgraphen anzeigen -- Verwalten, Details anzeigen und den Status eines bestimmten Subgraphen visualisieren -- Ihre API-Schlüssel für bestimmte Subgraphen erstellen und verwalten +- Eine Liste der von Ihnen erstellten Subgraphen anzeigen +- Verwalten, Details anzeigen und den Status eines bestimmten Subgraphen visualisieren +- Ihre API-Schlüssel für bestimmte Subgraphen erstellen und verwalten - Ihre API-Schlüssel auf bestimmte Domains einschränken und nur bestimmten Indexern die Abfrage mit diesen Schlüsseln erlauben - Ihren Subgraphen erstellen - Ihren Subgraphen mit The Graph CLI verteilen - Ihren Subgraphen in der „Playground“-Umgebung testen - Ihren Subgraphen in Staging unter Verwendung der Entwicklungsabfrage-URL integrieren -- Ihren Subgraphen auf The Graph Network veröffentlichen -- Ihre Rechnungen verwalten +- Ihren Subgraphen im The Graph Network veröffentlichen +- Ihre Rechnungen verwalten -## Installieren der The Graph-CLI +## Installieren der Graph-CLI Vor der Bereitstellung müssen Sie The Graph CLI installieren.
@@ -57,13 +57,7 @@ npm install -g @graphprotocol/graph-cli ### Kompatibilität von Subgraphen mit dem The Graph Network -Um von Indexern auf The Graph Network unterstützt zu werden, müssen Subgraphen: - -- Ein [unterstütztes Netzwerk](/supported-networks/) indizieren -- Keine der folgenden Funktionen verwenden: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +Um von Indexern auf The Graph Network unterstützt zu werden, müssen Subgraphen ein [unterstütztes Netzwerk](/supported-networks/) indizieren. Eine vollständige Liste der unterstützten und nicht unterstützten Features finden Sie im [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)-Repo. ## Initialisieren Ihres Subgraphen @@ -81,7 +75,7 @@ Nachdem Sie `graph init` ausgeführt haben, werden Sie aufgefordert, die Vertrag ## Graph Auth -Bevor Sie Ihren Subgraphen in Subgraph Studio bereitstellen können, müssen Sie sich bei Ihrem Konto in der CLI anmelden. Dazu benötigen Sie Ihren Bereitstellungsschlüssel, den Sie auf der Seite mit den Details Ihres Subgraphen finden. +Bevor Sie Ihren Subgraphen in Subgraph Studio bereitstellen können, müssen Sie sich bei Ihrem Konto in der CLI anmelden. Dazu benötigen Sie Ihren Deploy-Schlüssel, den Sie auf Ihrer Subgraph-Detailseite finden. Verwenden Sie dann den folgenden Befehl, um sich über die CLI zu authentifizieren: @@ -91,11 +85,11 @@ graph auth ## Bereitstellen eines Subgraphen -Sobald Sie fertig sind, können Sie Ihren Subgraphen an Subgraph Studio übergeben. +Sobald Sie bereit sind, können Sie Ihren Subgraphen in Subgraph Studio bereitstellen.
+> Wenn Sie einen Subgraphen mit der Befehlszeilenschnittstelle bereitstellen, wird er in das Studio übertragen, wo Sie ihn testen und die Metadaten aktualisieren können. Durch diese Aktion wird Ihr Subgraph nicht im dezentralen Netzwerk veröffentlicht. -Verwenden Sie den folgenden CLI-Befehl, um Ihren Subgraphen bereitzustellen: +Verwenden Sie den folgenden CLI-Befehl, um Ihren Subgraphen bereitzustellen: ```bash graph deploy @@ -108,13 +102,13 @@ Nach der Ausführung dieses Befehls wird die CLI nach einer Versionsbezeichnung ## Testen Ihres Subgraphen -Nach der Bereitstellung können Sie Ihren Subgraphen testen (entweder in Subgraph Studio oder in Ihrer eigenen Anwendung, mit der Bereitstellungsabfrage-URL), eine weitere Version bereitstellen, die Metadaten aktualisieren und im [Graph Explorer](https://thegraph.com/explorer) veröffentlichen, wenn Sie bereit sind. +Nach dem Deployment können Sie Ihren Subgraphen testen (entweder in Subgraph Studio oder in Ihrer eigenen Anwendung, mit der Deployment-Query-URL), eine weitere Version deployen, die Metadaten aktualisieren und im [Graph Explorer](https://thegraph.com/explorer) veröffentlichen, wenn Sie bereit sind. Verwenden Sie Subgraph Studio, um die Protokolle auf dem Dashboard zu überprüfen und nach Fehlern in Ihrem Subgraphen zu suchen. -## Veröffentlichung Ihres Subgraphen +## Veröffentlichen Ihres Subgraphen -Um Ihren Subgraphen erfolgreich zu veröffentlichen, lesen Sie [Veröffentlichen eines Subgraphen](/subgraphs/developing/publishing/publishing-a-subgraph/). +Um Ihren Subgraphen erfolgreich zu veröffentlichen, lesen Sie bitte [Einen Subgraphen veröffentlichen](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versionierung Ihres Subgraphen mit der CLI @@ -122,15 +116,15 @@ Wenn Sie Ihren Subgraphen aktualisieren möchten, können Sie wie folgt vorgehen - Sie können eine neue Version über die Befehlszeilenschnittstelle (CLI) in Studio bereitstellen (zu diesem Zeitpunkt ist sie nur privat).
- Wenn Sie damit zufrieden sind, können Sie Ihre neue Bereitstellung im [Graph Explorer] (https://thegraph.com/explorer) veröffentlichen. -- Mit dieser Aktion wird eine neue Version Ihres Subgraphen erstellt, die von Kuratoren mit Signalen versehen und von Indexern indiziert werden kann. +- Mit dieser Aktion wird eine neue Version Ihres Subgraphen erstellt, die von Kuratoren mit Signalen versehen und von Indexierern indiziert werden kann. -Sie können auch die Metadaten Ihres Subgraphen aktualisieren, ohne eine neue Version zu veröffentlichen. Sie können Ihre Subgraph-Details in Studio (unter dem Profilbild, dem Namen, der Beschreibung usw.) aktualisieren, indem Sie eine Option namens **Details aktualisieren** im [Graph Explorer] (https://thegraph.com/explorer) aktivieren. Wenn diese Option aktiviert ist, wird eine Onchain-Transaktion generiert, die die Subgraph-Details im Explorer aktualisiert, ohne dass eine neue Version mit einer neuen Bereitstellung veröffentlicht werden muss. +Sie können die Metadaten Ihres Subgraphen auch aktualisieren, ohne eine neue Version zu veröffentlichen. Sie können die Details Ihres Subgraphen in Studio (unter dem Profilbild, dem Namen, der Beschreibung usw.) aktualisieren, indem Sie eine Option namens **Details aktualisieren** im [Graph Explorer](https://thegraph.com/explorer) aktivieren. Wenn diese Option aktiviert ist, wird eine Onchain-Transaktion generiert, die die Subgraph-Details im Explorer aktualisiert, ohne dass Sie eine neue Version mit einem neuen Deployment veröffentlichen müssen. -> Hinweis: Die Veröffentlichung einer neuen Version eines Subgraphen im Netz ist mit Kosten verbunden. Zusätzlich zu den Transaktionsgebühren müssen Sie auch einen Teil der Kurationssteuer für das Auto-Migrations-Signal finanzieren. Sie können keine neue Version Ihres Subgraphen veröffentlichen, wenn Kuratoren nicht darauf signalisiert haben. Für weitere Informationen, lesen Sie bitte [hier](/resources/roles/curating/).
+> Hinweis: Die Veröffentlichung einer neuen Version eines Subgraphen im Netz ist mit Kosten verbunden. Zusätzlich zu den Transaktionsgebühren müssen Sie auch einen Teil der Kurationssteuer für das Auto-Migrations-Signal finanzieren. Sie können keine neue Version Ihres Subgraphen veröffentlichen, wenn Kuratoren nicht darauf signalisiert haben. Für weitere Informationen lesen Sie bitte [hier](/resources/roles/curating/). ## Automatische Archivierung von Subgraph-Versionen -Immer wenn Sie eine neue Subgraph-Version in Subgraph Studio bereitstellen, wird die vorherige Version archiviert. Archivierte Versionen werden nicht indiziert/synchronisiert und können daher nicht abgefragt werden. Sie können die Archivierung einer archivierten Version Ihres Subgraphen in Subgraph Studio dearchivieren. +Immer wenn Sie eine neue Subgraph-Version in Subgraph Studio bereitstellen, wird die vorherige Version archiviert. Archivierte Versionen werden nicht indiziert/synchronisiert und können daher nicht abgefragt werden. Sie können eine archivierte Version Ihres Subgraphen in Subgraph Studio dearchivieren. > Hinweis: Frühere Versionen von nicht veröffentlichten Subgraphen, die in Studio bereitgestellt wurden, werden automatisch archiviert. diff --git a/website/src/pages/de/subgraphs/developing/developer-faq.mdx b/website/src/pages/de/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..1584166374a4 100644 --- a/website/src/pages/de/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/de/subgraphs/developing/developer-faq.mdx @@ -1,95 +1,95 @@ --- -title: Developer FAQ +title: Entwickler-FAQ sidebarTitle: FAQ --- -This page summarizes some of the most common questions for developers building on The Graph. +Diese Seite fasst einige der häufigsten Fragen für Entwickler zusammen, die auf The Graph aufbauen. ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data.
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? -Not currently, as mappings are written in AssemblyScript. +Gegenwärtig nicht, da Mappings in AssemblyScript geschrieben werden. -One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Eine mögliche alternative Lösung hierzu ist die Speicherung von Rohdaten in Entitäten und die Durchführung von Logik, die JS-Bibliotheken auf dem Client erfordert. -### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 9. Ist es möglich, beim Abhören mehrerer Verträge die Reihenfolge der zu hörenden Ereignisse zu wählen? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 10. How are templates different from data sources? +### 10. Wie unterscheiden sich Vorlagen von Datenquellen? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. 
Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. You can also use `graph add` command to add a new dataSource. -### 12. In what order are the event, block, and call handlers triggered for a data source? +### 12. In welcher Reihenfolge werden die Ereignis-, Block- und Aufrufhandler für eine Datenquelle ausgelöst? -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +Ereignis- und Aufruf-Handler sind innerhalb des Blocks zunächst nach dem Index der Transaktion geordnet. Ereignis- und Aufruf-Handler innerhalb derselben Transaktion werden nach einer Konvention geordnet: zuerst Ereignis-Handler, dann Aufruf-Handler, wobei jeder Typ die Reihenfolge einhält, in der sie im Manifest definiert sind. 
Block-Handler werden nach Ereignis- und Aufruf-Handlern ausgeführt, und zwar in der Reihenfolge, in der sie im Manifest definiert sind. Auch diese Ordnungsregeln können sich ändern. -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +Wenn neue dynamische Datenquellen erstellt werden, beginnen die für dynamische Datenquellen definierten Handler erst mit der Verarbeitung, nachdem alle vorhandenen Datenquellen-Handler verarbeitet wurden, und wiederholen sich in der gleichen Reihenfolge, wenn sie ausgelöst werden. -### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 13. Wie stelle ich sicher, dass ich die neueste Version von graph-node für meine lokalen Bereitstellungen verwende? -You can run the following command: +Sie können den folgenden Befehl ausführen: ```sh docker pull graphprotocol/graph-node:latest ``` -> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. +> Hinweis: docker / docker-compose verwendet immer die Version von graph-node, die beim ersten Start geladen wurde. Stellen Sie also sicher, dass Sie die neueste Version von graph-node verwenden. -### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. Was ist der empfohlene Weg, um „automatisch generierte“ IDs für eine Entität zu erstellen, wenn Ereignisse behandelt werden? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15.
Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. -## Network Related +## Netzwerkspezifisch -### 16. What networks are supported by The Graph? +### 16. Welche Netze werden von The Graph unterstützt? You can find the list of the supported networks [here](/supported-networks/). -### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? +### 17. Ist es möglich, innerhalb von Event-Handlern zwischen Netzen (Mainnet, Sepolia, Local) zu unterscheiden? Yes. You can do this by importing `graph-ts` as per the example below: @@ -100,31 +100,31 @@ dataSource.network() dataSource.address() ``` -### 18. Do you support block and call handlers on Sepolia? +### 18. Unterstützen Sie Block- und Call-Handler auf Sepolia? -Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. +Ja, Sepolia unterstützt Block-Handler, Call-Handler und Event-Handler. Es ist anzumerken, dass Ereignis-Handler weitaus leistungsfähiger sind als die beiden anderen Handler und in jedem EVM-kompatiblen Netzwerk unterstützt werden. ## Indexing & Querying Related -### 19. Is it possible to specify what block to start indexing on? +### 19. Ist es möglich festzulegen, bei welchem Block die Indizierung beginnen soll? Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. 
What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? -Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: +Ja! Probieren Sie den folgenden Befehl aus und ersetzen Sie „organization/subgraphName“ durch die Organisation, unter der er veröffentlicht ist, und den Namen Ihres Subgrafen: ```sh curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -### 22. Is there a limit to how many objects The Graph can return per query? +### 22. Gibt es eine Grenze für die Anzahl der Objekte, die The Graph pro Abfrage zurückgeben kann? -By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: +Standardmäßig sind die Abfrageantworten auf 100 Elemente pro Sammlung beschränkt. Wenn Sie mehr erhalten möchten, können Sie bis zu 1000 Elemente pro Sammlung erhalten und darüber hinaus können Sie wie folgt paginieren: ```graphql someCollection(first: 1000, skip: ) { ... } @@ -132,15 +132,15 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly?
What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## Miscellaneous +## Sonstiges -### 24. Is it possible to use Apollo Federation on top of graph-node? +### 24. Ist es möglich, Apollo Federation zusätzlich zu graph-node zu verwenden? -Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. +Federation wird noch nicht unterstützt. Zurzeit können Sie Schema-Stitching verwenden, entweder auf dem Client oder über einen Proxy-Dienst. -### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +### 25. Ich möchte einen Beitrag leisten oder ein GitHub-Issue erstellen. Wo kann ich die Open-Source-Repositories finden?
- [graph-node](https://github.com/graphprotocol/graph-node) - [graph-tooling](https://github.com/graphprotocol/graph-tooling) diff --git a/website/src/pages/de/subgraphs/developing/introduction.mdx b/website/src/pages/de/subgraphs/developing/introduction.mdx index fd2872880ce0..6ea77e4cf497 100644 --- a/website/src/pages/de/subgraphs/developing/introduction.mdx +++ b/website/src/pages/de/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. -### What is GraphQL? +### Was ist GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. 
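To make the querying step above concrete, here is a minimal sketch of a GraphQL request against a Subgraph. The `tokens` entity and its fields are hypothetical placeholders; substitute the entity and field names defined in the target Subgraph's actual schema:

```graphql
{
  # First five Token entities, ordered by id (entity and field names are illustrative)
  tokens(first: 5, orderBy: id) {
    id
    owner
  }
}
```

The response mirrors the shape of the query, returning only the requested fields, which is what makes GraphQL convenient for dapp frontends.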
-### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx index 91c22f7c44ba..e01d84c31aee 100644 --- a/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Schritt für Schritt -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. 
Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx index d6837fbade98..a4cbb348e418 100644 --- a/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs, die im dezentralen Netzwerk veröffentlicht werden, haben eine NFT, die auf die Adresse geprägt wird, die den Subgraph veröffentlicht hat. Die NFT basiert auf dem Standard ERC721, der Überweisungen zwischen Konten im The Graph Network erleichtert. +Die im dezentralen Netzwerk veröffentlichten Subgraphen haben eine NFT, die auf die Adresse geprägt ist, die den Subgraphen veröffentlicht hat. Die NFT basiert auf einem ERC721-Standard, der Überweisungen zwischen Konten im The Graph Network erleichtert. ## Erinnerungshilfen -- Wer im Besitz der NFT ist, kontrolliert den Subgraph. -- Wenn der Eigentümer beschließt, das NFT zu verkaufen oder zu übertragen, kann er diesen Subgraph im Netz nicht mehr bearbeiten oder aktualisieren. -- Sie können die Kontrolle über einen Subgraph leicht an eine Multisig übertragen. -- Ein Community-Mitglied kann einen Subgraph im Namen einer DAO erstellen. +- Whoever owns the NFT controls the Subgraph. +- Wenn der Eigentümer beschließt, das NFT zu verkaufen oder zu übertragen, kann er diesen Subgraphen im Netz nicht mehr bearbeiten oder aktualisieren. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. 
## Betrachten Sie Ihren Subgraph als NFT -Um Ihren Subgraph als NFT zu betrachten, können Sie einen NFT-Marktplatz wie **OpenSea** besuchen: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,15 +27,15 @@ https://rainbow.me/your-wallet-addres ## Schritt für Schritt -Um das Eigentum an einem Subgraph zu übertragen, gehen Sie wie folgt vor: +To transfer ownership of a Subgraph, do the following: 1. Verwenden Sie die in Subgraph Studio integrierte Benutzeroberfläche: ![Subgraph-Besitzübertragung](/img/subgraph-ownership-transfer-1.png) -2. Wählen Sie die Adresse, an die Sie den Subgraph übertragen möchten: +2. Choose the address that you would like to transfer the Subgraph to: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + ![Subgraph-Eigentumsübertragung](/img/subgraph-ownership-transfer-2.png) Optional können Sie auch die integrierte Benutzeroberfläche von NFT-Marktplätzen wie OpenSea verwenden: diff --git a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 129d063a2e95..2fa5e3654038 100644 --- a/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/de/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Veröffentlichung eines Subgraphen im dezentralen Netzwerk +sidebarTitle: Veröffentlichung im dezentralen Netzwerk --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. 
-Wenn Sie einen Subgraphen im dezentralen Netzwerk veröffentlichen, stellen Sie ihn für andere zur Verfügung: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,23 +18,23 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Alle veröffentlichten Versionen eines bestehenden Subgraphen können: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aktualisierung der Metadaten für einen veröffentlichten Subgraphen +### Updating metadata for a published Subgraph -- Nachdem Sie Ihren Subgraphen im dezentralen Netzwerk veröffentlicht haben, können Sie die Metadaten jederzeit in Subgraph Studio aktualisieren. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Sobald Sie Ihre Änderungen gespeichert und die Aktualisierungen veröffentlicht haben, werden sie im Graph Explorer angezeigt. - Es ist wichtig zu beachten, dass bei diesem Vorgang keine neue Version erstellt wird, da sich Ihre Bereitstellung nicht geändert hat. 
## Veröffentlichen über die CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Öffnen Sie den `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. @@ -43,7 +44,7 @@ As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`]( ### Anpassen Ihrer Bereitstellung -Sie können Ihre Subgraph-Erstellung auf einen bestimmten IPFS-Knoten hochladen und Ihre Bereitstellung mit den folgenden Flags weiter anpassen: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -63,31 +64,31 @@ FLAGS ## Hinzufügen von Signalen zu Ihrem Subgraphen -Entwickler können ihren Subgraphen ein GRT-Signal hinzufügen, um Indexer zur Abfrage des Subgraphen zu veranlassen. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- Wenn ein Subgraph für Indexing Rewards in Frage kommt, erhalten Indexer, die einen „Beweis für die Indizierung“ erbringen, einen GRT Reward, der sich nach der Menge der signalisierten GRT richtet. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). 
-> Das Hinzufügen von Signalen zu einem Subgraphen, der nicht für Rewards in Frage kommt, zieht keine weiteren Indexer an. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > > Wenn Ihr Subgraph für Rewards in Frage kommt, wird empfohlen, dass Sie Ihren eigenen Subgraphen mit mindestens 3.000 GRT kuratieren, um zusätzliche Indexer für die Indizierung Ihres Subgraphen zu gewinnen. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. Bei der Signalisierung können Kuratoren entscheiden, ob sie für eine bestimmte Version des Subgraphen signalisieren wollen oder ob sie die automatische Migration verwenden wollen. Bei der automatischen Migration werden die Freigaben eines Kurators immer auf die neueste vom Entwickler veröffentlichte Version aktualisiert. Wenn sie sich stattdessen für eine bestimmte Version entscheiden, bleiben die Freigaben immer auf dieser spezifischen Version. -Indexer können Subgraphen für die Indizierung auf der Grundlage von Kurationssignalen finden, die sie im Graph Explorer sehen. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. 
![Explorer-Subgrafen](/img/explorer-subgraphs.png) -Mit Subgraph Studio können Sie Ihrem Subgraphen ein Signal hinzufügen, indem Sie GRT in der gleichen Transaktion, in der es veröffentlicht wird, zum Kurationspool Ihres Subgraphen hinzufügen. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternativ können Sie ein GRT-Signal zu einem veröffentlichten Subgraphen aus dem Graph Explorer hinzufügen. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal provenant de l'Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/de/subgraphs/developing/subgraphs.mdx b/website/src/pages/de/subgraphs/developing/subgraphs.mdx index 9e5dc5f613a6..1ac536f54378 100644 --- a/website/src/pages/de/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/de/subgraphs/developing/subgraphs.mdx @@ -4,13 +4,13 @@ title: Subgraphs ## Was ist ein Subgraph? -Ein Subgraph ist eine benutzerdefinierte, offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph-Fähigkeiten - **Zugangsdaten:** Subgraphs ermöglichen die Abfrage und Indizierung von Blockchain-Daten für web3. -- **Build:** Entwickler können Subgraphs für The Graph Network erstellen, bereitstellen und veröffentlichen. Um loszulegen, schauen Sie sich den Subgraph Entwickler [Quick Start](quick-start/) an. -- **Index & Abfrage:** Sobald ein Subgraph indiziert ist, kann jeder ihn abfragen. Alle im Netzwerk veröffentlichten Subgraphen können im [Graph Explorer] (https://thegraph.com/explorer) untersucht und abgefragt werden. 
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Innerhalb eines Subgraph @@ -24,63 +24,63 @@ Die **Subgraph-Definition** besteht aus den folgenden Dateien: - mapping.ts\`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) Code, der die Ereignisdaten in die in Ihrem Schema definierten Entitäten übersetzt -Um mehr über die einzelnen Komponenten eines Subgraphs zu erfahren, lesen Sie bitte [Erstellen eines Subgraphs](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lebenszyklus -Hier ist ein allgemeiner Überblick über den Lebenszyklus eines Subgraphs: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Entwicklung -1. [Einen Subgraph erstellen](/entwickeln/einen-subgraph-erstellen/) -2. [Einen Subgraph bereitstellen](/deploying/deploying-a-subgraph-to-studio/) -3. [Testen eines Subgraphen](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. 
[Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. 
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. 
It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. 
Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/de/subgraphs/explorer.mdx b/website/src/pages/de/subgraphs/explorer.mdx index 3cc0e39ef659..0c3619b3f740 100644 --- a/website/src/pages/de/subgraphs/explorer.mdx +++ b/website/src/pages/de/subgraphs/explorer.mdx @@ -2,255 +2,255 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Erschließen Sie die Welt der Subgraphen und Netzwerkdaten mit [Graph Explorer](https://thegraph.com/explorer). ## Überblick -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer besteht aus mehreren Teilen, in denen Sie mit [Subgraphen](https://thegraph.com/explorer?chain=arbitrum-one) interagieren, [delegieren](https://thegraph.com/explorer/delegate?chain=arbitrum-one), [Teilnehmer](https://thegraph.com/explorer/participants?chain=arbitrum-one) einbeziehen, [Netzwerkinformationen](https://thegraph.com/explorer/network?chain=arbitrum-one) anzeigen und auf Ihr Benutzerprofil zugreifen können. ## Inside Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide).
+Nachfolgend finden Sie eine Übersicht über die wichtigsten Funktionen von Graph Explorer. Für zusätzliche Unterstützung können Sie sich den [Graph Explorer Video Guide](/subgraphs/explorer/#video-guide) ansehen. -### Subgraphs Page +### Subgraphen-Seite -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +Nachdem Sie Ihren Subgraph in Subgraph Studio bereitgestellt und veröffentlicht haben, gehen Sie zu [Graph Explorer](https://thegraph.com/explorer) und klicken Sie auf den Link „[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)“ in der Navigationsleiste, um auf Folgendes zuzugreifen: -- Your own finished subgraphs -- Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- Ihre eigenen fertigen Subgraphen +- Von anderen veröffentlichte Subgraphen +- Den genauen Subgraphen, den Sie wünschen (basierend auf dem Erstellungsdatum, der Signalmenge oder dem Namen). -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Explorer Bild 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +Wenn Sie in einen Subgraphen klicken, können Sie Folgendes tun: -- Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Testen Sie Abfragen auf dem Playground und nutzen Sie Netzwerkdetails, um fundierte Entscheidungen zu treffen. +- Signalisieren Sie GRT auf Ihrem eigenen Subgraphen oder den Subgraphen anderer, um die Indexierer auf seine Bedeutung und Qualität aufmerksam zu machen.
-  - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+  - Dies ist von entscheidender Bedeutung, da die Signalisierung eines Subgraphen einen Anreiz darstellt, ihn zu indizieren, was bedeutet, dass er schließlich im Netzwerk auftaucht, um Abfragen zu bedienen.
-![Explorer Image 2](/img/Subgraph-Details.png)
+![Explorer Bild 2](/img/Subgraph-Details.png)
-On each subgraph’s dedicated page, you can do the following:
+Auf der speziellen Seite jedes Subgraphen können Sie Folgendes tun:
-- Signal/Un-signal on subgraphs
-- View more details such as charts, current deployment ID, and other metadata
-- Switch versions to explore past iterations of the subgraph
-- Query subgraphs via GraphQL
-- Test subgraphs in the playground
-- View the Indexers that are indexing on a certain subgraph
-- Subgraph stats (allocations, Curators, etc)
-- View the entity who published the subgraph
+- Signal/Un-Signal auf Subgraphen
+- Weitere Details wie Diagramme, aktuelle Bereitstellungs-ID und andere Metadaten anzeigen
+- Versionen wechseln, um frühere Iterationen des Subgraphen zu erkunden
+- Abfrage von Subgraphen über GraphQL
+- Subgraphen im Playground testen
+- Anzeigen der Indexierer, die auf einem bestimmten Subgraphen indexieren
+- Subgraphen-Statistiken (Zuweisungen, Kuratoren, etc.)
+- Anzeigen der Entität, die den Subgraphen veröffentlicht hat
-![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)
+![Explorer Bild 3](/img/Explorer-Signal-Unsignal.png)
-### Delegate Page
+### Delegierten-Seite
-On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer.
+Auf der [Delegierten-Seite](https://thegraph.com/explorer/delegate?chain=arbitrum-one) finden Sie Informationen zum Delegieren, zum Erwerb von GRT und zur Auswahl eines Indexierers.
-On this page, you can see the following:
+Auf dieser Seite können Sie Folgendes sehen:
-- Indexers who collected the most query fees
-- Indexers with the highest estimated APR
+- Indexierer, die die meisten Abfragegebühren erhoben haben
+- Indexierer mit dem höchsten geschätzten effektiven Jahreszins
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Darüber hinaus können Sie Ihren ROI berechnen und die besten Indexierer nach Name, Adresse oder Subgraph suchen.
-### Participants Page
+### Teilnehmer-Seite
-This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
+Diese Seite bietet einen Überblick über alle „Teilnehmer“, d. h. alle am Netzwerk beteiligten Personen wie Indexierer, Delegatoren und Kuratoren.
-#### 1. Indexers
+#### 1. Indexierer
-![Explorer Image 4](/img/Indexer-Pane.png)
+![Explorer Bild 4](/img/Indexer-Pane.png)
-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexierer sind das Rückgrat des Protokolls. Sie staken auf Subgraphen, indizieren sie und stellen allen, die Subgraphen konsumieren, Abfragen zur Verfügung.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In der Indexierer-Tabelle können Sie die Delegationsparameter eines Indexierers, seinen Einsatz, die Höhe seines Einsatzes für jeden Subgraphen und die Höhe seiner Einnahmen aus Abfragegebühren und Indexierungsprämien sehen.
-**Specifics**
+**Besonderheiten**
-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators.
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool.
If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards.
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
-- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
-- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
-- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
-- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
-- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
-- Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance.
+- Abfragegebührenkürzung - der Prozentsatz der Abfragegebührenrabatte, den der Indexierer bei der Aufteilung mit Delegatoren behält.
+- Effektiver Reward Cut - der auf den Delegationspool angewandte Indexierungs-Reward Cut. Ist er negativ, bedeutet dies, dass der Indexierer einen Teil seiner Rewards abgibt. Ist er positiv, bedeutet dies, dass der Indexierer einen Teil seiner Rewards behält.
+- Verbleibende Abklingzeit - die verbleibende Zeit, bis der Indexierer die oben genannten Delegationsparameter ändern kann. Abklingzeiten werden von Indexierern festgelegt, wenn sie ihre Delegationsparameter aktualisieren.
+- Eigenkapital - Dies ist der hinterlegte Einsatz des Indexierers, der bei bösartigem oder falschem Verhalten gekürzt werden kann.
+- Delegiert - Einsätze von Delegatoren, die vom Indexierer zugewiesen werden können, aber nicht gekürzt werden können.
+- Zugewiesen - Einsatz, den Indexierer aktiv den Subgraphen zuweisen, die sie indizieren.
+- Verfügbare Delegationskapazität - die Menge der delegierten Anteile, die die Indexierer noch erhalten können, bevor sie überdelegiert werden.
+- Maximale Delegationskapazität - der maximale Betrag an delegiertem Einsatz, den der Indexierer produktiv akzeptieren kann. Ein überschüssiger delegierter Einsatz kann nicht für Zuteilungen oder Belohnungsberechnungen verwendet werden.
+- Abfragegebühren - dies ist die Gesamtsumme der Gebühren, die Endnutzer über die gesamte Zeit für Abfragen von einem Indexierer bezahlt haben.
+- Indexierer Rewards - dies ist die Gesamtsumme der Indexierer Rewards, die der Indexierer und seine Delegatoren über die gesamte Zeit verdient haben. Indexierer Rewards werden durch die Ausgabe von GRT ausgezahlt.
-Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
+Indexierer können sowohl Abfragegebühren als auch Indexierungsprämien verdienen. Funktionell geschieht dies, wenn Netzwerkteilnehmer GRT an einen Indexierer delegieren. Dadurch können Indexierer je nach ihren Indexierer-Parametern Abfragegebühren und Belohnungen erhalten.
-- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button.
+- Indizierungsparameter können durch Klicken auf die rechte Seite der Tabelle oder durch Aufrufen des Profils eines Indexierers und Klicken auf die Schaltfläche „Delegieren“ festgelegt werden.
-To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
+Um mehr darüber zu erfahren, wie man ein Indexierer wird, können Sie einen Blick auf die [offizielle Dokumentation](/indexing/overview/) oder [The Graph Academy Indexer guides](https://thegraph.academy/delegators/choosing-indexers/) werfen.
-![Indexing details pane](/img/Indexing-Details-Pane.png)
+![Indizierungs-Detailfenster](/img/Indexing-Details-Pane.png)
-#### 2. Curators
+#### 2. Kuratoren
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Kuratoren analysieren Subgraphen, um festzustellen, welche Subgraphen von höchster Qualität sind. Sobald ein Kurator einen potenziell hochwertigen Subgraphen gefunden hat, kann er ihn kuratieren, indem er auf dessen Bindungskurve signalisiert. Auf diese Weise teilen die Kuratoren den Indexierern mit, welche Subgraphen von hoher Qualität sind und indiziert werden sollten.
-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
-  - The bonding curve incentivizes Curators to curate the highest quality data sources.
+- Kuratoren können Community-Mitglieder, Datenkonsumenten oder sogar Subgraph-Entwickler sein, die auf ihre eigenen Subgraphen signalisieren, indem sie GRT-Token in eine Bindungskurve einzahlen.
+  - Durch die Hinterlegung von GRT prägen Kuratoren Kurationsanteile an einem Subgraphen.
Dadurch können sie einen Teil der Abfragegebühren verdienen, die von dem Subgraphen generiert werden, auf den sie signalisiert haben.
+  - Die Bindungskurve bietet den Kuratoren einen Anreiz, die hochwertigsten Datenquellen zu kuratieren.
-In the The Curator table listed below you can see:
+In der unten aufgeführten Kuratoren-Tabelle können Sie sehen:
-- The date the Curator started curating
-- The number of GRT that was deposited
-- The number of shares a Curator owns
+- Das Datum, an dem der Kurator mit der Kuratierung begonnen hat
+- Die Anzahl der hinterlegten GRT
+- Die Anzahl der Anteile, die ein Kurator besitzt
-![Explorer Image 6](/img/Curation-Overview.png)
+![Explorer Bild 6](/img/Curation-Overview.png)
-If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/).
+Wenn Sie mehr über die Rolle des Kurators erfahren möchten, besuchen Sie die [offizielle Dokumentation](/resources/roles/curating/) oder [The Graph Academy](https://thegraph.academy/curators/).
-#### 3. Delegators
+#### 3. Delegatoren
-Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.
+Delegatoren spielen eine Schlüsselrolle bei der Aufrechterhaltung der Sicherheit und Dezentralisierung des Graph Network. Sie beteiligen sich am Netzwerk, indem sie GRT-Token an einen oder mehrere Indexierer delegieren (d.h. „staken“).
-- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
-- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
-- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/).
+- Ohne Delegatoren ist es für die Indexierer unwahrscheinlicher, signifikante Prämien und Gebühren zu verdienen. Daher locken Indexierer Delegatoren an, indem sie ihnen einen Teil ihrer Indexierungsprämien und Abfragegebühren anbieten.
+- Die Delegatoren wählen die Indexierer auf der Grundlage einer Reihe von Variablen aus, wie z. B. frühere Leistungen, Indexierungsvergütungssätze und Abfragegebührenkürzungen.
+- Die Reputation innerhalb der Community kann bei der Auswahl ebenfalls eine Rolle spielen. Es wird empfohlen, mit den ausgewählten Indexierern über [The Graph's Discord](https://discord.gg/graphprotocol) oder [The Graph Forum](https://forum.thegraph.com/) in Kontakt zu treten.
-![Explorer Image 7](/img/Delegation-Overview.png)
+![Explorer Bild 7](/img/Delegation-Overview.png)
-In the Delegators table you can see the active Delegators in the community and important metrics:
+In der Tabelle „Delegatoren“ können Sie die aktiven Delegatoren in der Community und wichtige Metriken einsehen:
-- The number of Indexers a Delegator is delegating towards
-- A Delegator's original delegation
-- The rewards they have accumulated but have not withdrawn from the protocol
-- The realized rewards they withdrew from the protocol
-- Total amount of GRT they have currently in the protocol
-- The date they last delegated
+- Die Anzahl der Indexierer, an die ein Delegator delegiert
+- Die ursprüngliche Delegation eines Delegators
+- Die Belohnungen, die sie angesammelt, aber nicht aus dem Protokoll entnommen haben
+- Die realisierten Belohnungen, die sie aus dem Protokoll abgehoben haben
+- Gesamtmenge an GRT, die sie derzeit im Protokoll haben
+- Das Datum der letzten Delegation
If you want to learn more about how to become a
Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). -### Network Page +### Netzwerk-Seite -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Auf dieser Seite können Sie globale KPIs sehen und haben die Möglichkeit, auf eine Epochenbasis zu wechseln und die Netzwerkmetriken detaillierter zu analysieren. Diese Details geben Ihnen ein Gefühl dafür, wie sich das Netzwerk im Laufe der Zeit entwickelt. #### Überblick -The overview section has both all the current network metrics and some cumulative metrics over time: +Der Übersichtsabschnitt enthält sowohl alle aktuellen Netzwerkmetriken als auch einige kumulative Metriken im Zeitverlauf: -- The current total network stake -- The stake split between the Indexers and their Delegators -- Total supply, minted, and burned GRT since the network inception -- Total Indexing rewards since the inception of the protocol -- Protocol parameters such as curation reward, inflation rate, and more -- Current epoch rewards and fees +- Die aktuelle Gesamtbeteiligung am Netz +- Die Aufteilung der Anteile zwischen den Indexierern und ihren Delegatoren +- Gesamtangebot, geprägte und verbrannte GRT seit Gründung des Netzes +- Gesamtindexierungsgewinne seit Einführung des Protokolls +- Protokollparameter wie Kurationsbelohnung, Inflationsrate und mehr +- Aktuelle Epochenprämien und Gebühren -A few key details to note: +Ein paar wichtige Details sind zu beachten: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. 
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Die Abfragegebühren stellen die von den Verbrauchern generierten Gebühren dar.** Sie können von den Indexierern nach einem Zeitraum von mindestens 7 Epochen (siehe unten) eingefordert werden (oder auch nicht), nachdem ihre Zuweisungen zu den Subgraphen abgeschlossen wurden und die von ihnen gelieferten Daten von den Verbrauchern validiert wurden.
+- **Die Indizierungs-Belohnungen stellen die Höhe der Belohnungen dar, die die Indexierer während der Epoche von der Netzwerkausgabe beansprucht haben.** Obwohl die Protokollausgabe festgelegt ist, werden die Belohnungen erst geprägt, wenn die Indexierer ihre Zuweisungen zu den Subgraphen schließen, die sie indiziert haben. Daher variiert die Anzahl der Rewards pro Epoche (d. h. während einiger Epochen könnten Indexierer kollektiv Zuweisungen geschlossen haben, die seit vielen Tagen offen waren).
-![Explorer Image 8](/img/Network-Stats.png)
+![Explorer Bild 8](/img/Network-Stats.png)
-#### Epochs
+#### Epochen
-In the Epochs section, you can analyze on a per-epoch basis, metrics such as:
+Im Abschnitt „Epochen“ können Sie Metriken auf Epochenbasis analysieren, zum Beispiel:
-- Epoch start or end block
-- Query fees generated and indexing rewards collected during a specific epoch
-- Epoch status, which refers to the query fee collection and distribution and can have different states:
-  - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees
-  - The settling epochs are the ones in which the state channels are being settled.
This means that the Indexers are subject to slashing if the consumers open disputes against them.
-  - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
-  - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.
+- Start- oder Endblock der Epoche
+- Abfragegebühren und Indexierungsprämien, die während einer bestimmten Epoche erhoben werden
+- Epochenstatus, der sich auf die Erhebung und Verteilung der Abfragegebühren bezieht und verschiedene Zustände annehmen kann:
+  - Die aktive Epoche ist diejenige, in der die Indexierer gerade Anteile zuweisen und Abfragegebühren erheben
+  - Die Abrechnungsepochen sind diejenigen, in denen die Zustandskanäle abgewickelt werden. Das bedeutet, dass die Indexierer der Kürzung unterliegen, wenn die Verbraucher Streitigkeiten gegen sie eröffnen.
+  - Die verteilenden Epochen sind die Epochen, in denen die Zustandskanäle für die Epochen abgerechnet werden und die Indexierer ihre Rückerstattung der Abfragegebühren beantragen können.
+  - Die abgeschlossenen Epochen sind die Epochen, für die die Indexierer keine Abfragegebühren-Rabatte mehr beanspruchen können.
-![Explorer Image 9](/img/Epoch-Stats.png)
+![Explorer Bild 9](/img/Epoch-Stats.png)
-## Your User Profile
+## Ihr Benutzerprofil
-Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:
+Ihr persönliches Profil ist der Ort, an dem Sie Ihre Netzwerkaktivitäten sehen können, unabhängig von Ihrer Rolle im Netzwerk.
Ihre Krypto-Wallet dient als Ihr Benutzerprofil, und im Benutzer-Dashboard können Sie die folgenden Registerkarten sehen:
-### Profile Overview
+### Profil-Übersicht
-In this section, you can view the following:
+In diesem Abschnitt können Sie Folgendes sehen:
-- Any of your current actions you've done.
-- Your profile information, description, and website (if you added one).
+- Jede Ihrer aktuellen Aktionen, die Sie durchgeführt haben.
+- Ihre Profilinformationen, Beschreibung und Website (falls Sie eine hinzugefügt haben).
-![Explorer Image 10](/img/Profile-Overview.png)
+![Explorer Bild 10](/img/Profile-Overview.png)
-### Subgraphs Tab
+### Registerkarte "Subgraphen"
-In the Subgraphs tab, you’ll see your published subgraphs.
+Auf der Registerkarte "Subgraphen" sehen Sie Ihre veröffentlichten Subgraphen.
-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> Dies schließt keine Subgraphen ein, die mit dem CLI zu Testzwecken bereitgestellt wurden. Subgraphen werden erst angezeigt, wenn sie im dezentralen Netzwerk veröffentlicht werden.
-![Explorer Image 11](/img/Subgraphs-Overview.png)
+![Explorer Bild 11](/img/Subgraphs-Overview.png)
-### Indexing Tab
+### Registerkarte "Indizierung"
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+Auf der Registerkarte "Indizierung" finden Sie eine Tabelle mit allen aktiven und historischen Zuweisungen zu Subgraphen. Hier finden Sie auch Diagramme, in denen Sie Ihre bisherige Leistung als Indexierer sehen und analysieren können.
-This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics:
+Dieser Abschnitt enthält auch Angaben zu Ihren Netto-Indexierer-Belohnungen und Netto-Abfragegebühren.
Sie sehen die folgenden Metriken: -- Delegated Stake - the stake from Delegators that can be allocated by you but cannot be slashed -- Total Query Fees - the total fees that users have paid for queries served by you over time -- Indexer Rewards - the total amount of Indexer rewards you have received, in GRT -- Fee Cut - the % of query fee rebates that you will keep when you split with Delegators -- Rewards Cut - the % of Indexer rewards that you will keep when splitting with Delegators -- Owned - your deposited stake, which could be slashed for malicious or incorrect behavior +- Delegated Stake - der Einsatz von Delegatoren, der von Ihnen zugewiesen werden kann, aber nicht reduziert werden kann +- Gesamte Abfragegebühren - die gesamten Gebühren, die Nutzer im Laufe der Zeit für von Ihnen durchgeführte Abfragen bezahlt haben +- Indexierer Rewards - der Gesamtbetrag der Indexierer Rewards, die Sie erhalten haben, in GRT +- Gebührensenkung - der Prozentsatz der Rückerstattungen von Abfragegebühren, den Sie behalten, wenn Sie mit Delegatoren teilen +- Rewardkürzung - der Prozentsatz der Indexierer-Rewards, den Sie behalten, wenn Sie mit Delegatoren teilen +- Eigenkapital - Ihr hinterlegter Einsatz, der bei böswilligem oder falschem Verhalten gekürzt werden kann -![Explorer Image 12](/img/Indexer-Stats.png) +![Explorer Bild 12](/img/Indexer-Stats.png) -### Delegating Tab +### Registerkarte "Delegieren" -Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. +Die Delegatoren sind wichtig für The Graph Network. Sie müssen ihr Wissen nutzen, um einen Indexierer auszuwählen, der eine gesunde Rendite abwirft. -In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. 
+Auf der Registerkarte "Delegatoren" finden Sie die Details Ihrer aktiven und historischen Delegationen sowie die Metriken der Indexierer, an die Sie delegiert haben. -In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. +In der ersten Hälfte der Seite sehen Sie Ihr Delegationsdiagramm sowie das Diagramm „Nur Belohnungen“. Auf der linken Seite sehen Sie die KPIs, die Ihre aktuellen Delegationskennzahlen widerspiegeln. -The Delegator metrics you’ll see here in this tab include: +Auf dieser Registerkarte sehen Sie unter anderem die Delegator-Metriken: -- Total delegation rewards -- Total unrealized rewards -- Total realized rewards +- Delegationsprämien insgesamt +- Unrealisierte Rewards insgesamt +- Gesamte realisierte Rewards -In the second half of the page, you have the delegations table. Here you can see the Indexers that you delegated towards, as well as their details (such as rewards cuts, cooldown, etc). +In der zweiten Hälfte der Seite finden Sie die Tabelle der Delegationen. Hier sehen Sie die Indexierer, an die Sie delegiert haben, sowie deren Details (wie z. B. Belohnungskürzungen, Abklingzeit, usw.). -With the buttons on the right side of the table, you can manage your delegation - delegate more, undelegate, or withdraw your delegation after the thawing period. +Mit den Schaltflächen auf der rechten Seite der Tabelle können Sie Ihre Delegierung verwalten - mehr delegieren, die Delegierung aufheben oder Ihre Delegierung nach der Auftauzeit zurückziehen. -Keep in mind that this chart is horizontally scrollable, so if you scroll all the way to the right, you can also see the status of your delegation (delegating, undelegating, withdrawable). +Beachten Sie, dass dieses Diagramm horizontal gescrollt werden kann. 
Wenn Sie also ganz nach rechts scrollen, können Sie auch den Status Ihrer Delegation sehen (delegierend, Delegierung wird aufgehoben, abhebbar).
-![Explorer Image 13](/img/Delegation-Stats.png)
+![Explorer Bild 13](/img/Delegation-Stats.png)
-### Curating Tab
+### Registerkarte "Kuratieren"
-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+Auf der Registerkarte „Kuratierung“ finden Sie alle Subgraphen, für die Sie ein Signal geben (damit Sie Abfragegebühren erhalten). Mit der Signalisierung können Kuratoren den Indexierern zeigen, welche Subgraphen wertvoll und vertrauenswürdig sind und somit signalisieren, dass sie indiziert werden müssen.
-Within this tab, you’ll find an overview of:
+Auf dieser Registerkarte finden Sie eine Übersicht über:
-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
-- Updated at date details
+- Alle Subgraphen, die Sie kuratieren, mit Signaldetails
+- Anteilssummen pro Subgraph
+- Abfragebelohnungen pro Subgraph
+- Details zum Aktualisierungsdatum
-![Explorer Image 14](/img/Curation-Stats.png)
+![Explorer Bild 14](/img/Curation-Stats.png)
-### Your Profile Settings
+### Ihre Profileinstellungen
-Within your user profile, you’ll be able to manage your personal profile details (like setting up an ENS name). If you’re an Indexer, you have even more access to settings at your fingertips. In your user profile, you’ll be able to set up your delegation parameters and operators.
+In Ihrem Benutzerprofil können Sie Ihre persönlichen Profildaten verwalten (z. B. einen ENS-Namen einrichten). Wenn Sie ein Indexierer sind, stehen Ihnen darüber hinaus weitere Einstellungen zur Verfügung.
In Ihrem Benutzerprofil können Sie Ihre Delegationsparameter und Operatoren einrichten.
-- Operators take limited actions in the protocol on the Indexer's behalf, such as opening and closing allocations. Operators are typically other Ethereum addresses, separate from their staking wallet, with gated access to the network that Indexers can personally set
-- Delegation parameters allow you to control the distribution of GRT between you and your Delegators.
+- Operatoren führen im Namen des Indexierers begrenzte Aktionen im Protokoll durch, wie z. B. das Öffnen und Schließen von Allokationen. Operatoren sind in der Regel andere Ethereum-Adressen, die von ihrer Staking-Wallet getrennt sind und einen beschränkten Zugang zum Netzwerk haben, den die Indexierer persönlich festlegen können
+- Mit den Delegationsparametern können Sie die Verteilung der GRT zwischen Ihnen und Ihren Delegatoren steuern.
-![Explorer Image 15](/img/Profile-Settings.png)
+![Explorer Bild 15](/img/Profile-Settings.png)
-As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
+Als Ihr offizielles Portal in die Welt der dezentralen Daten ermöglicht Ihnen der Graph Explorer eine Vielzahl von Aktionen, unabhängig von Ihrer Rolle im Netzwerk. Sie können zu Ihren Profileinstellungen gelangen, indem Sie das Dropdown-Menü neben Ihrer Adresse öffnen und dann auf die Schaltfläche Einstellungen klicken.
![Wallet details](/img/Wallet-Details.png)
## Zusätzliche Ressourcen
-### Video Guide
+### Video-Leitfaden
-For a general overview of Graph Explorer, check out the video below:
+Einen allgemeinen Überblick über Graph Explorer finden Sie in dem folgenden Video:
diff --git a/website/src/pages/de/subgraphs/guides/arweave.mdx b/website/src/pages/de/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..2e547c7b6813
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/arweave.mdx
@@ -0,0 +1,238 @@
+---
+title: Erstellen von Subgraphen auf Arweave
+---
+
+> Die Unterstützung von Arweave in Graph Node und Subgraph Studio befindet sich in der Beta-Phase: Bitte kontaktieren Sie uns auf [Discord](https://discord.gg/graphprotocol), wenn Sie Fragen zur Erstellung von Arweave-Subgraphen haben!
+
+In dieser Anleitung erfahren Sie, wie Sie Subgraphen erstellen und einsetzen, um die Arweave-Blockchain zu indizieren.
+
+## Was ist Arweave?
+
+Das Arweave-Protokoll ermöglicht es Entwicklern, Daten dauerhaft zu speichern. Das ist der Hauptunterschied zwischen Arweave und IPFS: IPFS fehlt die Eigenschaft der Dauerhaftigkeit, und auf Arweave gespeicherte Dateien können nicht geändert oder gelöscht werden.
+
+Arweave hat bereits zahlreiche Bibliotheken für die Integration des Protokolls in eine Reihe verschiedener Programmiersprachen erstellt. Für weitere Informationen können Sie nachsehen:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## Was sind Arweave-Subgraphen?
+
+The Graph ermöglicht es Ihnen, benutzerdefinierte offene APIs, sogenannte „Subgraphen“, zu erstellen. Subgraphen werden verwendet, um Indexierern (Serverbetreibern) mitzuteilen, welche Daten auf einer Blockchain indexiert und auf ihren Servern gespeichert werden sollen, damit Sie sie jederzeit mit [GraphQL](https://graphql.org/) abfragen können.
+
+Der [Graph Node](https://github.com/graphprotocol/graph-node) ist nun in der Lage, Daten auf dem Arweave-Protokoll zu indizieren. Die aktuelle Integration indiziert nur Arweave als Blockchain (Blöcke und Transaktionen), sie indiziert noch nicht die gespeicherten Dateien.
+
+## Aufbau eines Arweave-Subgraphen
+
+Um Arweave-Subgraphen erstellen und einsetzen zu können, benötigen Sie zwei Pakete:
+
+1. `@graphprotocol/graph-cli` ab Version 0.30.2 - Dies ist ein Kommandozeilen-Tool zum Erstellen und Bereitstellen von Subgraphen. [Klicken Sie hier](https://www.npmjs.com/package/@graphprotocol/graph-cli), um es mit `npm` herunterzuladen.
+2. `@graphprotocol/graph-ts` ab Version 0.27.0 - Dies ist eine Bibliothek von Subgraphen-spezifischen Typen. [Klicken Sie hier](https://www.npmjs.com/package/@graphprotocol/graph-ts) zum Herunterladen mit `npm`.
+
+## Komponenten des Subgraphen
+
+Ein Subgraph besteht aus drei Komponenten:
+
+### 1. Manifest - `subgraph.yaml`
+
+Definiert die Datenquellen, die von Interesse sind, und wie sie verarbeitet werden sollen. Arweave ist eine neue Art von Datenquelle.
+
+### 2. Schema - `schema.graphql`
+
+Hier legen Sie fest, welche Daten Sie nach der Indizierung Ihres Subgraphen mit GraphQL abfragen können möchten. Dies ähnelt einem Modell für eine API, wobei das Modell die Struktur eines Request-Bodys definiert.
+
+Die Anforderungen für Arweave-Subgraphen werden in der [bestehenden Dokumentation](/developing/creating-a-subgraph/#the-graphql-schema) behandelt.
+
+### 3. AssemblyScript-Mappings - `mapping.ts`
+
+Dies ist die Logik, die bestimmt, wie Daten abgerufen und gespeichert werden sollen, wenn jemand mit den Datenquellen interagiert, die Sie abhören. Die Daten werden übersetzt und auf der Grundlage des von Ihnen angegebenen Schemas gespeichert.
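+
+Zur Veranschaulichung eine Skizze, wie ein minimales `schema.graphql` für eine Block-Entität aussehen könnte (die Feldnamen sind hier Beispielannahmen, kein offizielles Schema):
+
+```graphql
+type Block @entity {
+  id: ID!
+  height: BigInt!
+  timestamp: BigInt!
+  indepHash: Bytes!
+}
+```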
+
+Bei der Entwicklung von Subgraphen gibt es zwei wichtige Befehle:
+
+```
+$ graph codegen # erzeugt Typen aus der im Manifest angegebenen Schemadatei
+$ graph build # generiert Web Assembly aus den AssemblyScript-Dateien und bereitet alle Subgraph-Dateien in einem /build-Ordner vor
+```
+
+## Subgraph-Manifest-Definition
+
+Das Subgraph-Manifest `subgraph.yaml` identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest für einen Arweave-Subgraphen:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # Link zur Schemadatei
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph unterstützt nur das Arweave-Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # Der öffentliche Schlüssel einer Arweave-Brieftasche
+      startBlock: 0 # Setzen Sie dies auf 0, um die Indizierung von der Kettenentstehung zu starten
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # Verweis auf die Datei mit den Assemblyscript-Mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # der Funktionsname in der Mapping-Datei
+      transactionHandlers:
+        - handler: handleTx # der Funktionsname in der Mapping-Datei
+```
+
+- Mit Arweave-Subgraphen wird eine neue Art von Datenquelle eingeführt (`arweave`)
+- Das Netzwerk sollte einem Netzwerk auf dem hostenden Graph Node entsprechen. In Subgraph Studio wird das Arweave-Mainnet als `arweave-mainnet` bezeichnet
+- Arweave-Datenquellen führen ein optionales Feld `source.owner` ein, das den öffentlichen Schlüssel eines Arweave-Wallets darstellt
+
+Arweave-Datenquellen unterstützen zwei Arten von Handlern:
+
+- `blockHandlers` - Wird bei jedem neuen Arweave-Block ausgeführt. Es wird kein `source.owner` benötigt.
+- `transactionHandlers` - Wird bei jeder Transaktion ausgeführt, bei der der `source.owner` der Eigentümer der Datenquelle ist. Derzeit ist für `transactionHandlers` ein Besitzer erforderlich; wenn Benutzer alle Transaktionen verarbeiten wollen, sollten sie `""` als `source.owner` angeben.
+
+> Als `source.owner` kann die Adresse des Eigentümers oder sein öffentlicher Schlüssel angegeben werden.
+>
+> Transaktionen sind die Bausteine des Arweave-Permaweb; sie sind Objekte, die von den Endbenutzern erstellt werden.
+>
+> Hinweis: [Irys (früher Bundlr)](https://irys.xyz/)-Transaktionen werden noch nicht unterstützt.
+
+## Schema-Definition
+
+Die Schemadefinition beschreibt die Struktur der entstehenden Subgraph-Datenbank und die Beziehungen zwischen den Entitäten. Dies ist unabhängig von der ursprünglichen Datenquelle. Weitere Details zur Subgraph-Schemadefinition finden Sie [hier](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript-Mappings
+
+Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben.
+
+Die Arweave-Indizierung führt Arweave-spezifische Datentypen in die [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) ein.
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block-Handler erhalten einen `Block`, während Transaktions-Handler eine `Transaction` erhalten.
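Eine mögliche Handler-Skizze in AssemblyScript, unter der Annahme, dass das Schema eine `Block`-Entität mit einem String-Feld `height` definiert. Entitäts- und Feldnamen sowie der Importpfad der generierten Schema-Klassen sind Annahmen dieses Beispiels:

```tsx
import { arweave } from '@graphprotocol/graph-ts'
import { Block } from '../generated/schema'

// Skizze: legt für jeden Arweave-Block eine Entität an (blockHandlers im Manifest)
export function handleBlock(block: arweave.Block): void {
  let entity = new Block(block.indepHash.toHexString())
  entity.height = block.height.toString() // u64 hier als String gespeichert
  entity.save()
}
```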
+
+Das Schreiben der Mappings eines Arweave-Subgraphen ist dem Schreiben der Mappings eines Ethereum-Subgraphen sehr ähnlich. Für weitere Informationen klicken Sie [hier](/developing/creating-a-subgraph/#writing-mappings).
+
+## Einsatz von Arweave-Subgraphen in Subgraph Studio
+
+Sobald Ihr Subgraph in Ihrem Subgraph-Studio-Dashboard erstellt wurde, können Sie ihn mit dem CLI-Befehl `graph deploy` bereitstellen.
+
+```bash
+graph deploy --access-token <ACCESS_TOKEN>
+```
+
+## Abfrage eines Arweave-Subgraphen
+
+Der GraphQL-Endpunkt für Arweave-Subgraphen wird durch die Schemadefinition und die vorhandene API-Schnittstelle bestimmt. Bitte besuchen Sie die [GraphQL-API-Dokumentation](/subgraphs/querying/graphql-api/) für weitere Informationen.
+
+## Beispiele von Subgraphen
+
+Hier ist ein Beispiel für einen Subgraphen als Referenz:
+
+- [Beispiel-Subgraph für Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Kann ein Subgraph Arweave und andere Ketten indizieren?
+
+Nein, ein Subgraph kann nur Datenquellen von einer Kette oder einem Netzwerk unterstützen.
+
+### Kann ich die gespeicherten Dateien auf Arweave indizieren?
+
+Derzeit indiziert The Graph Arweave nur als Blockchain (seine Blöcke und Transaktionen).
+
+### Kann ich Bundlr-„Bundles“ in meinem Subgraph identifizieren?
+
+Dies wird derzeit nicht unterstützt.
+
+### Wie kann ich Transaktionen nach einem bestimmten Konto filtern?
+
+Der `source.owner` kann der öffentliche Schlüssel oder die Kontoadresse des Benutzers sein.
+
+### Was ist das aktuelle Kodierungsformat?
+
+Daten werden im Allgemeinen als Bytes an die Mappings übergeben, die, wenn sie direkt gespeichert werden, im Subgraphen in einem `hex`-Format zurückgegeben werden (z. B. Block- und Transaktions-Hashes).
Möglicherweise möchten Sie in Ihren Mappings in ein `base64`- oder URL-sicheres `base64url`-Format konvertieren, um dem zu entsprechen, was in Block-Explorern wie dem [Arweave Explorer](https://viewblock.io/arweave/) angezeigt wird.
+
+Die folgende Hilfsfunktion `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` kann verwendet werden und wird zu `graph-ts` hinzugefügt:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ?
base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) { // base64url lässt das Padding weg
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..90d94eed5242
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Überblick
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Voraussetzungen
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have been added, run:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2. Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+oder
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3.
Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Schlussfolgerung + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/de/subgraphs/guides/enums.mdx b/website/src/pages/de/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..c01b20cac51b --- /dev/null +++ b/website/src/pages/de/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: NFT-Marktplätze mit Enums kategorisieren +--- + +Verwenden Sie Enums, um Ihren Code sauberer und weniger fehleranfällig zu machen. Hier finden Sie ein vollständiges Beispiel für die Verwendung von Enums auf NFT-Marktplätzen. + +## Was sind Enums? + +Enums oder Aufzählungstypen sind ein spezieller Datentyp, mit dem Sie eine Reihe von bestimmten, zulässigen Werten definieren können. + +### Beispiel für Enums in Ihrem Schema + +Wenn Sie einen Subgraphen erstellen, um den Besitzverlauf von Token auf einem Marktplatz zu verfolgen, kann jeder Token verschiedene Besitzverhältnisse durchlaufen, z. B. `OriginalOwner`, `SecondOwner` und `ThirdOwner`. 
Durch die Verwendung von Enums können Sie diese spezifischen Besitzverhältnisse definieren und sicherstellen, dass nur vordefinierte Werte zugewiesen werden. + +Sie können Enums in Ihrem Schema definieren, und sobald sie definiert sind, können Sie die String-Darstellung der Enum-Werte verwenden, um ein Enum-Feld auf einer Entität zu setzen. + +So könnte eine Enum-Definition in Ihrem Schema aussehen, basierend auf dem obigen Beispiel: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Das heißt, wenn Sie den Typ `TokenStatus` in Ihrem Schema verwenden, erwarten Sie, dass er genau einen der vordefinierten Werte annimmt: `OriginalOwner`, `SecondOwner` oder `ThirdOwner`, um Konsistenz und Gültigkeit zu gewährleisten. + +Um mehr über Enums zu erfahren, lesen Sie [Erstellen eines Subgraphen](/developing/creating-a-subgraph/#enums) und [GraphQL-Dokumentation](https://graphql.org/learn/schema/#enumeration-types). + +## Vorteile der Verwendung von Enums + +- **Klarheit:** Enums bieten aussagekräftige Namen für Werte, wodurch die Daten leichter zu verstehen sind. +- **Validierung:** Enums erzwingen strenge Wertedefinitionen, die ungültige Dateneinträge verhindern. +- **Pflegeleichtigkeit:** Wenn Sie Kategorien ändern oder neue hinzufügen müssen, können Sie dies mit Hilfe von Enums gezielt tun. + +### Ohne Enums + +Wenn Sie sich dafür entscheiden, den Typ als String zu definieren, anstatt eine Enum zu verwenden, könnte Ihr Code wie folgt aussehen: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Eigentümer des Tokens + tokenStatus: String! # String-Feld zur Verfolgung des Token-Status + timestamp: BigInt! +} +``` + +In diesem Schema ist `TokenStatus` eine einfache Zeichenfolge ohne spezifische, zulässige Werte. + +#### Warum ist das ein Problem? + +- Es gibt keine Beschränkung der `TokenStatus`-Werte, so dass jede beliebige Zeichenfolge versehentlich zugewiesen werden kann. 
Das macht es schwer sicherzustellen, dass nur gültige Status wie `OriginalOwner`, `SecondOwner` oder `ThirdOwner` gesetzt werden.
+- Es ist leicht, Tippfehler zu machen, wie z. B. `Orgnalowner` anstelle von `OriginalOwner`, was die Daten und mögliche Abfragen unzuverlässig macht.
+
+### Mit Enums
+
+Anstelle der Zuweisung von Freiform-Strings können Sie ein Enum für `TokenStatus` mit spezifischen Werten definieren: `OriginalOwner`, `SecondOwner` oder `ThirdOwner`. Die Verwendung einer Aufzählung stellt sicher, dass nur erlaubte Werte verwendet werden.
+
+Enums bieten Typsicherheit, minimieren das Risiko von Tippfehlern und gewährleisten konsistente und zuverlässige Ergebnisse.
+
+## Definieren von Enums für NFT-Marktplätze
+
+> Hinweis: Die folgende Anleitung verwendet den CryptoCoven NFT Smart Contract.
+
+Um Enums für die verschiedenen Marktplätze, auf denen NFTs gehandelt werden, zu definieren, verwenden Sie Folgendes in Ihrem Subgraph-Schema:
+
+```gql
+# Enum für Marktplätze, mit denen der CryptoCoven-Vertrag interagiert (wahrscheinlich ein Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Repräsentiert, wenn ein CryptoCoven NFT auf dem OpenSeaV1-Marktplatz gehandelt wird
+  OpenSeaV2 # Repräsentiert, wenn ein CryptoCoven NFT auf dem OpenSeaV2-Marktplatz gehandelt wird
+  SeaPort # Repräsentiert, wenn ein CryptoCoven NFT auf dem SeaPort-Marktplatz gehandelt wird
+  LooksRare # Repräsentiert, wenn ein CryptoCoven NFT auf dem LooksRare-Marktplatz gehandelt wird
+  # ...und andere Marktplätze
+}
+```
+
+## Verwendung von Enums für NFT-Marktplätze
+
+Einmal definiert, können Enums in Ihrem gesamten Subgraphen verwendet werden, um Transaktionen oder Ereignisse zu kategorisieren.
+
+Bei der Protokollierung von NFT-Verkäufen können Sie beispielsweise mit Hilfe des Enums den Marktplatz angeben, der an dem Geschäft beteiligt ist.
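Eine hypothetische Skizze, wie der Enum-Wert beim Protokollieren eines Verkaufs als String gesetzt werden könnte. Der Ereignistyp `OrdersMatched`, die Entität `Sale` und der Handler-Name sind Annahmen dieses Beispiels:

```ts
// Hypothetischer Handler: ordnet einem Verkauf den Marktplatz-Enum-Wert zu
export function handleSale(event: OrdersMatched): void {
  let sale = new Sale(event.transaction.hash.toHexString())
  // Nur im Schema vordefinierte Enum-Werte sind beim Speichern gültig:
  sale.marketplace = 'OpenSeaV1'
  sale.save()
}
```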
+
+### Implementieren einer Funktion für NFT-Marktplätze
+
+So können Sie eine Funktion implementieren, die den Namen des Marktplatzes als String aus der Aufzählung abruft:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Verwendung von if-else-Anweisungen, um den Enum-Wert auf eine Zeichenkette abzubilden
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // Wenn der Marktplatz OpenSeaV1 ist, wird seine String-Repräsentation zurückgegeben
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // Wenn der Marktplatz SeaPort ist, wird seine String-Repräsentation zurückgegeben
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // Wenn der Marktplatz LooksRare ist, wird seine String-Repräsentation zurückgegeben
+    // ... und andere Marktplätze
+  } else {
+    return 'Unknown' // Fallback, damit die Funktion auf jedem Pfad einen String zurückgibt
+  }
+}
+```
+
+## Best Practices für die Verwendung von Enums
+
+- **Konsistente Benennung:** Verwenden Sie klare, beschreibende Namen für Enum-Werte, um die Lesbarkeit zu verbessern.
+- **Zentrale Verwaltung:** Halten Sie Enums in einer einzigen Datei für Konsistenz. Dies erleichtert die Aktualisierung von Enums und stellt sicher, dass sie die einzige Quelle der Wahrheit sind.
+- **Dokumentation:** Fügen Sie Kommentare zu Enums hinzu, um deren Zweck und Verwendung zu verdeutlichen.
+
+## Verwendung von Enums in Abfragen
+
+Enums in Abfragen helfen Ihnen, die Datenqualität zu verbessern und Ihre Ergebnisse leichter zu interpretieren. Sie fungieren als Filter und Antwortelemente, sorgen für Konsistenz und reduzieren Fehler bei Marktplatzwerten.
+
+**Besonderheiten**
+
+- **Filtern mit Enums:** Enums bieten klare Filter, mit denen Sie bestimmte Marktplätze ein- oder ausschließen können.
+- **Enums in Antworten:** Enums garantieren, dass nur anerkannte Marktplatznamen zurückgegeben werden, wodurch die Ergebnisse standardisiert und genau sind.
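Der Nutzen der strikten Werte lässt sich in einfachem TypeScript nachvollziehen. Eine Skizze; die Hilfsfunktion `isValidMarketplace` ist hypothetisch und kein Teil von graph-ts:

```typescript
// Skizze: String-Enum mit den erlaubten Marktplatznamen
enum Marketplace {
  OpenSeaV1 = 'OpenSeaV1',
  OpenSeaV2 = 'OpenSeaV2',
  SeaPort = 'SeaPort',
  LooksRare = 'LooksRare',
}

// Prüft, ob ein Freiform-String einem der erlaubten Enum-Werte entspricht
function isValidMarketplace(value: string): value is Marketplace {
  return (
    value === Marketplace.OpenSeaV1 ||
    value === Marketplace.OpenSeaV2 ||
    value === Marketplace.SeaPort ||
    value === Marketplace.LooksRare
  )
}

console.log(isValidMarketplace('OpenSeaV1')) // true
console.log(isValidMarketplace('Orgnalowner')) // Tippfehler: false
```

So fällt ein Tippfehler wie `Orgnalowner` sofort auf, statt unbemerkt in den Daten zu landen.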
+ +### Beispiele für Abfragen + +#### Abfrage 1: Konto mit den höchsten NFT-Marktplatzinteraktionen + +Diese Abfrage führt Folgendes aus: + +- Es findet das Konto mit den meisten eindeutigen NFT-Marktplatzinteraktionen, was sich hervorragend für die Analyse von marktplatzübergreifenden Aktivitäten eignet. +- Das Feld marketplaces verwendet das Marktplatz-Enum, um konsistente und validierte Marktplatzwerte in der Antwort zu gewährleisten. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # Dieses Feld gibt den Enum-Wert für den Marktplatz zurück + } + } +} +``` + +#### Rückgabe + +Diese Antwort enthält Kontodetails und eine Liste eindeutiger Marktplatz-Interaktionen mit Enum-Werten für standardisierte Klarheit: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Abfrage 2: Aktivste Marktplätze für CryptoCoven-Transaktionen + +Diese Abfrage führt Folgendes aus: + +- Sie identifiziert den Marktplatz mit dem höchsten Transaktionsvolumen von CryptoCoven. +- Sie verwendet das Marktplatz-Enum, um sicherzustellen, dass nur gültige Marktplatztypen in der Antwort erscheinen, was die Zuverlässigkeit und Konsistenz Ihrer Daten erhöht. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Ergebnis 2 + +Die erwartete Antwort enthält den Marktplatz und die entsprechende Anzahl der Transaktionen, wobei das Enum zur Angabe des Marktplatztyps verwendet wird: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Abfrage 3: Marktplatz-Interaktionen mit hohen Transaktionszahlen + +Diese Abfrage führt Folgendes aus: + +- Sie ermittelt die vier wichtigsten Marktplätze mit mehr als 100 Transaktionen, wobei „unbekannte“ Marktplätze ausgeschlossen sind. +- Sie verwendet Enums als Filter, um sicherzustellen, dass nur gültige Marktplatztypen einbezogen werden, was die Genauigkeit erhöht. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Ergebnis 3 + +Die erwartete Ausgabe umfasst die Marktplätze, die die Kriterien erfüllen und jeweils durch einen Enumwert dargestellt werden: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Zusätzliche Ressourcen + +Weitere Informationen finden Sie in der [Repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) dieses Leitfadens. 
diff --git a/website/src/pages/de/subgraphs/guides/grafting.mdx b/website/src/pages/de/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..a9ca6f6eda54
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Ersetzen Sie einen Vertrag und bewahren Sie seine Historie mit Grafting
+---
+
+In dieser Anleitung erfahren Sie, wie Sie neue Subgraphen durch Aufpfropfen bestehender Subgraphen erstellen und einsetzen können.
+
+## Was ist Grafting?
+
+Beim Grafting werden die Daten eines bestehenden Subgraphen wiederverwendet und erst ab einem späteren Block indiziert. Dies ist während der Entwicklung nützlich, um einfache Fehler in den Mappings schnell zu beheben oder um einen bestehenden Subgraphen vorübergehend wieder zum Laufen zu bringen, nachdem er ausgefallen ist. Es kann auch verwendet werden, wenn ein Feature zu einem Subgraphen hinzugefügt wird, dessen Indizierung von Grund auf lange dauert.
+
+Der aufgepfropfte Subgraph kann ein GraphQL-Schema verwenden, das nicht mit dem des Basis-Subgraphen identisch, sondern lediglich mit ihm kompatibel ist. Es muss ein eigenständig gültiges Subgraph-Schema sein, darf aber in folgenden Punkten vom Schema des Basis-Subgraphen abweichen:
+
+- Es fügt Entitätstypen hinzu oder entfernt sie
+- Es entfernt Attribute von Entitätstypen
+- Es fügt Entitätstypen nullfähige Attribute hinzu
+- Es wandelt Nicht-Nullable-Attribute in Nullable-Attribute um
+- Es fügt Aufzählungen Werte hinzu
+- Es fügt Interfaces hinzu oder entfernt sie
+- Es ändert, für welche Entitätstypen ein Interface implementiert ist
+
+Weitere Informationen finden Sie unter:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In diesem Tutorial werden wir einen grundlegenden Anwendungsfall behandeln. Wir werden einen bestehenden Vertrag durch einen identischen Vertrag (mit einer neuen Adresse, aber demselben Code) ersetzen.
Anschließend wird der bestehende Subgraph auf den „Basis“-Subgraphen verpflanzt, der den neuen Vertrag verfolgt.
+
+## Wichtiger Hinweis zum Grafting beim Upgrade auf das Netzwerk
+
+> **Vorsicht**: Es wird empfohlen, das Grafting nicht für Subgraphen zu verwenden, die in The Graph Network veröffentlicht wurden.
+
+### Warum ist das wichtig?
+
+Das Grafting ist eine leistungsstarke Funktion, mit der Sie einen Subgraphen auf einen anderen „graften“ können, wodurch historische Daten aus dem bestehenden Subgraphen in eine neue Version übertragen werden. Es ist nicht möglich, einen Subgraphen aus The Graph Network zurück in Subgraph Studio zu übertragen.
+
+### Bewährte Praktiken
+
+**Erstmalige Migration**: Wenn Sie Ihren Subgraphen zum ersten Mal im dezentralen Netzwerk einsetzen, tun Sie dies ohne Grafting. Stellen Sie sicher, dass der Subgraph stabil ist und wie erwartet funktioniert.
+
+**Nachfolgende Updates**: Sobald Ihr Subgraph live und stabil im dezentralen Netzwerk ist, können Sie Grafting für zukünftige Versionen verwenden, um den Übergang reibungsloser zu gestalten und historische Daten zu erhalten.
+
+Wenn Sie sich an diese Richtlinien halten, minimieren Sie die Risiken und sorgen für einen reibungsloseren Migrationsprozess.
+
+## Erstellen eines vorhandenen Subgraphen
+
+Die Erstellung von Subgraphen ist ein wesentlicher Bestandteil von The Graph und wird [hier](/subgraphs/quick-start/) näher beschrieben. Um den bestehenden Subgraphen, der in diesem Tutorial verwendet wird, zu bauen und einzusetzen, wird das folgende Repo zur Verfügung gestellt:
+
+- [Subgraph-Beispiel-Repo](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Hinweis: Der im Subgraphen verwendete Vertrag wurde dem folgenden [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit) entnommen.
+
+## Subgraph-Manifest-Definition
+
+Das Subgraph-Manifest `subgraph.yaml` identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest, das Sie verwenden werden:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- Die Datenquelle `Lock` verwendet die ABI und die Vertragsadresse, die wir beim Kompilieren und Bereitstellen des Vertrags erhalten
+- Das Netzwerk sollte einem indizierten Netzwerk entsprechen, das abgefragt wird. Da wir mit dem Sepolia-Testnetz arbeiten, lautet das Netzwerk `sepolia`.
+- Der Abschnitt `mapping` definiert die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. In diesem Fall warten wir auf das Ereignis `Withdrawal` und rufen die Funktion `handleWithdrawal` auf, wenn es ausgelöst wird.
+
+## Grafting-Manifest-Definition
+
+Beim Grafting müssen dem ursprünglichen Subgraph-Manifest zwei neue Elemente hinzugefügt werden:
+
+```yaml
+---
+features:
+  - grafting # Name des Features
+graft:
+  base: Qm... # Subgraph-ID des Basis-Subgraphen
+  block: 5956000 # Blocknummer
+```
+
+- `features:` ist eine Liste aller verwendeten [Funktionsnamen](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` ist eine Abbildung des `base`-Subgraphen und des Blocks, auf den gegraft werden soll. Der `block` ist die Blocknummer, ab der die Indizierung beginnen soll.
The Graph kopiert die Daten des Basis-Subgraphen bis einschließlich des angegebenen Blocks und fährt dann mit der Indizierung des neuen Subgraphen von diesem Block an fort.
+
+Die `base`- und `block`-Werte können durch das Bereitstellen von zwei Subgraphen ermittelt werden: einen für die Basisindizierung und einen mit Grafting
+
+## Bereitstellen des Basis-Subgraphen
+
+1. Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und erstellen Sie einen Subgraphen im Sepolia-Testnetz mit dem Namen `graft-example`
+2. Befolgen Sie die Anweisungen im Abschnitt `AUTH & DEPLOY` auf Ihrer Subgraph-Seite im Ordner `graft-example` aus dem Repo
+3. Wenn Sie fertig sind, überprüfen Sie, ob der Subgraph richtig indiziert wird, indem Sie den folgenden Befehl im Graph Playground ausführen:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Er gibt in etwa Folgendes zurück:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Sobald Sie sich vergewissert haben, dass die Indizierung des Subgraphen ordnungsgemäß funktioniert, können Sie den Subgraphen mit Grafting schnell aktualisieren.
+
+## Bereitstellen des Grafting-Subgraphen
+
+Die `subgraph.yaml` des Graft-Ersatzes wird eine neue Vertragsadresse haben. Dies könnte passieren, wenn Sie Ihre DApp aktualisieren, einen Vertrag erneut bereitstellen usw.
+
+1. Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und erstellen Sie einen Subgraphen im Sepolia-Testnetz mit dem Namen `graft-replacement`
+2. Erstellen Sie ein neues Manifest. Die `subgraph.yaml` für `graft-replacement` enthält eine andere Vertragsadresse und neue Informationen darüber, wie sie gegraft werden soll.
Dies sind der `block` des [zuletzt emittierten Ereignisses](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452), um das sich der alte Vertrag kümmert, und die `base` des alten Subgraphen. Die `base`-Subgraph-ID ist die `Deployment ID` Ihres ursprünglichen `graft-example`-Subgraphen. Sie können diese in Subgraph Studio finden.
+3. Folgen Sie den Anweisungen im Abschnitt `AUTH & DEPLOY` auf Ihrer Subgraph-Seite im Ordner `graft-replacement` aus dem Repo
+4. Wenn Sie fertig sind, überprüfen Sie, ob der Subgraph richtig indiziert wird, indem Sie den folgenden Befehl im Graph Playground ausführen:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Es sollte Folgendes zurückgeben:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
```
+
+Sie können sehen, dass der Subgraph `graft-replacement` ältere Daten von `graft-example` und neuere Daten von der neuen Vertragsadresse indiziert. Der ursprüngliche Vertrag hat zwei `Withdrawal`-Ereignisse ausgelöst, [Ereignis 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) und [Ereignis 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Der neue Vertrag hat ein `Withdrawal`-Ereignis ausgelöst, nämlich [Ereignis 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
Die beiden zuvor indizierten Transaktionen (Ereignis 1 und 2) und die neue Transaktion (Ereignis 3) wurden im Subgraphen `graft-replacement` zusammengefasst.
+
+Herzlichen Glückwunsch! Sie haben erfolgreich einen Subgraphen auf einen anderen Subgraphen gegraft.
+
+## Zusätzliche Ressourcen
+
+Wenn Sie mehr Erfahrung mit dem Grafting sammeln möchten, finden Sie hier einige Beispiele für beliebte Verträge:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+Um ein noch besserer Graph-Experte zu werden, sollten Sie sich mit anderen Methoden zur Handhabung von Änderungen in den zugrunde liegenden Datenquellen vertraut machen. Alternativen wie [Datenquellenvorlagen](/developing/creating-a-subgraph/#data-source-templates) können ähnliche Ergebnisse erzielen.
+
+> Hinweis: Vieles in diesem Artikel wurde aus dem zuvor veröffentlichten [Arweave-Artikel](/subgraphs/cookbook/arweave/) übernommen.
diff --git a/website/src/pages/de/subgraphs/guides/near.mdx b/website/src/pages/de/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..3bb7e5af4796
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Aufbau von Subgraphen auf NEAR
+---
+
+Diese Anleitung ist eine Einführung in die Erstellung von Subgraphen, die Smart Contracts auf der [NEAR-Blockchain](https://docs.near.org/) indizieren.
+
+## Was ist NEAR?
+
+[NEAR](https://near.org/) ist eine Smart-Contract-Plattform zur Erstellung dezentraler Anwendungen. Besuchen Sie die [offizielle Dokumentation](https://docs.near.org/concepts/basics/protocol) für weitere Informationen.
+
+## Was sind NEAR-Subgraphen?
The Graph gibt Entwicklern Werkzeuge an die Hand, um Blockchain-Ereignisse zu verarbeiten und die daraus resultierenden Daten über eine GraphQL-API, die individuell als Subgraph bezeichnet wird, leicht verfügbar zu machen. Der [Graph Node](https://github.com/graphprotocol/graph-node) ist nun in der Lage, NEAR-Ereignisse zu verarbeiten, was bedeutet, dass NEAR-Entwickler nun Subgraphen erstellen können, um ihre Smart Contracts zu indizieren.

Subgraphen sind ereignisbasiert: Sie warten auf Onchain-Ereignisse und verarbeiten diese anschließend. Derzeit werden zwei Arten von Handlern für NEAR-Subgraphen unterstützt:

- Block-Handler: werden bei jedem neuen Block ausgeführt
- Quittungs-Handler (Receipt-Handler): werden jedes Mal ausgeführt, wenn eine Nachricht auf einem bestimmten Konto ausgeführt wird

[Aus der NEAR-Dokumentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):

> Eine Quittung ist das einzige handlungsfähige Objekt im System. Wenn wir auf der NEAR-Plattform von der „Verarbeitung einer Transaktion“ sprechen, bedeutet dies letztendlich, dass an einem bestimmten Punkt „Quittungen angewendet werden“.

## Aufbau eines NEAR-Subgraphen

`@graphprotocol/graph-cli` ist ein Kommandozeilen-Werkzeug zum Erstellen und Bereitstellen von Subgraphen.

`@graphprotocol/graph-ts` ist eine Bibliothek mit subgraphspezifischen Typen.

Die NEAR-Subgraph-Entwicklung erfordert `graph-cli` ab Version `0.23.0` und `graph-ts` ab Version `0.23.0`.

> Der Aufbau eines NEAR-Subgraphen ist dem Aufbau eines Subgraphen, der Ethereum indiziert, sehr ähnlich.

Bei der Definition von Subgraphen gibt es drei Aspekte:

**subgraph.yaml:** das Subgraph-Manifest, das die interessierenden Datenquellen und deren Verarbeitung definiert. NEAR ist eine neue `kind` (Art) von Datenquelle.
+ +**schema.graphql:** eine Schemadatei, die definiert, welche Daten für Ihren Subgraphen gespeichert werden und wie sie über GraphQL abgefragt werden können. Die Anforderungen für NEAR-Subgraphen werden in [der bestehenden Dokumentation](/developing/creating-a-subgraph/#the-graphql-schema) behandelt. + +**AssemblyScript-Mappings:** [AssemblyScript-Code](/subgraphs/developing/creating/graph-ts/api/), der die Ereignisdaten in die in Ihrem Schema definierten Entitäten übersetzt. Die NEAR-Unterstützung führt NEAR-spezifische Datentypen und neue JSON-Parsing-Funktionen ein. + +Bei der Entwicklung von Subgraphen gibt es zwei wichtige Befehle: + +```bash +$ graph codegen # erzeugt Typen aus der im Manifest angegebenen Schemadatei +$ graph build # generiert Web Assembly aus den AssemblyScript-Dateien und bereitet alle Subgraph-Dateien in einem /build-Ordner vor +``` + +### Subgraph-Manifest-Definition + +Das Subgraph-Manifest (`subgraph.yaml`) identifiziert die Datenquellen für den Subgraphen, die Auslöser von Interesse und die Funktionen, die als Reaktion auf diese Auslöser ausgeführt werden sollen. Im Folgenden finden Sie ein Beispiel für ein Subgraph-Manifest für einen NEAR-Subgraphen: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # Verweis auf die Schemadatei +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # Diese Datenquelle wird dieses Konto überwachen + startBlock: 10662188 # Erforderlich für NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # der Funktionsname in der Mapping-Datei + receiptHandlers: + - handler: handleReceipt # der Funktionsname in der Mapping-Datei + file: ./src/mapping.ts # Verweis auf die Datei mit den Assemblyscript-Mappings +``` + +- NEAR Subgraphen führen eine neue `kind` (Art) von Datenquelle ein (`near`) +- Das `network` sollte einem Netz auf dem hostenden Graph Node entsprechen. 
In Subgraph Studio ist das Mainnet von NEAR `near-mainnet` und das Testnetz von NEAR `near-testnet`.
- NEAR-Datenquellen führen ein optionales Feld `source.account` ein, eine von Menschen lesbare ID, die einem [NEAR-Konto](https://docs.near.org/concepts/protocol/account-model) entspricht. Dies kann ein Konto oder ein Unterkonto sein.
- NEAR-Datenquellen führen ein alternatives optionales Feld `source.accounts` ein, das optionale Suffixe und Präfixe enthält. Es muss mindestens ein Präfix oder ein Suffix angegeben werden; sie treffen auf jedes Konto zu, das mit einem der aufgeführten Werte beginnt bzw. endet. Das folgende Beispiel würde passen: `[app|good].*[morning.near|morning.testnet]`. Wenn nur eine Liste von Präfixen oder Suffixen erforderlich ist, kann das andere Feld weggelassen werden.

```yaml
accounts:
  prefixes:
    - app
    - good
  suffixes:
    - morning.near
    - morning.testnet
```

NEAR-Datenquellen unterstützen zwei Arten von Handlern:

- `blockHandlers`: werden bei jedem neuen NEAR-Block ausgeführt. Es ist kein `source.account` erforderlich.
- `receiptHandlers`: werden bei jeder Quittung ausgeführt, bei der das `source.account` der Datenquelle der Empfänger ist. Beachten Sie, dass nur exakte Übereinstimmungen verarbeitet werden ([Unterkonten](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) müssen als unabhängige Datenquellen hinzugefügt werden).

### Schema-Definition

Die Schemadefinition beschreibt die Struktur der entstehenden Subgraph-Datenbank und die Beziehungen zwischen den Entitäten. Dies ist unabhängig von der ursprünglichen Datenquelle. Weitere Details zur Subgraph-Schemadefinition finden Sie [hier](/developing/creating-a-subgraph/#the-graphql-schema).

### AssemblyScript-Mappings

Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben.
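Zur Veranschaulichung eine minimale Skizze eines Receipt-Handlers. Die Entität `Receipt` und ihre Felder sind hier frei gewählte Annahmen und müssten im GraphQL-Schema des jeweiligen Subgraphen definiert sein:

```typescript
import { near, BigInt } from '@graphprotocol/graph-ts'
import { Receipt } from '../generated/schema' // angenommene, selbst definierte Entität

export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
  const receipt = receiptWithOutcome.receipt
  // Die Quittungs-ID dient als eindeutiger Schlüssel der Entität
  const entity = new Receipt(receipt.id.toBase58())
  entity.signerId = receipt.signerId
  entity.receiverId = receipt.receiverId
  entity.blockHeight = BigInt.fromU64(receiptWithOutcome.block.header.height)
  entity.save()
}
```

Der Funktionsname (`handleReceipt`) muss dem im Manifest unter `receiptHandlers` angegebenen Handler-Namen entsprechen.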
Die NEAR-Indizierung führt NEAR-spezifische Datentypen in die [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) ein.

```typescript
class ExecutionOutcome {
  gasBurnt: u64,
  blockHash: Bytes,
  id: Bytes,
  logs: Array<string>,
  receiptIds: Array<Bytes>,
  tokensBurnt: BigInt,
  executorId: string,
}

class ActionReceipt {
  predecessorId: string,
  receiverId: string,
  id: CryptoHash,
  signerId: string,
  gasPrice: BigInt,
  outputDataReceivers: Array<DataReceiver>,
  inputDataIds: Array<CryptoHash>,
  actions: Array<ActionValue>,
}

class BlockHeader {
  height: u64,
  prevHeight: u64, // Immer null, wenn Version < V3
  epochId: Bytes,
  nextEpochId: Bytes,
  chunksIncluded: u64,
  hash: Bytes,
  prevHash: Bytes,
  timestampNanosec: u64,
  randomValue: Bytes,
  gasPrice: BigInt,
  totalSupply: BigInt,
  latestProtocolVersion: u32,
}

class ChunkHeader {
  gasUsed: u64,
  gasLimit: u64,
  shardId: u64,
  chunkHash: Bytes,
  prevBlockHash: Bytes,
  balanceBurnt: BigInt,
}

class Block {
  author: string,
  header: BlockHeader,
  chunks: Array<ChunkHeader>,
}

class ReceiptWithOutcome {
  outcome: ExecutionOutcome,
  receipt: ActionReceipt,
  block: Block,
}
```

Diese Typen werden an Block- und Quittungs-Handler übergeben:

- Block-Handler erhalten einen `Block`
- Quittungs-Handler erhalten einen `ReceiptWithOutcome`

Darüber hinaus steht NEAR-Subgraph-Entwicklern während der Mapping-Ausführung der Rest der [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) zur Verfügung.

Dazu gehört eine neue JSON-Parsing-Funktion: Logs werden auf NEAR häufig als stringifizierte JSONs ausgegeben. Eine neue Funktion `json.fromString(...)` ist als Teil der [JSON-API](/subgraphs/developing/creating/graph-ts/api/#json-api) verfügbar, damit Entwickler diese Logs einfach verarbeiten können.

## Bereitstellen eines NEAR-Subgraphen

Sobald Sie einen Subgraphen erstellt haben, ist es an der Zeit, ihn für die Indizierung auf Graph Node zu übertragen.
NEAR-Subgraphen können auf jedem Graph Node `>=v0.26.x` bereitgestellt werden (diese Version wurde noch nicht getaggt und freigegeben).

Subgraph Studio und der Upgrade-Indexierer auf The Graph Network unterstützen derzeit die Indizierung von NEAR Mainnet und Testnet in der Betaphase, mit den folgenden Netzwerknamen:

- `near-mainnet`
- `near-testnet`

Weitere Informationen zum Erstellen und Bereitstellen von Subgraphen in Subgraph Studio finden Sie [hier](/deploying/deploying-a-subgraph-to-studio/).

Als kurze Einführung: Der erste Schritt ist das „Erstellen“ Ihres Subgraphen; dies muss nur einmal gemacht werden. In Subgraph Studio können Sie dies über [Ihr Dashboard](https://thegraph.com/studio/) tun: „Einen Subgraphen erstellen“.

Sobald Ihr Subgraph erstellt wurde, können Sie ihn mit dem CLI-Befehl `graph deploy` bereitstellen:

```sh
$ graph create --node <graph-node-url> <subgraph-name> # erstellt einen Subgraphen auf einem lokalen Graph Node (bei Subgraph Studio wird dies über die Benutzeroberfläche erledigt)
$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # lädt die Build-Dateien auf einen angegebenen IPFS-Endpunkt hoch und stellt den Subgraphen dann anhand des IPFS-Hashes des Manifests auf einem angegebenen Graph Node bereit
```

Die Knotenkonfiguration hängt davon ab, wo der Subgraph bereitgestellt werden soll.

### Subgraph Studio

```sh
graph auth
graph deploy <subgraph-name>
```

### Lokaler Graph Node (basierend auf der Standardkonfiguration)

```sh
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
```

Sobald Ihr Subgraph bereitgestellt wurde, wird er von Graph Node indiziert.
Sie können den Fortschritt überprüfen, indem Sie den Subgraphen selbst abfragen:

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```

### Indizieren von NEAR mit einem lokalen Graph Node

Für den Betrieb eines Graph Node, der NEAR indiziert, gelten die folgenden betrieblichen Anforderungen:

- NEAR-Indexierer-Framework mit Firehose-Instrumentierung
- NEAR-Firehose-Komponente(n)
- Graph Node mit konfiguriertem Firehose-Endpunkt

Wir werden in Kürze weitere Informationen zum Betrieb der oben genannten Komponenten bereitstellen.

## Abfrage eines NEAR-Subgraphen

Der GraphQL-Endpunkt für NEAR-Subgraphen wird durch die Schemadefinition bestimmt, mit der vorhandenen API-Schnittstelle. Weitere Informationen finden Sie in der [GraphQL-API-Dokumentation](/subgraphs/querying/graphql-api/).

## Beispiele von Subgraphen

Hier sind einige Beispiel-Subgraphen als Referenz:

[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)

[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)

## FAQ

### Wie funktioniert die Beta-Version?

Die NEAR-Unterstützung befindet sich in der Beta-Phase, das heißt, es kann zu Änderungen an der API kommen, während wir weiter an der Verbesserung der Integration arbeiten. Bitte senden Sie eine E-Mail an near@thegraph.com, damit wir Sie bei der Erstellung von NEAR-Subgraphen unterstützen und Sie über die neuesten Entwicklungen auf dem Laufenden halten können!

### Kann ein Subgraph sowohl NEAR- als auch EVM-Ketten indizieren?

Nein, ein Subgraph kann nur Datenquellen von einer Kette bzw. einem Netzwerk unterstützen.

### Können Subgraphen auf spezifischere Auslöser reagieren?

Zurzeit werden nur Block- und Quittungsauslöser unterstützt. Wir untersuchen derzeit Auslöser für Funktionsaufrufe an ein bestimmtes Konto.
Wir sind auch an der Unterstützung von Ereignisauslösern interessiert, sobald NEAR über eine native Ereignisunterstützung verfügt.

### Werden Quittungs-Handler für Konten und deren Unterkonten ausgelöst?

Wenn ein `account` angegeben wird, wird nur der exakte Kontoname abgeglichen. Es ist möglich, Unterkonten abzugleichen, indem ein Feld `accounts` mit `suffixes` und `prefixes` angegeben wird. Das folgende Beispiel würde z. B. allen Unterkonten von `mintbase1.near` entsprechen:

```yaml
accounts:
  suffixes:
    - mintbase1.near
```

### Können NEAR-Subgraphen während des Mappings View-Aufrufe (view calls) auf NEAR-Konten ausführen?

Dies wird nicht unterstützt. Wir prüfen derzeit, ob diese Funktion für die Indizierung erforderlich ist.

### Kann ich Datenquellenvorlagen in meinem NEAR-Subgraphen verwenden?

Dies wird derzeit nicht unterstützt. Wir prüfen derzeit, ob diese Funktion für die Indizierung erforderlich ist.

### Ethereum-Subgraphen unterstützen „schwebende“ (pending) und „aktuelle“ (current) Versionen. Wie kann ich eine „schwebende“ Version eines NEAR-Subgraphen bereitstellen?

Die Pending-Funktionalität wird für NEAR-Subgraphen noch nicht unterstützt. In der Zwischenzeit können Sie eine neue Version in einem anderen „benannten“ Subgraphen bereitstellen. Sobald dieser mit dem Kettenkopf synchronisiert ist, können Sie ihn erneut in Ihrem primären „benannten“ Subgraphen bereitstellen, der dieselbe zugrunde liegende Bereitstellungs-ID verwendet, sodass der Haupt-Subgraph sofort synchronisiert ist.

### Meine Frage wurde nicht beantwortet. Wo kann ich weitere Hilfe bei der Erstellung von NEAR-Subgraphen erhalten?

Wenn es sich um eine allgemeine Frage zur Entwicklung von Subgraphen handelt, finden Sie viele weitere Informationen in der übrigen [Entwicklerdokumentation](/subgraphs/quick-start/).
Andernfalls treten Sie bitte dem [The Graph Protocol Discord](https://discord.gg/graphprotocol) bei und fragen Sie im Kanal #near, oder schreiben Sie eine E-Mail an near@thegraph.com.

## Referenzen

- [NEAR-Entwicklerdokumentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)

diff --git a/website/src/pages/de/subgraphs/guides/polymarket.mdx b/website/src/pages/de/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..548c823e58a6
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@

---
title: Abfrage von Blockchain-Daten von Polymarket mit Subgraphen auf The Graph
sidebarTitle: Abfrage von Polymarket-Daten
---

Fragen Sie die Onchain-Daten von Polymarket mit GraphQL über Subgraphen im The Graph Network ab. Subgraphen sind dezentrale APIs, die von The Graph angetrieben werden, einem Protokoll zur Indizierung & Abfrage von Daten aus Blockchains.

## Polymarket-Subgraph im Graph Explorer

Auf der [Subgraph-Seite von Polymarket im Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) finden Sie eine interaktive Abfrage-Spielwiese, auf der Sie jede Abfrage testen können.

![Polymarket Playground](/img/Polymarket-playground.png)

## Verwendung des visuellen Abfrage-Editors

Der visuelle Abfrage-Editor hilft Ihnen beim Testen von Beispielabfragen aus Ihrem Subgraphen.

Mit dem GraphiQL Explorer können Sie Ihre GraphQL-Abfragen zusammenstellen, indem Sie auf die gewünschten Felder klicken.
### Beispielabfrage: Die Top 5 der höchsten Auszahlungen auf Polymarket abrufen

```
{
  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
    payout
    redeemer
    id
    timestamp
  }
}
```

### Beispielausgabe

```
{
  "data": {
    "redemptions": [
      {
        "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b",
        "payout": "6274509531681",
        "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
        "timestamp": "1722929672"
      },
      {
        "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7",
        "payout": "2246253575996",
        "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
        "timestamp": "1726701528"
      },
      {
        "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26",
        "payout": "2135448291991",
        "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689",
        "timestamp": "1704932625"
      },
      {
        "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa",
        "payout": "1917395333835",
        "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
        "timestamp": "1726701528"
      },
      {
        "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30",
        "payout": "1862505580000",
        "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
        "timestamp": "1722929866"
      }
    ]
  }
}
```

## Polymarkets GraphQL-Schema

Das Schema für diesen Subgraphen ist [hier in Polymarkets GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql) definiert.

### Polymarket-Subgraph-Endpunkt

https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp

Der Polymarket-Subgraph-Endpunkt ist im [Graph Explorer](https://thegraph.com/explorer) verfügbar.

![Polymarket Endpunkt](/img/Polymarket-endpoint.png)

## Wie Sie Ihren eigenen API-Schlüssel erhalten

1.
Gehen Sie zu [https://thegraph.com/studio](https://thegraph.com/studio) und verbinden Sie Ihre Wallet.
2. Rufen Sie https://thegraph.com/studio/apikeys/ auf, um einen API-Schlüssel zu erstellen.

Sie können diesen API-Schlüssel für jeden Subgraphen im [Graph Explorer](https://thegraph.com/explorer) verwenden; er ist nicht nur auf Polymarket beschränkt.

100k Abfragen pro Monat sind kostenlos, was perfekt für Ihr Nebenprojekt ist!

## Zusätzliche Polymarket-Subgraphen

- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
- [Polymarket-Aktivität (Polygon)](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
- [Polymarket Profit & Verlust](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)

## Abfragen mit der API

Sie können eine beliebige GraphQL-Abfrage an den Polymarket-Endpunkt übergeben und Daten im JSON-Format erhalten.

Der folgende Beispielcode zeigt, wie Sie eine solche Abfrage aus Node.js senden.
### Beispielcode aus Node.js

```
const axios = require('axios');

const graphqlQuery = `{
  positions(first: 5) {
    condition
    outcomeIndex
  }
}`;

const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'

const graphQLRequest = {
  method: 'post',
  url: queryUrl,
  data: {
    query: graphqlQuery,
  },
};

// Senden der GraphQL-Abfrage
axios(graphQLRequest)
  .then((response) => {
    // Die Antwort hier behandeln
    const data = response.data.data
    console.log(data)
  })
  .catch((error) => {
    // Eventuelle Fehler behandeln
    console.error(error);
  });
```

### Zusätzliche Ressourcen

Weitere Informationen zur Abfrage von Daten aus Ihrem Subgraphen finden Sie [hier](/subgraphs/querying/introduction/).

Um alle Möglichkeiten zu erkunden, wie Sie Ihren Subgraphen optimieren & anpassen können, um eine bessere Leistung zu erzielen, lesen Sie mehr über das [Erstellen eines Subgraphen](/developing/creating-a-subgraph/).

diff --git a/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..dc8bea1a3a0c
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@

---
title: Wie man API-Schlüssel mit Next.js Server-Komponenten sichert
---

## Überblick

Wir können [Next.js Server-Komponenten](https://nextjs.org/docs/app/building-your-application/rendering/server-components) verwenden, um unseren API-Schlüssel vor der Offenlegung im Frontend unserer App zu schützen. Um die Sicherheit unseres API-Schlüssels weiter zu erhöhen, können wir außerdem [unseren API-Schlüssel auf bestimmte Subgraphen oder Domänen in Subgraph Studio beschränken](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
In diesem Kochbuch (Schritt-für-Schritt-Anleitung) wird gezeigt, wie man eine Next.js-Serverkomponente erstellt, die einen Subgraphen abfragt und gleichzeitig den API-Schlüssel vor dem Frontend verbirgt.

### Vorbehalte

- Next.js-Serverkomponenten schützen API-Schlüssel nicht vor Denial-of-Service-Angriffen.
- Die Gateways von The Graph Network verfügen über Strategien zur Erkennung und Eindämmung von Denial-of-Service-Angriffen, doch die Verwendung von Serverkomponenten kann diese Schutzmaßnahmen schwächen.
- Next.js-Serverkomponenten bergen Zentralisierungsrisiken, da der Server ausfallen kann.

### Warum es gebraucht wird

In einer Standard-React-Anwendung können API-Schlüssel, die im Frontend-Code enthalten sind, auf der Client-Seite offengelegt werden, was ein Sicherheitsrisiko darstellt. Obwohl `.env`-Dateien häufig verwendet werden, schützen sie die Schlüssel nicht vollständig, da der Code von React auf der Client-Seite ausgeführt wird und die API-Schlüssel in den Headern offengelegt werden. Next.js Server Components lösen dieses Problem, indem sie sensible Operationen serverseitig verarbeiten.

### Client-seitiges Rendering zur Abfrage eines Subgraphen verwenden

![Client-seitiges Rendering](/img/api-key-client-side-rendering.png)

### Voraussetzungen

- Ein API-Schlüssel von [Subgraph Studio](https://thegraph.com/studio)
- Grundkenntnisse in Next.js und React
- Ein bestehendes Next.js-Projekt, das den [App Router](https://nextjs.org/docs/app) verwendet

## Schritt-für-Schritt-Anleitung

### Schritt 1: Einrichten der Umgebungsvariablen

1. Erstellen Sie im Stammverzeichnis unseres Next.js-Projekts eine Datei `.env.local`.
2. Fügen Sie unseren API-Schlüssel hinzu: `API_KEY=<api-key-here>`.

### Schritt 2: Erstellen einer Server-Komponente

1. Erstellen Sie in unserem Verzeichnis `components` eine neue Datei `ServerComponent.js`.
2. Verwenden Sie den mitgelieferten Beispielcode, um die Serverkomponente einzurichten.
### Schritt 3: Implementierung der serverseitigen API-Anfrage

Fügen Sie in `ServerComponent.js` den folgenden Code ein:

```javascript
const API_KEY = process.env.API_KEY

export default async function ServerComponent() {
  const response = await fetch(
    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        query: /* GraphQL */ `
          {
            factories(first: 5) {
              id
              poolCount
              txCount
              totalVolumeUSD
            }
          }
        `,
      }),
    },
  )

  const responseData = await response.json()
  const data = responseData.data

  return (
    <div>
      <h1>Server Component</h1>
      {data ? (
        <ul>
          {data.factories.map((factory) => (
            <li key={factory.id}>
              <p>ID: {factory.id}</p>
              <p>Pool Count: {factory.poolCount}</p>
              <p>Transaction Count: {factory.txCount}</p>
              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
            </li>
          ))}
        </ul>
      ) : (
        <p>Loading data...</p>
      )}
    </div>
  )
}
```

### Schritt 4: Verwenden Sie die Server-Komponente

1. Importieren Sie `ServerComponent` in unserer Seitendatei (z. B. `pages/index.js`).
2. Rendern Sie die Komponente:

```javascript
import ServerComponent from './components/ServerComponent'

export default function Home() {
  return (
    <div>
      <ServerComponent />
    </div>
  )
}
```

### Schritt 5: Starten und Testen unserer Dapp

Starten Sie unsere Next.js-Anwendung mit `npm run dev`. Überprüfen Sie, ob die Serverkomponente Daten abruft, ohne den API-Schlüssel preiszugeben.

![Serverseitiges Rendering](/img/api-key-server-side-rendering.png)

### Schlussfolgerung

Durch die Verwendung von Next.js Server Components haben wir den API-Schlüssel effektiv vor der Client-Seite versteckt, was die Sicherheit unserer Anwendung erhöht. Diese Methode stellt sicher, dass sensible Vorgänge serverseitig behandelt werden, weit weg von potenziellen clientseitigen Schwachstellen. Abschließend sollten Sie unbedingt [andere Sicherheitsmaßnahmen für API-Schlüssel](/subgraphs/querying/managing-api-keys/) erkunden, um die Sicherheit Ihrer API-Schlüssel noch weiter zu erhöhen.

diff --git a/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..900ecb8e636d
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@

---
title: Aggregieren von Daten mit Hilfe von Subgraphen-Komposition
sidebarTitle: Erstellen eines zusammensetzbaren Subgraphen mit mehreren Subgraphen
---

Nutzen Sie die Komposition von Subgraphen, um die Entwicklungszeit zu verkürzen. Erstellen Sie einen Basis-Subgraphen mit den wichtigsten Daten und bauen Sie dann weitere Subgraphen darauf auf.

Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.

## Einführung

Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
### Vorteile der Komposition

Die Komposition von Subgraphen ist eine leistungsstarke Funktion für die Skalierung, die es Ihnen ermöglicht:

- vorhandene Daten wiederzuverwenden, zu mischen und zu kombinieren
- Entwicklung und Abfragen zu rationalisieren
- mehrere Datenquellen zu verwenden (bis zu fünf Subgraphen als Quelle)
- die Synchronisierungsgeschwindigkeit Ihres Subgraphen zu erhöhen
- Fehler zu behandeln und die Neusynchronisierung zu optimieren

## Architektur-Übersicht

Für dieses Beispiel werden zwei Subgraphen erstellt:

1. **Quell-Subgraph**: Verfolgt Ereignisdaten als Entitäten.
2. **Abhängiger Subgraph**: Verwendet den Quell-Subgraphen als Datenquelle.

Sie finden diese in den Verzeichnissen `source` und `dependent`.

- Der **Quell-Subgraph** ist ein grundlegender Ereignisverfolgungs-Subgraph, der Ereignisse aufzeichnet, die von relevanten Verträgen ausgegeben werden.
- Der **abhängige Subgraph** referenziert den Quell-Subgraphen als Datenquelle und verwendet die Entitäten aus der Quelle als Auslöser.

Während der Quell-Subgraph ein Standard-Subgraph ist, verwendet der abhängige Subgraph die Subgraph-Kompositionsfunktion.
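Zur Orientierung: Im Manifest des abhängigen Subgraphen wird der Quell-Subgraph als Datenquelle vom `kind: subgraph` referenziert, und Handler werden durch gespeicherte Entitäten statt durch Onchain-Ereignisse ausgelöst. Die folgende Skizze ist eine Annahme zur Veranschaulichung; Deployment-ID, Namen und Versionsangaben sind Platzhalter, und die genaue Syntax kann sich mit neuen graph-node-Versionen ändern:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # statt z. B. ethereum/contract
    name: QuellSubgraph
    network: mainnet
    source:
      address: 'QmQuellSubgraphDeploymentId' # Platzhalter: Deployment-ID des Quell-Subgraphen
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Block
      handlers:
        - handler: handleBlock # wird ausgelöst, wenn der Quell-Subgraph eine Block-Entität schreibt
          entity: Block
      file: ./src/mapping.ts
```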
## Voraussetzungen

### Source Subgraphs

- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
- Source Subgraphs cannot use grafting on top of existing entities
- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly

### Composed Subgraphs

- You can only compose up to a **maximum of 5 source Subgraphs**
- Composed Subgraphs can only use **data sources from the same chain**
- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
- Aggregated entities can be used in composition, but the entities composed on top of them cannot use aggregations directly
- Developers cannot compose an onchain data source with a Subgraph data source (i.e. you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph)

Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.

## Los geht’s

The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.

### Besonderheiten

- Um dieses Beispiel einfach zu halten, verwenden alle Source-Subgraphen nur Block-Handler. In einer realen Umgebung wird jedoch jeder Source-Subgraph Daten aus verschiedenen Smart Contracts verwenden.
+- Die folgenden Beispiele zeigen, wie Sie das Schema eines anderen Subgraphen importieren und erweitern können, um seine Funktionalität zu verbessern. +- Jeder Source-Subgraph wird für eine bestimmte Entität optimiert. +- Alle aufgeführten Befehle installieren die erforderlichen Abhängigkeiten, generieren Code auf der Grundlage des GraphQL-Schemas, erstellen den Subgraphen und stellen ihn auf Ihrer lokalen Graph Node-Instanz bereit. + +### Schritt 1. Blockzeit-Source-Subgraph bereitstellen + +Dieser erste Source-Subgraph berechnet die Blockzeit für jeden Block. + +- Es importiert Schemata aus anderen Subgraphen und fügt eine `block`-Entität mit einem `timestamp`-Feld hinzu, das die Zeit angibt, zu der jeder Block abgebaut wurde. +- Er hört auf zeitbezogene Blockchain-Ereignisse (z. B. Blockzeitstempel) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren. + +Um diesen Subgraphen lokal einzusetzen, führen Sie die folgenden Befehle aus: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Schritt 2. Block Cost Source-Subgraph bereitstellen + +Dieser zweite Source-Subgraph indiziert die Kosten für jeden Block. + +#### Schlüsselfunktionen + +- Es importiert Schemata aus anderen Subgraphen und fügt eine `block`-Entität mit kostenbezogenen Feldern hinzu. +- Er hört auf Blockchain-Ereignisse im Zusammenhang mit Kosten (z. B. Gasgebühren, Transaktionskosten) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren. + +Um diesen Subgraphen lokal zu verteilen, führen Sie die gleichen Befehle wie oben aus. + +### Schritt 3. Blockgröße im Source-Subgraphen definieren + +Dieser dritte Source-Subgraph indiziert die Größe der einzelnen Blöcke. Um diesen Subgraphen lokal einzusetzen, führen Sie die gleichen Befehle wie oben aus. 
#### Schlüsselfunktionen

- Er importiert bestehende Schemata von anderen Subgraphen und fügt eine `block`-Entität mit einem `size`-Feld hinzu, das die Größe eines jeden Blocks angibt.
- Er hört auf Blockchain-Ereignisse in Bezug auf Blockgrößen (z. B. Speicher oder Volumen) und verarbeitet diese Daten, um die Entitäten des Subgraphen entsprechend zu aktualisieren.

### Schritt 4. Zum Block-Statistik-Subgraphen kombinieren

This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.

> Note:
>
> - Jede Änderung an einem Source-Subgraphen erzeugt wahrscheinlich eine neue Bereitstellungs-ID.
> - Stellen Sie sicher, dass Sie die Bereitstellungs-ID in der Datenquellenadresse des Subgraph-Manifests aktualisieren, um von den neuesten Änderungen zu profitieren.
> - Alle Source-Subgraphen sollten bereitgestellt werden, bevor der zusammengesetzte Subgraph bereitgestellt wird.

#### Schlüsselfunktionen

- Er bietet ein konsolidiertes Datenmodell, das alle relevanten Blockmetriken umfasst.
- It combines data from 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.

## Wichtigste Erkenntnisse

- Dieses leistungsstarke Werkzeug skaliert die Entwicklung von Subgraphen und ermöglicht es Ihnen, mehrere Subgraphen zu kombinieren.
- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
- Diese Funktion ermöglicht Skalierbarkeit und vereinfacht sowohl die Entwicklung als auch die Wartung.

## Zusätzliche Ressourcen

- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
- Um Ihrem Subgraphen erweiterte Funktionen hinzuzufügen, lesen Sie [Erweiterte Subgraph-Funktionen](/developing/creating/advanced/).
+- Um mehr über Aggregationen zu erfahren, lesen Sie [Zeitreihen und Aggregationen](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..3f5549637b9a
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Schnelles und einfaches Debuggen von Subgraphen mit Forks
+---
+
+Wie bei vielen Systemen, die große Datenmengen verarbeiten, können die Indexierer (Graph Nodes) von The Graph einige Zeit benötigen, um Ihren Subgraphen mit der Ziel-Blockchain zu synchronisieren. Die Diskrepanz zwischen schnellen Änderungen zum Zweck der Fehlersuche und langen Wartezeiten für die Indizierung ist äußerst kontraproduktiv und wir sind uns dessen bewusst. Aus diesem Grund führen wir das **Subgraph forking** ein, das von [LimeChain](https://limechain.tech/) entwickelt wurde, und in diesem Artikel zeige ich Ihnen, wie diese Funktion genutzt werden kann, um das Debuggen von Subgraphen erheblich zu beschleunigen!
+
+## Ok, was ist es?
+
+**Subgraph forking** ist der Prozess, bei dem Entitäten aus dem Speicher eines _anderen_ Subgraphen (normalerweise eines entfernten) geholt werden.
+
+Im Zusammenhang mit der Fehlersuche ermöglicht **Subgraph forking** die Fehlersuche in einem fehlgeschlagenen Subgraphen im Block _X_, ohne dass Sie auf die Synchronisierung mit Block _X_ warten müssen.
+
+## Was? Wie?
+
+Wenn Sie einen Subgraphen an einen entfernten Graph Node zur Indizierung bereitstellen und dieser bei Block _X_ ausfällt, ist die gute Nachricht, dass der Graph Node weiterhin GraphQL-Abfragen mit seinem Speicher bedient, der mit Block _X_ synchronisiert ist. Das ist großartig! Das bedeutet, dass wir diesen „aktuellen“ Speicher nutzen können, um die Fehler zu beheben, die bei der Indizierung von Block _X_ auftreten.
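Dieser mit Block _X_ synchronisierte Speicher lässt sich auch direkt per GraphQL inspizieren. Eine kleine Skizze (die Entität `gravatars` und der Endpunkt sind hier nur Annahmen zur Veranschaulichung; das `block`-Argument für Time-Travel-Abfragen ist eine Standardfunktion von The Graph):

```typescript
// Skizze: Entitäten im Speicher eines entfernten Graph Node zu einem festen Block abfragen.
// Entitätsname und Endpunkt sind Annahmen - passen Sie beides an Ihren Subgraphen an.

// Das `block`-Argument friert die Abfrage auf einen bestimmten Block ein,
// sodass sich der Zustand der Entitäten kurz vor dem Fehler inspizieren lässt.
const query = /* GraphQL */ `
  query EntitiesAtBlock($block: Int!) {
    gravatars(block: { number: $block }) {
      id
      displayName
    }
  }
`

// Baut den JSON-Body, wie ihn jeder GraphQL-Endpunkt per HTTP POST erwartet.
function buildRequestBody(query: string, variables: Record<string, unknown>): string {
  return JSON.stringify({ query, variables })
}

// Absenden (hier nur angedeutet; der Endpunkt ist ein Platzhalter):
// await fetch('https://<graph-node>/subgraphs/id/<subgraph-id>', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildRequestBody(query, { block: 6190343 }),
// })
```

So lässt sich prüfen, welche Daten der entfernte Graph Node bis zum problematischen Block tatsächlich indiziert hat.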
+
+Kurz gesagt, wir _forken den fehlgeschlagenen Subgraphen_ von einem entfernten Graph Node, der garantiert den Subgraphen bis zum Block _X_ indiziert hat, um dem lokal eingesetzten Subgraphen, der im Block _X_ debuggt wird, eine aktuelle Sicht auf den Indizierungsstatus zu geben.
+
+## Bitte, zeigen Sie mir einen Code!
+
+Um uns auf das Debuggen von Subgraphen zu konzentrieren, halten wir die Dinge einfach und führen den [Beispiel-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) aus, der den Ethereum Gravity Smart Contract indiziert.
+
+Hier sind die für die Indizierung von `Gravatar` definierten Handler, die keinerlei Fehler aufweisen:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, wie schade, wenn ich meinen perfekt aussehenden Subgraphen in [Subgraph Studio](https://thegraph.com/studio/) einsetze, schlägt er mit der Fehlermeldung _„Gravatar not found!“_ fehl.
+
+Der übliche Weg, eine Lösung zu finden, ist:
+
+1. Nehmen Sie eine Änderung in der Mappingquelle vor, von der Sie glauben, dass sie das Problem lösen wird (während ich weiß, dass sie es nicht tut).
+2. Stellen Sie den Subgraphen erneut in [Subgraph Studio](https://thegraph.com/studio/) (oder einem anderen entfernten Graph-Knoten) bereit.
+3. Warten Sie, bis es synchronisiert wird.
+4.
Wenn es wieder bricht, gehen Sie zurück zu 1, sonst: Hurra!
+
+Das ähnelt in der Tat stark einem normalen Debug-Prozess, aber es gibt einen Schritt, der den Prozess schrecklich verlangsamt: _3. Warten Sie auf die Synchronisierung._
+
+Mit **Subgraph forking** können wir diesen Schritt im Wesentlichen eliminieren. So sieht es aus:
+
+0. Starten Sie einen lokalen Graph Node mit gesetzter **_passender fork-base_**.
+1. Nehmen Sie eine Änderung in der Mappingquelle vor, von der Sie glauben, dass sie das Problem lösen wird.
+2. Stellen Sie auf dem lokalen Graph Node bereit, **_forken Sie den fehlgeschlagenen Subgraphen_** und **_starten Sie vom problematischen Block_**.
+3. Wenn es wieder bricht, gehen Sie zurück zu 1, sonst: Hurra!
+
+Jetzt haben Sie vielleicht 2 Fragen:
+
+1. fork-base, was???
+2. Forking von wem?!
+
+Und ich antworte:
+
+1. `fork-base` ist die „Basis“-URL, so dass, wenn die _subgraph id_ angehängt wird, die resultierende URL (`<fork-base>/<subgraph-id>`) ein gültiger GraphQL-Endpunkt für den Speicher des Subgraphen ist.
+2. Forken ist einfach, kein Grund zu schwitzen:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Vergessen Sie auch nicht, das Feld `dataSources.source.startBlock` im Subgraph-Manifest auf die Nummer des problematischen Blocks zu setzen, damit Sie die Indizierung unnötiger Blöcke überspringen und die Vorteile des Forkings nutzen können!
+
+Also, ich mache Folgendes:
+
+1. Ich erstelle einen lokalen Graph Node ([hier wird erklärt, wie man es macht](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) mit der Option `fork-base` gesetzt auf `https://api.thegraph.com/subgraphs/id/`, da ich einen Subgraphen, den fehlerhaften, den ich zuvor eingesetzt habe, von [Subgraph Studio](https://thegraph.com/studio/) forken werde.
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. Nach sorgfältiger Prüfung stelle ich fest, dass es eine Unstimmigkeit in der `id`-Darstellung gibt, die bei der Indizierung von `Gravatar` in meinen beiden Handlern verwendet wird. Während `handleNewGravatar` sie in eine Hexadezimaldarstellung umwandelt (`event.params.id.toHex()`), verwendet `handleUpdatedGravatar` eine int32-Darstellung (`event.params.id.toI32()`), was dazu führt, dass `handleUpdatedGravatar` in Panik gerät mit „Gravatar not found!“. Ich lasse sie beide die `id` in eine Hexadezimalzahl konvertieren.
+3. Nachdem ich die Änderungen vorgenommen habe, stelle ich meinen Subgraphen auf dem lokalen Graph Node bereit, **_forke dabei den fehlgeschlagenen Subgraphen_** und setze `dataSources.source.startBlock` auf `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. Ich schaue mir die vom lokalen Graph Node erstellten Protokolle an, und - hurra - alles scheint zu funktionieren.
+5. Ich stelle meinen nun fehlerfreien Subgraphen auf einem entfernten Graph Node bereit und lebe glücklich bis ans Ende meiner Tage! (allerdings ohne Kartoffeln)
diff --git a/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..17c44f701811
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Sicherer Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) ist ein Codegenerierungswerkzeug, das eine Reihe von Hilfsfunktionen aus dem GraphQL-Schema eines Projekts generiert.
Es stellt sicher, dass alle Interaktionen mit Entitäten in Ihrem Subgraphen vollkommen sicher und konsistent sind.
+
+## Warum sollte man Subgraph Uncrashable integrieren?
+
+- **Kontinuierliche Betriebszeit**. Falsch behandelte Entitäten können zum Absturz von Subgraphen führen, was für Projekte, die von The Graph abhängig sind, störend sein kann. Richten Sie Hilfsfunktionen ein, um Ihre Subgraphen „absturzsicher“ zu machen und die Geschäftskontinuität zu gewährleisten.
+
+- **Vollständig sicher**. Häufig auftretende Probleme bei der Entwicklung von Subgraphen sind das Laden von undefinierten Entitäten, das Nichtsetzen oder Nichtinitialisieren aller Werte von Entitäten und Race Conditions beim Laden und Speichern von Entitäten. Stellen Sie sicher, dass alle Interaktionen mit Entitäten vollständig atomar sind.
+
+- **Benutzerdefinierbar**. Legen Sie Standardwerte fest und konfigurieren Sie den Grad der Sicherheitsprüfungen, der Ihren individuellen Projektanforderungen entspricht. Es werden Warnprotokolle aufgezeichnet, die anzeigen, wo eine Verletzung der Subgraph-Logik vorliegt, um das Problem zu beheben und die Datengenauigkeit zu gewährleisten.
+
+**Schlüsselfunktionen**
+
+- Das Code-Generierungstool unterstützt **alle** Subgraphentypen und ist für Benutzer konfigurierbar, um sinnvolle Standardwerte festzulegen. Die Codegenerierung verwendet diese Konfiguration, um Hilfsfunktionen zu generieren, die den Vorgaben des Benutzers entsprechen.
+
+- Das Framework enthält auch eine Möglichkeit (über die Konfigurationsdatei), benutzerdefinierte, aber sichere Setter-Funktionen für Gruppen von Entitätsvariablen zu erstellen. Auf diese Weise ist es für den Benutzer unmöglich, eine veraltete Graph-Entität zu laden/zu verwenden, und es ist auch unmöglich, zu vergessen, eine Variable zu speichern oder zu setzen, die von der Funktion benötigt wird.
+
+- Warnmeldungen werden als Protokolle aufgezeichnet, die anzeigen, wo ein Verstoß gegen die Subgraph-Logik vorliegt, um das Problem zu beheben und die Datengenauigkeit zu gewährleisten.
+
+Subgraph Uncrashable kann als optionales Flag mit dem Graph-CLI-Codegen-Befehl ausgeführt werden.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Besuchen Sie die [Subgraph-Dokumentation zur Uncrash-Funktion](https://float-capital.github.io/float-subgraph-uncrashable/docs/) oder sehen Sie sich dieses [Video-Tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) an, um mehr zu erfahren und mit der Entwicklung sicherer Subgraphen zu beginnen.
diff --git a/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..680d30f2f4b6
--- /dev/null
+++ b/website/src/pages/de/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Übertragung auf The Graph
+---
+
+Aktualisieren Sie schnell Ihre Subgraphen von jeder Plattform auf [das dezentrale Netzwerk von The Graph](https://thegraph.com/networks/).
+
+## Vorteile der Umstellung auf The Graph
+
+- Verwenden Sie denselben Subgraphen, den Ihre Anwendungen bereits verwenden, mit einer Zero-Downtime-Migration.
+- Erhöhen Sie die Zuverlässigkeit durch ein globales Netzwerk, das von über 100 Indexierern unterstützt wird.
+- Erhalten Sie blitzschnellen Support für Subgraphen rund um die Uhr, mit einem technischen Team, das auf Abruf bereitsteht.
+
+## Aktualisieren Sie Ihren Subgraphen in 3 einfachen Schritten auf The Graph
+
+1. [Richten Sie Ihre Studioumgebung ein](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Stellen Sie Ihren Subgraphen im Studio bereit](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3.
[Veröffentlichen Sie im The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1. Einrichten der Studioumgebung
+
+### Erstellen Sie einen Subgraphen in Subgraph Studio
+
+- Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und verbinden Sie Ihre Wallet.
+- Klicken Sie auf „Einen Subgraphen erstellen“. Es wird empfohlen, den Subgraphen in Title Case zu benennen: „Subgraph Name Chain Name“.
+
+> Hinweis: Nach der Veröffentlichung ist der Name des Subgraphen bearbeitbar, erfordert aber jedes Mal eine Onchain-Aktion, also benennen Sie ihn richtig.
+
+### Installieren Sie die Graph CLI
+
+Sie müssen [Node.js](https://nodejs.org/) und einen Paketmanager Ihrer Wahl (`npm` oder `pnpm`) installiert haben, um die Graph CLI zu verwenden. Prüfen Sie, ob die [aktuellste](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI-Version installiert ist.
+
+Führen Sie auf Ihrem lokalen Computer den folgenden Befehl aus:
+
+Verwendung von [npm](https://www.npmjs.com/):
+
+```sh
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+Verwenden Sie den folgenden Befehl, um einen Subgraphen in Studio über die CLI zu erstellen:
+
+```sh
+graph init --product subgraph-studio
+```
+
+### Authentifizieren Sie Ihren Subgraphen
+
+Verwenden Sie in der Graph CLI den Befehl `auth` aus Subgraph Studio:
+
+```sh
+graph auth
+```
+
+## 2. Bereitstellung des Subgraphen in Studio
+
+Wenn Sie Ihren Quellcode haben, können Sie ihn einfach in Studio bereitstellen. Wenn Sie ihn nicht haben, finden Sie hier eine schnelle Möglichkeit, Ihren Subgraphen bereitzustellen.
+
+Führen Sie in der Graph CLI den folgenden Befehl aus:
+
+```sh
+graph deploy --ipfs-hash <ipfs-hash>
+```
+
+> **Hinweis:** Jeder Subgraph hat einen IPFS-Hash (Deployment ID), der wie folgt aussieht: „Qmasdfad...“. Zur Bereitstellung verwenden Sie einfach diesen **IPFS-Hash**. Sie werden aufgefordert, eine Version einzugeben (z.
B. v0.0.1).
+
+## 3. Veröffentlichen Ihres Subgraphen im The Graph Network
+
+![Schaltfläche „Veröffentlichen“](/img/publish-sub-transfer.png)
+
+### Fragen Sie Ihren Subgraphen ab
+
+> Um etwa 3 Indexierer für die Abfrage Ihres Subgraphen zu gewinnen, wird empfohlen, mindestens 3.000 GRT zu kuratieren. Um mehr über das Kuratieren zu erfahren, lesen Sie [Kuratieren](/resources/roles/curating/) auf The Graph.
+
+Sie können die [Abfrage](/subgraphs/querying/introduction/) eines beliebigen Subgraphen starten, indem Sie eine GraphQL-Abfrage an den Abfrage-URL-Endpunkt des Subgraphen senden, der sich am oberen Rand seiner Explorer-Seite in Subgraph Studio befindet.
+
+#### Beispiel
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) von Messari:
+
+![Abfrage-URL](/img/cryptopunks-screenshot-transfer.png)
+
+Die Abfrage-URL für diesen Subgraphen lautet:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/<api-key>/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Jetzt müssen Sie nur noch **Ihren eigenen API-Schlüssel** eingeben, um GraphQL-Abfragen an diesen Endpunkt zu senden.
+
+### Erhalten Sie Ihren eigenen API-Schlüssel
+
+Sie können API-Schlüssel in Subgraph Studio unter dem Menüpunkt „API-Schlüssel“ oben auf der Seite erstellen:
+
+![API-Schlüssel](/img/Api-keys-screenshot.png)
+
+### Überwachen Sie den Subgraph-Status
+
+Nach dem Upgrade können Sie Ihre Subgraphen in [Subgraph Studio](https://thegraph.com/studio/) verwalten und alle Subgraphen im [The Graph Explorer](https://thegraph.com/networks/) erkunden.
+
+### Zusätzliche Ressourcen
+
+- Wie Sie schnell einen neuen Subgraphen erstellen und veröffentlichen können, erfahren Sie im [Schnellstart](/subgraphs/quick-start/).
+- Um alle Möglichkeiten zu erkunden, wie Sie Ihren Subgraphen optimieren und anpassen können, um eine bessere Leistung zu erzielen, lesen Sie mehr über [Erstellen eines Subgraphen hier](/developing/creating-a-subgraph/). diff --git a/website/src/pages/de/subgraphs/querying/_meta-titles.json b/website/src/pages/de/subgraphs/querying/_meta-titles.json index a30daaefc9d0..1f70ade23096 100644 --- a/website/src/pages/de/subgraphs/querying/_meta-titles.json +++ b/website/src/pages/de/subgraphs/querying/_meta-titles.json @@ -1,3 +1,3 @@ { - "graph-client": "Graph Client" + "graph-client": "Graph-Client" } diff --git a/website/src/pages/de/subgraphs/querying/best-practices.mdx b/website/src/pages/de/subgraphs/querying/best-practices.mdx index ff5f381e2993..54c6be8f3151 100644 --- a/website/src/pages/de/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/de/subgraphs/querying/best-practices.mdx @@ -1,20 +1,20 @@ --- -title: Querying Best Practices +title: Best Practices für Abfragen --- -The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph bietet eine dezentrale Möglichkeit zur Abfrage von Daten aus Blockchains. Die Daten werden über eine GraphQL-API zugänglich gemacht, was die Abfrage mit der GraphQL-Sprache erleichtert. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Lernen Sie die wesentlichen GraphQL-Sprachregeln und Best Practices, um Ihren Subgraph zu optimieren. --- -## Querying a GraphQL API +## Abfrage einer GraphQL-API -### The Anatomy of a GraphQL Query +### Die Anatomie einer GraphQL-Abfrage -Unlike REST API, a GraphQL API is built upon a Schema that defines which queries can be performed. +Im Gegensatz zur REST-API basiert eine GraphQL-API auf einem Schema, das definiert, welche Abfragen durchgeführt werden können. 
-For example, a query to get a token using the `token` query will look as follows:
+Eine Abfrage zum Abrufen eines Tokens mit der Abfrage `token` sieht zum Beispiel wie folgt aus:

```graphql
query GetToken($id: ID!) {
@@ -25,7 +25,7 @@ query GetToken($id: ID!) {
}
```

-which will return the following predictable JSON response (_when passing the proper `$id` variable value_):
+die die folgende vorhersehbare JSON-Antwort zurückgibt (_bei Übergabe des richtigen Variablenwerts `$id`_):

```json
{
@@ -36,47 +36,47 @@ which will return the following predictable JSON response (_when passing the pro
}
```

-GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/).
+GraphQL-Abfragen verwenden die GraphQL-Sprache, die nach [einer Spezifikation](https://spec.graphql.org/) definiert ist.

-The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders):
+Die obige `GetToken`-Abfrage besteht aus mehreren Sprachteilen (im Folgenden durch `[...]` Platzhalter ersetzt):

```graphql
query [operationName]([variableName]: [variableType]) {
  [queryName]([argumentName]: [variableName]) {
-    # "{ ... }" express a Selection-Set, we are querying fields from `queryName`.
+    # „{ ... }“ drückt ein Selection-Set aus; wir fragen Felder von `queryName` ab.
    [field]
    [field]
  }
}
```

-## Rules for Writing GraphQL Queries
+## Regeln für das Schreiben von GraphQL-Abfragen

-- Each `queryName` must only be used once per operation.
-- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`)
-- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/).
-- Any variable assigned to an argument must match its type.
-- In a given list of variables, each of them must be unique.
-- All defined variables must be used.
+- Jeder `queryName` darf nur einmal pro Vorgang verwendet werden.
+- Jedes `field` darf nur einmal in einer Auswahl verwendet werden (wir können `id` nicht zweimal unter `token` abfragen)
+- Einige `field`s oder Abfragen (wie `tokens`) geben komplexe Typen zurück, die eine Auswahl von Unterfeldern erfordern. Wird eine Auswahl nicht bereitgestellt, wenn sie erwartet wird (oder eine Auswahl bereitgestellt, wenn sie nicht erwartet wird - zum Beispiel bei `id`), wird ein Fehler ausgelöst. Um einen Feldtyp zu kennen, schauen Sie bitte im [Graph Explorer](/subgraphs/explorer/) nach.
+- Jede Variable, die einem Argument zugewiesen wird, muss ihrem Typ entsprechen.
+- In einer gegebenen Liste von Variablen muss jede von ihnen eindeutig sein.
+- Alle definierten Variablen müssen verwendet werden.

-> Note: Failing to follow these rules will result in an error from The Graph API.
+> Hinweis: Die Nichtbeachtung dieser Regeln führt zu einer Fehlermeldung von The Graph API.

For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/).

-### Sending a query to a GraphQL API
+### Senden einer Abfrage an eine GraphQL API

-GraphQL is a language and set of conventions that transport over HTTP.
+GraphQL ist eine Sprache und ein Satz von Konventionen, die über HTTP transportiert werden.

-It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`).
+Das bedeutet, dass Sie eine GraphQL-API mit dem Standard-`fetch` abfragen können (nativ oder über `@whatwg-node/fetch` oder `isomorphic-fetch`).
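Eine minimale Skizze, wie eine solche Abfrage mit nativem `fetch` (Node 18+ oder Browser) aussehen kann; die Endpunkt-URL im Verwendungsbeispiel ist nur ein Platzhalter:

```typescript
// Minimale GraphQL-Abfrage über HTTP POST mit Standard-`fetch`.
// Jeder GraphQL-Endpunkt erwartet einen JSON-Body mit `query` und `variables`.
async function queryGraphQL(
  endpoint: string,
  query: string,
  variables: Record<string, unknown> = {},
): Promise<any> {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data
}

// Verwendung (Platzhalter-Endpunkt):
// const data = await queryGraphQL(
//   'https://example.com/subgraphs/id/<subgraph-id>',
//   'query GetToken($id: ID!) { token(id: $id) { id owner } }',
//   { id: '1' },
// )
```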
-However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +Wie in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) erwähnt, wird jedoch empfohlen, den `graph-client` zu verwenden, der die folgenden einzigartigen Funktionen unterstützt: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) -- Fully typed result +- Kettenübergreifende Behandlung von Subgraphen: Abfragen von mehreren Subgraphen in einer einzigen Abfrage +- [Automatische Blockverfolgung](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Automatische Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Vollständig typisiertes Ergebnis -Here's how to query The Graph with `graph-client`: +So wird The Graph mit `graph-client` abgefragt: ```tsx import { execute } from '../.graphclient' @@ -93,45 +93,43 @@ const variables = { id: '1' } async function main() { const result = await execute(query, variables) - // `result` is fully typed! - console.log(result) + // `result` ist vollständig typisiert! + console.log(result) } main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +Weitere GraphQL-Client-Alternativen werden in [„Abfragen von einer Anwendung“](/subgraphs/querying/from-an-application/) behandelt. 
---

-## Best Practices
+## Bewährte Praktiken

-### Always write static queries
+### Schreiben Sie immer statische Abfragen

-A common (bad) practice is to dynamically build query strings as follows:
+Eine gängige (schlechte) Praxis ist es, Abfragezeichenfolgen dynamisch wie folgt zu erstellen:

```tsx
const id = params.id
const fields = ['id', 'owner']
const query = `
query GetToken {
-  token(id: ${id}) {
-    ${fields.join('\n')}
+  token(id: ${id}) {
+    ${fields.join('\n')}
  }
}
`
-
-// Execute query...
```

-While the above snippet produces a valid GraphQL query, **it has many drawbacks**:
+Auch wenn das obige Snippet eine gültige GraphQL-Abfrage erzeugt, **hat es viele Nachteile**:

-- it makes it **harder to understand** the query as a whole
-- developers are **responsible for safely sanitizing the string interpolation**
-- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side**
-- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools)
+- es macht es **schwieriger**, die Abfrage als Ganzes zu verstehen
+- Die Entwickler sind **für die sichere Bereinigung der String-Interpolation verantwortlich**.
+- werden die Werte der Variablen nicht als Teil der Anfrageparameter gesendet, **wird eine mögliche Zwischenspeicherung auf der Serverseite verhindert**
+- es **verhindert, dass Werkzeuge die Abfrage statisch analysieren** (z. B.
Linter oder Werkzeuge zur Typgenerierung) -For this reason, it is recommended to always write queries as static strings: +Aus diesem Grund ist es empfehlenswert, Abfragen immer als statische Strings zu schreiben: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -153,18 +151,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +Dies bringt **viele Vorteile**: -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- **Einfach zu lesende und zu pflegende** Abfragen +- Der GraphQL **Server kümmert sich um die Bereinigung von Variablen** +- **Variablen können auf Server-Ebene zwischengespeichert werden**. +- **Abfragen können von Tools statisch analysiert werden** (mehr dazu in den folgenden Abschnitten) -### How to include fields conditionally in static queries +### Wie man Felder bedingt in statische Abfragen einbezieht -You might want to include the `owner` field only on a particular condition. +Möglicherweise möchten Sie das Feld `owner` nur unter einer bestimmten Bedingung einbeziehen. -For this, you can leverage the `@include(if:...)` directive as follows: +Dazu können Sie die Richtlinie `@include(if:...)` wie folgt nutzen: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -187,41 +185,41 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> Anmerkung: Die gegenteilige Direktive ist `@skip(if: ...)`. -### Ask for what you want +### Verlangen Sie, was Sie wollen -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL wurde durch den Slogan „Frag nach dem, was du willst“ bekannt. -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. 
+Aus diesem Grund gibt es in GraphQL keine Möglichkeit, alle verfügbaren Felder zu erhalten, ohne sie einzeln auflisten zu müssen.

-- When querying GraphQL APIs, always think of querying only the fields that will be actually used.
-- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
+- Denken Sie bei der Abfrage von GraphQL-APIs immer daran, nur die Felder abzufragen, die tatsächlich verwendet werden.
+- Stellen Sie sicher, dass Abfragen nur so viele Entitäten abrufen, wie Sie tatsächlich benötigen. Standardmäßig rufen Abfragen 100 Entitäten in einer Sammlung ab, was in der Regel viel mehr ist, als tatsächlich verwendet wird, z. B. für die Anzeige für den Benutzer. Dies gilt nicht nur für die Top-Level-Sammlungen in einer Abfrage, sondern vor allem auch für verschachtelte Sammlungen von Entitäten.

-For example, in the following query:
+Zum Beispiel in der folgenden Abfrage:

```graphql
query listTokens {
  tokens {
-    # will fetch up to 100 tokens
+    # ruft bis zu 100 Tokens ab
    id
-    transactions {
-      # will fetch up to 100 transactions
+    transactions {
+      # ruft bis zu 100 Transaktionen ab
      id
    }
  }
}
```

-The response could contain 100 transactions for each of the 100 tokens.
+Die Antwort könnte 100 Transaktionen für jedes der 100 Token enthalten.

-If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field.
+Wenn die Anwendung nur 10 Transaktionen benötigt, sollte die Abfrage explizit `first: 10` für das Feld „transactions“ festlegen.

-### Use a single query to request multiple records
+### Verwenden Sie eine einzige Abfrage, um mehrere Datensätze abzufragen

-By default, subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+Standardmäßig haben Subgraphen eine singuläre Entität für einen Datensatz. Für mehrere Datensätze verwenden Sie die Plural-Entitäten und den Filter: `where: {id_in:[X,Y,Z]}` oder `where: {volume_gt:100000}`

-Example of inefficient querying:
+Beispiel für eine ineffiziente Abfrage:

```graphql
query SingleRecord {
@@ -238,7 +236,7 @@ query SingleRecord {
}
```

-Example of optimized querying:
+Beispiel für eine optimierte Abfrage:

```graphql
query ManyRecords {
@@ -249,9 +247,9 @@ query ManyRecords {
}
```

-### Combine multiple queries in a single request
+### Mehrere Abfragen in einer einzigen Anfrage kombinieren

-Your application might require querying multiple types of data as follows:
+Für Ihre Anwendung kann es erforderlich sein, mehrere Datentypen wie folgt abzufragen:

```graphql
import { execute } from "your-favorite-graphql-client"
@@ -281,9 +279,9 @@ const [tokens, counters] = Promise.all(
)
```

-While this implementation is totally valid, it will require two round trips with the GraphQL API.
+Diese Implementierung ist zwar völlig zulässig, erfordert aber zwei Umläufe zur GraphQL-API.

-Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows:
+Glücklicherweise ist es auch möglich, mehrere Abfragen in der gleichen GraphQL-Anfrage wie folgt zu senden:

```graphql
import { execute } from "your-favorite-graphql-client"
@@ -300,17 +298,16 @@ query GetTokensandCounters {
}
}
`
-
-const { result: { tokens, counters } } = execute(query)
+const { result: { tokens, counters } } = execute(query)
```
+Dieser Ansatz **verbessert die Gesamtleistung**, indem er die im Netz verbrachte Zeit reduziert (erspart Ihnen einen Hin- und Rückweg zur API) und bietet eine **präzisere Implementierung**. -### Leverage GraphQL Fragments +### Nutzung von GraphQL-Fragmenten -A helpful feature to write GraphQL queries is GraphQL Fragment. +Eine hilfreiche Funktion zum Schreiben von GraphQL-Abfragen ist GraphQL Fragment. -Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): +Wenn Sie sich die folgende Abfrage ansehen, werden Sie feststellen, dass einige Felder über mehrere Auswahlsätze hinweg wiederholt werden (`{ ... }`): ```graphql query { @@ -330,12 +327,12 @@ query { } ``` -Such repeated fields (`id`, `active`, `status`) bring many issues: +Solche wiederholten Felder (`id`, `active`, `status`) bringen viele Probleme mit sich: -- More extensive queries become harder to read. -- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- Umfangreichere Abfragen werden schwieriger zu lesen. +- Bei der Verwendung von Tools, die TypeScript-Typen auf Basis von Abfragen generieren (_mehr dazu im letzten Abschnitt_), führen `newDelegate` und `oldDelegate` zu zwei unterschiedlichen Inline-Schnittstellen. 
-A refactored version of the query would be the following:
+Eine überarbeitete Version der Abfrage würde wie folgt aussehen:

```graphql
query {
@@ -350,45 +347,46 @@ query {
}
}

-# we define a fragment (subtype) on Transcoder
-# to factorize repeated fields in the query
-fragment DelegateItem on Transcoder {
+# wir definieren ein Fragment (Subtyp) auf Transcoder
+# um wiederholte Felder in der Abfrage zu faktorisieren
+fragment DelegateItem on Transcoder {
  id
  active
  status
}
```

-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+Die Verwendung von GraphQL `fragment` verbessert die Lesbarkeit (insbesondere bei Skalierung) und führt zu einer besseren TypeScript-Typengenerierung.

-When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
+Wenn Sie das Tool zur Generierung von Typen verwenden, wird die obige Abfrage einen geeigneten Typ `DelegateItemFragment` erzeugen (_siehe letzter Abschnitt „Tools“_).

-### GraphQL Fragment do's and don'ts
+### GraphQL-Fragmente: Was man tun und lassen sollte

-### Fragment base must be a type
+### Die Fragmentbasis muss ein Typ sein

-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+Ein Fragment kann nicht auf einem nicht anwendbaren Typ basieren, kurz gesagt, **auf einem Typ, der keine Felder hat**:

```graphql
fragment MyFragment on BigInt {
-  # ...
+  # ...
}
```

-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
+`BigInt` ist ein **Skalar** (nativer „einfacher“ Typ), der nicht als Basis für ein Fragment verwendet werden kann.

-#### How to spread a Fragment
+#### Wie man ein Fragment verbreitet

-Fragments are defined on specific types and should be used accordingly in queries.
+Fragmente sind für bestimmte Typen definiert und sollten entsprechend in Abfragen verwendet werden.
-Example: +Beispiel:

```graphql
query {
  bondEvents {
    id
    newDelegate {
-      ...VoteItem # Error! `VoteItem` cannot be spread on `Transcoder` type
+      ...VoteItem # Fehler! `VoteItem` kann nicht auf den Typ `Transcoder` angewendet werden
    }
    oldDelegate {
      ...VoteItem
@@ -402,29 +400,29 @@ fragment VoteItem on Vote {
  }
}
```

-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+`newDelegate` und `oldDelegate` sind vom Typ `Transcoder`.

-It is not possible to spread a fragment of type `Vote` here.
+Es ist nicht möglich, hier ein Fragment des Typs `Vote` anzuwenden.

-#### Define Fragment as an atomic business unit of data
+#### Definition eines Fragments als atomare Geschäftseinheit von Daten

-GraphQL `Fragment`s must be defined based on their usage.
+GraphQL `Fragment`s müssen entsprechend ihrer Verwendung definiert werden.

-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+Für die meisten Anwendungsfälle reicht es aus, ein Fragment pro Typ zu definieren (im Falle der Verwendung wiederholter Felder oder der Generierung von Typen).

-Here is a rule of thumb for using fragments:
+Hier ist eine Faustregel für die Verwendung von Fragmenten:

-- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- Wenn Felder desselben Typs in einer Abfrage wiederholt werden, gruppieren Sie sie in einem `Fragment`.
+- Wenn sich ähnliche, aber unterschiedliche Felder wiederholen, erstellen Sie z. B.
mehrere Fragmente:

```graphql
-# base fragment (mostly used in listing)
+# Basisfragment (meist im Listing verwendet)
fragment Voter on Vote {
  id
  voter
}

-# extended fragment (when querying a detailed view of a vote)
+# erweitertes Fragment (bei Abfrage einer detaillierten Ansicht einer Abstimmung)
fragment VoteWithPoll on Vote {
  id
  voter
@@ -438,51 +436,51 @@ fragment VoteWithPoll on Vote {

---

-## The Essential Tools
+## Die wichtigsten Tools

-### GraphQL web-based explorers
+### Webbasierte GraphQL-Explorer

-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+Das Iterieren von Abfragen, indem Sie sie in Ihrer Anwendung ausführen, kann mühsam sein. Zögern Sie deshalb nicht, den [Graph Explorer](https://thegraph.com/explorer) zu verwenden, um Ihre Abfragen zu testen, bevor Sie sie Ihrer Anwendung hinzufügen. Der Graph Explorer bietet Ihnen einen vorkonfigurierten GraphQL-Playground zum Testen Ihrer Abfragen.

-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+Wenn Sie nach einer flexibleren Methode zum Debuggen/Testen Ihrer Abfragen suchen, gibt es ähnliche webbasierte Tools wie [Altair](https://altairgraphql.dev/) und [GraphiQL](https://graphiql-online.com/graphiql).

-### GraphQL Linting
+### GraphQL-Linting

-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Um die oben genannten Best Practices und syntaktischen Regeln einzuhalten, wird die Verwendung der folgenden Workflow- und IDE-Tools dringend empfohlen.
**GraphQL ESLint**

-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) hilft Ihnen dabei, ohne zusätzlichen Aufwand auf dem neuesten Stand der GraphQL Best Practices zu bleiben.

-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+Die [„operations-recommended“](https://the-guild.dev/graphql/eslint/docs/configs)-Konfiguration erzwingt wichtige Regeln wie z. B.:

-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
-- and more!
+- `@graphql-eslint/fields-on-correct-type`: wird ein Feld auf einem korrekten Typ verwendet?
+- `@graphql-eslint/no-unused-variables`: soll eine bestimmte Variable unbenutzt bleiben?
+- und mehr!

-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+So können Sie **Fehler aufspüren, ohne Abfragen** auf dem Playground zu testen oder sie in der Produktion auszuführen!
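Eine minimale ESLint-Konfiguration dafür könnte etwa so aussehen (nur eine Skizze: Dateimuster und Projektaufbau sind Annahmen; Einzelheiten finden Sie in der oben verlinkten graphql-eslint-Dokumentation):

```json
{
  "overrides": [
    {
      "files": ["*.graphql"],
      "parser": "@graphql-eslint/eslint-plugin",
      "plugins": ["@graphql-eslint"],
      "extends": "plugin:@graphql-eslint/operations-recommended"
    }
  ]
}
```

Damit Regeln wie `fields-on-correct-type` Ihre Abfragen gegen das Schema prüfen können, muss das Plugin zusätzlich Ihr Schema kennen (z. B. über eine `graphql-config`-Datei).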
-### IDE plugins
+### IDE-Plugins

-**VSCode and GraphQL**
+**VSCode und GraphQL**

-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+Die [GraphQL VSCode-Erweiterung](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) ist eine hervorragende Ergänzung Ihres Entwicklungs-Workflows und bietet:

-- Syntax highlighting
-- Autocomplete suggestions
-- Validation against schema
+- Syntaxhervorhebung
+- Autovervollständigungsvorschläge
+- Validierung gegen das Schema
- Snippets
-- Go to definition for fragments and input types
+- „Gehe zu Definition“ für Fragmente und Eingabetypen

-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+Wenn Sie `graphql-eslint` verwenden, ist die [ESLint VSCode-Erweiterung](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) ein Muss, um Fehler und Warnungen direkt in Ihrem Code korrekt anzuzeigen.

-**WebStorm/Intellij and GraphQL**
+**WebStorm/Intellij und GraphQL**

-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Das [JS GraphQL Plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) wird Ihre Erfahrung bei der Arbeit mit GraphQL erheblich verbessern, indem es Folgendes bietet:

-- Syntax highlighting
-- Autocomplete suggestions
-- Validation against schema
+- Syntaxhervorhebung
+- Autovervollständigungsvorschläge
+- Validierung gegen das Schema
- Snippets

-For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
+Weitere Informationen zu diesem Thema finden Sie im [WebStorm-Artikel](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), in dem alle wichtigen Funktionen des Plugins vorgestellt werden.
diff --git a/website/src/pages/de/subgraphs/querying/distributed-systems.mdx b/website/src/pages/de/subgraphs/querying/distributed-systems.mdx
index 85337206bfd3..0fd4c34dfa85 100644
--- a/website/src/pages/de/subgraphs/querying/distributed-systems.mdx
+++ b/website/src/pages/de/subgraphs/querying/distributed-systems.mdx
@@ -1,51 +1,51 @@
---
-title: Distributed Systems
+title: Verteilte Systeme
---

-The Graph is a protocol implemented as a distributed system.
+The Graph ist ein Protokoll, das als verteiltes System implementiert ist.

-Connections fail. Requests arrive out of order. Different computers with out-of-sync clocks and states process related requests. Servers restart. Re-orgs happen between requests. These problems are inherent to all distributed systems but are exacerbated in systems operating at a global scale.
+Verbindungen schlagen fehl. Anfragen treffen nicht in der richtigen Reihenfolge ein. Verschiedene Computer mit nicht synchronisierten Uhren und Zuständen bearbeiten zusammengehörige Anfragen. Server werden neu gestartet. Zwischen den Anfragen kommt es zu Re-orgs. Diese Probleme treten bei allen verteilten Systemen auf, verschärfen sich jedoch bei Systemen, die in globalem Maßstab arbeiten.

-Consider this example of what may occur if a client polls an Indexer for the latest data during a re-org.
+Ein Beispiel zeigt, was passieren kann, wenn ein Client während einer Reorganisation einen Indexierer nach den neuesten Daten abfragt.

-1. Indexer ingests block 8
-2. Request served to the client for block 8
-3. Indexer ingests block 9
-4. Indexer ingests block 10A
-5. Request served to the client for block 10A
-6. Indexer detects reorg to 10B and rolls back 10A
-7. Request served to the client for block 9
-8. Indexer ingests block 10B
-9.
Indexer ingests block 11
-10. Request served to the client for block 11
+1. Indexierer nimmt Block 8 auf
+2. Anfrage des Clients für Block 8 beantwortet
+3. Indexierer nimmt Block 9 auf
+4. Indexierer nimmt Block 10A auf
+5. Anfrage des Clients für Block 10A beantwortet
+6. Indexierer erkennt Reorg nach 10B und rollt 10A zurück
+7. Anfrage des Clients für Block 9 beantwortet
+8. Indexierer nimmt Block 10B auf
+9. Indexierer nimmt Block 11 auf
+10. Anfrage des Clients für Block 11 beantwortet

-From the point of view of the Indexer, things are progressing forward logically. Time is moving forward, though we did have to roll back an uncle block and play the block under consensus forward on top of it. Along the way, the Indexer serves requests using the latest state it knows about at that time.
+Aus Sicht des Indexierers schreiten die Dinge logisch voran. Die Zeit bewegt sich vorwärts, auch wenn wir einen Uncle-Block zurücksetzen und den Block unter Konsens darüber abspielen mussten. Dabei beantwortet der Indexierer die Anfragen mit dem jeweils neuesten Stand, der ihm zu diesem Zeitpunkt bekannt ist.

-From the point of view of the client, however, things appear chaotic. The client observes that the responses were for blocks 8, 10, 9, and 11 in that order. We call this the "block wobble" problem. When a client experiences block wobble, data may appear to contradict itself over time. The situation worsens when we consider that Indexers do not all ingest the latest blocks simultaneously, and your requests may be routed to multiple Indexers.
+Aus der Sicht des Clients erscheinen die Dinge jedoch chaotisch. Der Client stellt fest, dass die Antworten für die Blöcke 8, 10, 9 und 11 in dieser Reihenfolge eintrafen. Wir nennen dies das „Block Wobble“-Problem. Wenn ein Client von Block Wobble betroffen ist, können sich die Daten im Laufe der Zeit scheinbar widersprechen.
Die Situation verschlimmert sich noch, wenn man bedenkt, dass nicht alle Indexierer die neuesten Blöcke gleichzeitig aufnehmen und Ihre Anfragen möglicherweise an mehrere Indexierer weitergeleitet werden.

-It is the responsibility of the client and server to work together to provide consistent data to the user. Different approaches must be used depending on the desired consistency as there is no one right program for every problem.
+Es liegt in der Verantwortung von Client und Server, zusammenzuarbeiten, um dem Benutzer konsistente Daten zu liefern. Je nach gewünschter Konsistenz müssen unterschiedliche Ansätze verwendet werden, da es nicht das eine richtige Programm für jedes Problem gibt.

-Reasoning through the implications of distributed systems is hard, but the fix may not be! We've established APIs and patterns to help you navigate some common use-cases. The following examples illustrate those patterns but still elide details required by production code (like error handling and cancellation) to not obfuscate the main ideas.
+Die Implikationen verteilter Systeme zu durchdenken ist schwierig, aber die Lösung muss es nicht sein! Wir haben APIs und Muster entwickelt, die Ihnen bei der Navigation in einigen häufigen Anwendungsfällen helfen. Die folgenden Beispiele veranschaulichen diese Muster, lassen aber Details aus, die für Produktionscode erforderlich sind (z. B. Fehlerbehandlung und Abbruch), um die Hauptideen nicht zu verschleiern.

-## Polling for updated data
+## Abruf von aktualisierten Daten

-The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block.
If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced.
+The Graph bietet die `block: { number_gte: $minBlock }` API, die sicherstellt, dass die Antwort für einen einzelnen Block gleich oder höher als `$minBlock` ist. Wenn die Anfrage an eine `graph-node`-Instanz gestellt wird und der Min-Block noch nicht synchronisiert ist, wird `graph-node` einen Fehler zurückgeben. Wenn `graph-node` den Min-Block synchronisiert hat, wird er die Antwort für den neuesten Block ausführen. Wenn die Anfrage an ein Edge & Node Gateway gerichtet ist, wird das Gateway alle Indexierer herausfiltern, die den Min-Block noch nicht synchronisiert haben, und die Anfrage für den neuesten Block stellen, den der jeweilige Indexierer synchronisiert hat.

-We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example:
+Wir können `number_gte` verwenden, um sicherzustellen, dass die Zeit niemals rückwärts läuft, wenn wir Daten in einer Schleife abfragen. Hier ist ein Beispiel:

```javascript
-/// Updates the protocol.paused variable to the latest
-/// known value in a loop by fetching it using The Graph.
+/// Aktualisiert die Variable protocol.paused auf den letzten
+/// bekannten Wert in einer Schleife, indem sie ihn mit The Graph abruft.
async function updateProtocolPaused() {
-  // It's ok to start with minBlock at 0. The query will be served
-  // using the latest block available. Setting minBlock to 0 is the
-  // same as leaving out that argument.
+  // Es ist in Ordnung, mit minBlock bei 0 zu beginnen. Die Abfrage wird
+  // mit dem letzten verfügbaren Block bedient. Das Setzen von minBlock auf 0 ist
+  // dasselbe wie das Weglassen dieses Arguments.
  let minBlock = 0

  for (;;) {
-    // Schedule a promise that will be ready once
-    // the next Ethereum block will likely be available.
-    const nextBlock = new Promise((f) => {
+    // Erzeugen Sie eine Promise, die aufgelöst wird, sobald
+    // der nächste Ethereum-Block wahrscheinlich verfügbar ist.
+    const nextBlock = new Promise((f) => {
      setTimeout(f, 14000)
    })

@@ -65,30 +65,30 @@ async function updateProtocolPaused() {
    const response = await graphql(query, variables)
    minBlock = response._meta.block.number

-    // TODO: Do something with the response data here instead of logging it.
+    // TODO: Machen Sie hier etwas mit den Antwortdaten, anstatt sie zu protokollieren.
    console.log(response.protocol.paused)

-    // Sleep to wait for the next block
+    // Sleep, um auf den nächsten Block zu warten
    await nextBlock
  }
}
```

-## Fetching a set of related items
+## Abrufen einer Gruppe verwandter Elemente

-Another use-case is retrieving a large set or, more generally, retrieving related items across multiple requests. Unlike the polling case (where the desired consistency was to move forward in time), the desired consistency is for a single point in time.
+Ein weiterer Anwendungsfall ist der Abruf einer großen Menge oder, allgemeiner, der Abruf zusammengehöriger Elemente über mehrere Anfragen hinweg. Im Gegensatz zum Polling-Fall (bei dem die gewünschte Konsistenz ein Fortschreiten in der Zeit war) bezieht sich die gewünschte Konsistenz hier auf einen einzigen Zeitpunkt.

-Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block.
+Hier verwenden wir das Argument `block: { hash: $blockHash }`, um alle unsere Ergebnisse an denselben Block zu binden.

```javascript
-/// Gets a list of domain names from a single block using pagination
+/// Ruft eine Liste von Domainnamen aus einem einzelnen Block mit Paginierung ab
async function getDomainNames() {
-  // Set a cap on the maximum number of items to pull.
+  // Legen Sie eine Obergrenze für die maximale Anzahl der abzurufenden Elemente fest.
  let pages = 5
  const perPage = 1000

-  // The first query will get the first page of results and also get the block
-  // hash so that the remainder of the queries are consistent with the first.
+  // Die erste Abfrage holt die erste Seite der Ergebnisse sowie den Block-
+  // Hash, sodass die restlichen Abfragen mit der ersten konsistent sind.
  const listDomainsQuery = `
  query ListDomains($perPage: Int!) {
    domains(first: $perPage) {
@@ -107,9 +107,9 @@ async function getDomainNames() {
  let blockHash = data._meta.block.hash
  let query

-  // Continue fetching additional pages until either we run into the limit of
-  // 5 pages total (specified above) or we know we have reached the last page
-  // because the page has fewer entities than a full page.
+  // Wir holen so lange weitere Seiten, bis wir entweder auf das Limit von
+  // insgesamt 5 Seiten (oben angegeben) stoßen oder wissen, dass wir die letzte Seite
+  // erreicht haben, weil die Seite weniger Entitäten als eine volle Seite hat.
  while (data.domains.length == perPage && --pages) {
    let lastID = data.domains[data.domains.length - 1].id
    query = `
@@ -122,7 +122,7 @@ async function getDomainNames() {

    data = await graphql(query, { perPage, lastID, blockHash })

-    // Accumulate domain names into the result
+    // Domainnamen im Ergebnis akkumulieren
    for (domain of data.domains) {
      result.push(domain.name)
    }
@@ -131,4 +131,4 @@
}
```

-Note that in case of a re-org, the client will need to retry from the first request to update the block hash to a non-uncle block.
+Beachten Sie, dass der Client im Falle eines Reorgs von der ersten Anfrage an neu beginnen muss, um den Block-Hash auf einen Nicht-Uncle-Block zu aktualisieren.
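Ein solcher Neustart lässt sich mit einer kleinen Hilfsfunktion skizzieren (nur eine Skizze: `withReorgRetry` ist hier frei gewählt und keine API von The Graph; die Fehlererkennung ist bewusst vereinfacht):

```javascript
// Führt fn aus und beginnt bei einem Fehler von vorne, z. B. wenn der
// angepinnte Block-Hash nach einem Reorg nicht mehr auffindbar ist.
async function withReorgRetry(fn, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      // Nach dem letzten Versuch den Fehler weiterreichen
      if (attempt === maxRetries) throw err
    }
  }
}
```

Ein Aufruf wie `withReorgRetry(() => getDomainNames())` startet dann die gesamte paginierte Abfrage mit einem frischen Block-Hash neu.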
diff --git a/website/src/pages/de/subgraphs/querying/from-an-application.mdx b/website/src/pages/de/subgraphs/querying/from-an-application.mdx
index af85c4086630..9f016b3f2952 100644
--- a/website/src/pages/de/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/de/subgraphs/querying/from-an-application.mdx
@@ -1,73 +1,74 @@
---
-title: Querying from an Application
+title: Abfragen aus einer Anwendung
+sidebarTitle: Abfragen aus einer App
---

-Learn how to query The Graph from your application.
+Erfahren Sie, wie Sie The Graph von Ihrer Anwendung aus abfragen können.

-## Getting GraphQL Endpoints
+## GraphQL-Endpunkte abrufen

-During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production.
+Während des Entwicklungsprozesses erhalten Sie einen GraphQL-API-Endpunkt in zwei verschiedenen Stadien: einen zum Testen in Subgraph Studio und einen weiteren für Abfragen an The Graph Network in der Produktion.

-### Subgraph Studio Endpoint
+### Subgraph Studio Endpunkt

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+Nachdem Sie Ihren Subgraphen in [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/) bereitgestellt haben, erhalten Sie einen Endpunkt, der wie folgt aussieht:

```
https://api.studio.thegraph.com/query///
```

-> This endpoint is intended for testing purposes **only** and is rate-limited.
+> Dieser Endpunkt ist **nur** für Testzwecke gedacht und unterliegt einer Ratenbegrenzung.
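Beide Endpunkte sind gewöhnliche GraphQL-über-HTTP-Endpunkte und lassen sich daher auch ganz ohne Client-Bibliothek abfragen, z. B. per `fetch`. Eine Skizze (die Endpunkt-URL ist ein Platzhalter, die Hilfsfunktionen sind frei gewählt):

```javascript
// Platzhalter: Ersetzen Sie die URL durch Ihren eigenen Endpunkt
const endpoint = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>'

// Baut den JSON-Body einer GraphQL-POST-Anfrage
function buildGraphQLBody(query, variables = {}) {
  return JSON.stringify({ query, variables })
}

// Sendet die Abfrage und gibt das data-Objekt der Antwort zurück
async function querySubgraph(query, variables) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildGraphQLBody(query, variables),
  })
  const { data, errors } = await res.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data
}
```

In Browsern und in Node.js ab Version 18 ist `fetch` global verfügbar; in älteren Umgebungen benötigen Sie ein Polyfill.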
-### The Graph Network Endpoint
+### The Graph Network Endpunkt

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+Nachdem Sie Ihren Subgraphen im Netzwerk veröffentlicht haben, erhalten Sie einen Endpunkt, der wie folgt aussieht:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> Dieser Endpunkt ist für die aktive Nutzung im Netzwerk gedacht. Er ermöglicht es Ihnen, verschiedene GraphQL-Client-Bibliotheken zu verwenden, um den Subgraphen abzufragen und Ihre Anwendung mit indizierten Daten zu befüllen.

-## Using Popular GraphQL Clients
+## Gängige GraphQL-Clients verwenden

-### Graph Client
+### Graph Client

-The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
+The Graph bietet einen eigenen GraphQL-Client, `graph-client`, der einzigartige Funktionen unterstützt, wie z. B.:

-- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query
-- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
-- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
-- Fully typed result
+- Kettenübergreifende Behandlung von Subgraphen: Abfragen von mehreren Subgraphen in einer einzigen Abfrage
+- [Automatische Blockverfolgung](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
+- [Automatische Paginierung](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
+- Vollständig typisiertes Ergebnis

-> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native.
As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph.
+> Hinweis: `graph-client` ist mit anderen beliebten GraphQL-Clients wie Apollo und URQL integriert, die mit Umgebungen wie React, Angular, Node.js und React Native kompatibel sind. Die Verwendung von `graph-client` bietet Ihnen daher eine verbesserte Erfahrung bei der Arbeit mit The Graph.

-### Fetch Data with Graph Client
+### Daten mit Graph Client abrufen

-Let's look at how to fetch data from a subgraph with `graph-client`:
+Schauen wir uns an, wie man mit `graph-client` Daten aus einem Subgraphen abruft:

#### Schritt 1

-Install The Graph Client CLI in your project:
+Installieren Sie The Graph Client CLI in Ihrem Projekt:

```sh
yarn add -D @graphprotocol/client-cli
-# or, with NPM:
+# oder mit NPM:
npm install --save-dev @graphprotocol/client-cli
```

#### Schritt 2

-Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file):
+Definieren Sie Ihre Abfrage in einer `.graphql`-Datei (oder inline in Ihrer `.js`- oder `.ts`-Datei):

```graphql
query ExampleQuery {
-  # this one is coming from compound-v2
+  # dieses stammt aus compound-v2
  markets(first: 7) {
    borrowRate
    cash
    collateralFactor
  }

-  # this one is coming from uniswap-v2
+  # dieses stammt aus uniswap-v2
  pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") {
    id
    token0 {
@@ -86,7 +87,7 @@ query ExampleQuery {

#### Schritt 3

-Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example:
+Erstellen Sie eine Konfigurationsdatei (mit dem Namen `.graphclientrc.yml`) und verweisen Sie auf Ihre GraphQL-Endpunkte, die z.B.
von The Graph bereitgestellt werden:

```yaml
# .graphclientrc.yml
@@ -104,22 +105,22 @@ documents:
  - ./src/example-query.graphql
```

-#### Step 4
+#### Schritt 4

-Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code:
+Führen Sie den folgenden The Graph Client CLI-Befehl aus, um typisierten und gebrauchsfertigen JavaScript-Code zu erzeugen:

```sh
-graphclient build
+graphclient build
```

-#### Step 5
+#### Schritt 5

-Update your `.ts` file to use the generated typed GraphQL documents:
+Aktualisieren Sie Ihre `.ts`-Datei, um die generierten typisierten GraphQL-Dokumente zu verwenden:

```tsx
import React, { useEffect } from 'react'
// ...
-// we import types and typed-graphql document from the generated code (`..graphclient/`)
+// wir importieren Typen und typisierte GraphQL-Dokumente aus dem generierten Code (`..graphclient/`)
import { ExampleQueryDocument, ExampleQueryQuery, execute } from '../.graphclient'

function App() {
@@ -152,27 +153,27 @@ function App() {
export default App
```

-> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**.
+> **Wichtiger Hinweis:** `graph-client` ist perfekt mit anderen GraphQL-Clients wie Apollo Client, URQL oder React Query integriert; Sie können [Beispiele im offiziellen Repository finden](https://github.com/graphprotocol/graph-client/tree/main/examples).
Wenn Sie sich jedoch für einen anderen Client entscheiden, bedenken Sie, dass **Sie nicht in der Lage sein werden, die kettenübergreifende Behandlung von Subgraphen oder die automatische Paginierung zu nutzen, die Kernfunktionen für die Abfrage von The Graph** sind.

-### Apollo Client
+### Apollo Client

-[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android.
+Der [Apollo Client](https://www.apollographql.com/docs/) ist ein gängiger GraphQL-Client für Frontend-Ökosysteme. Er ist für React, Angular, Vue, Ember, iOS und Android verfügbar.

-Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL:
+Obwohl er der schwergewichtigste Client ist, bietet er viele Funktionen, um fortgeschrittene UIs auf Basis von GraphQL zu erstellen:

-- Advanced error handling
+- Erweiterte Fehlerbehandlung
- Pagination
-- Data prefetching
-- Optimistic UI
-- Local state management
+- Vorabruf von Daten
+- Optimistische Benutzeroberfläche
+- Lokale Zustandsverwaltung (Local State Management)

-### Fetch Data with Apollo Client
+### Daten mit Apollo Client abrufen

-Let's look at how to fetch data from a subgraph with Apollo client:
+Schauen wir uns an, wie man mit dem Apollo Client Daten aus einem Subgraphen abruft:

#### Schritt 1

-Install `@apollo/client` and `graphql`:
+Installieren Sie `@apollo/client` und `graphql`:

```sh
npm install @apollo/client graphql
@@ -180,7 +181,7 @@ npm install @apollo/client graphql

#### Schritt 2

-Query the API with the following code:
+Fragen Sie die API mit dem folgenden Code ab:

```javascript
import { ApolloClient, InMemoryCache, gql } from '@apollo/client'
@@ -215,7 +216,7 @@ client

#### Schritt 3

-To use variables, you can pass in a `variables` argument to the query:
+Um Variablen zu verwenden, können Sie das Argument `variables` an die Abfrage übergeben:

```javascript
const tokensQuery = `
@@ -246,22 +247,22 @@ client
  })
```

-###
URQL Overview
+### URQL-Übersicht

-[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features:
+[URQL](https://formidable.com/open-source/urql/) ist in Node.js-, React/Preact-, Vue- und Svelte-Umgebungen verfügbar und bietet einige erweiterte Funktionen:

-- Flexible cache system
-- Extensible design (easing adding new capabilities on top of it)
-- Lightweight bundle (~5x lighter than Apollo Client)
-- Support for file uploads and offline mode
+- Flexibles Cache-System
+- Erweiterbares Design (einfaches Hinzufügen neuer Funktionen)
+- Leichtes Bundle (~5x leichter als Apollo Client)
+- Unterstützung für Datei-Uploads und Offline-Modus

-### Fetch data with URQL
+### Daten mit URQL abrufen

-Let's look at how to fetch data from a subgraph with URQL:
+Schauen wir uns an, wie man mit URQL Daten aus einem Subgraphen abruft:

#### Schritt 1

-Install `urql` and `graphql`:
+Installieren Sie `urql` und `graphql`:

```sh
npm install urql graphql
@@ -269,7 +270,7 @@ npm install urql graphql

#### Schritt 2

-Query the API with the following code:
+Fragen Sie die API mit dem folgenden Code ab:

```javascript
import { createClient } from 'urql'
diff --git a/website/src/pages/de/subgraphs/querying/graph-client/README.md b/website/src/pages/de/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..583c61e95bc4 100644
--- a/website/src/pages/de/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/de/subgraphs/querying/graph-client/README.md
@@ -1,54 +1,54 @@
# The Graph Client Tools

-This repo is the home for [The Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments).
+Dieses Repo ist das Zuhause der Consumer-seitigen Tools für [The Graph](https://thegraph.com) (sowohl für Browser- als auch für NodeJS-Umgebungen).
-## Background
+## Hintergrund

-The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications.
+Die in diesem Repo bereitgestellten Tools sollen die Developer Experience (DX) verbessern und erweitern und die zusätzliche Schicht hinzufügen, die dApps für die Umsetzung verteilter Anwendungen benötigen.

-Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time.
+Entwickler, die Daten aus der GraphQL-API von [The Graph](https://thegraph.com) konsumieren, benötigen oft zusätzliche Hilfswerkzeuge, die den Datenkonsum vereinfachen, sowie Tools, die die gleichzeitige Verwendung mehrerer Indexer ermöglichen.

-## Features and Goals
+## Merkmale und Ziele

-This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime.
+Diese Bibliothek soll den Netzwerkaspekt des Datenverbrauchs für dApps vereinfachen. Die in diesem Repository bereitgestellten Tools sollen zur Build-Zeit ausgeführt werden, um die Ausführung zur Laufzeit schneller und leistungsfähiger zu machen.

-> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client!
+> Die in diesem Repo zur Verfügung gestellten Tools können eigenständig (standalone) verwendet werden, aber Sie können sie auch mit jedem bestehenden GraphQL-Client verwenden!
-| Status | Feature | Notes |
-| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| Status | Merkmal | Anmerkungen |
+| :----: | ------- | ----------- |
+| ✅ | Mehrere Indexer | basierend auf Abrufstrategien |
+| ✅ | Abruf-Strategien | timeout, retry, fallback, race, highestValue |
+| ✅ | Validierungen & Optimierungen zur Build-Zeit | |
+| ✅ | Client-seitige Komposition | mit verbessertem Ausführungsplaner (basierend auf GraphQL-Mesh) |
+| ✅ | Behandlung kettenübergreifender Subgraphen | 
Verwenden Sie ähnliche Subgraphen als eine einzige Quelle |
+| ✅ | Rohe Ausführung (Standalone-Modus) | ohne einen umhüllenden GraphQL-Client |
+| ✅ | Lokale (client-seitige) Mutationen | |
+| ✅ | [Automatische Blockverfolgung](../packages/block-tracking/README.md) | Verfolgung von Blocknummern [wie hier beschrieben](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatische Paginierung](../packages/auto-pagination/README.md) | mehrere Anfragen in einem einzigen Aufruf, um mehr als das Indexer-Limit abzurufen |
+| ✅ | Integration mit `@apollo/client` | |
+| ✅ | Integration mit `urql` | |
+| ✅ | TypeScript-Unterstützung | mit eingebautem GraphQL Codegen und `TypedDocumentNode` |
+| ✅ | [`@live`-Abfragen](./live.md) | Basierend auf Polling |

-> You can find an [extended architecture design here](./architecture.md)
+> Einen [erweiterten Architekturentwurf finden Sie hier](./architecture.md)

-## Getting Started
+## Erste Schritte

-You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
+Sie können sich [Episode 45 von `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) ansehen, um mehr über Graph Client zu erfahren:

[![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client)

-To get started, make sure to install [The Graph Client CLI] in your project:
+Um loszulegen, stellen Sie sicher, dass Sie [The Graph Client CLI] in Ihrem Projekt installieren:

```sh
yarn add -D @graphprotocol/client-cli
-# or, with NPM:
+# oder, mit NPM:
npm install --save-dev @graphprotocol/client-cli
```

-> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> Das CLI wird als Dev-Abhängigkeit installiert, da wir es verwenden, um optimierte Laufzeit-Artefakte zu erzeugen, die direkt aus Ihrer Anwendung geladen werden können!

-Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example:
+Erstellen Sie eine Konfigurationsdatei (mit dem Namen `.graphclientrc.yml`) und verweisen Sie auf Ihre von The Graph bereitgestellten GraphQL-Endpunkte, zum Beispiel:

```yml
# .graphclientrc.yml
@@ -59,28 +59,28 @@ sources:
      endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
```

-Now, create a runtime artifact by running The Graph Client CLI:
+Erstellen Sie nun ein Laufzeit-Artefakt, indem Sie The Graph Client CLI ausführen:

```sh
-graphclient build
+graphclient build
```

-> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`.
+> Hinweis: Sie müssen dies mit dem Präfix `yarn` ausführen oder es als Skript in Ihrer `package.json` hinzufügen.

-This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following:
+Dies sollte eine einsatzbereite eigenständige Funktion `execute` erzeugen, die Sie für die Ausführung Ihrer GraphQL-Operationen verwenden können. Sie sollten eine Ausgabe ähnlich der folgenden erhalten:

```sh
-GraphClient: Cleaning existing artifacts
-GraphClient: Reading the configuration
-🕸️: Generating the unified schema
-🕸️: Generating artifacts
-🕸️: Generating index file in TypeScript
-🕸️: Writing index.ts for ESM to the disk.
-🕸️: Cleanup
-🕸️: Done! => .graphclient
+GraphClient: Bereinigung vorhandener Artefakte
+GraphClient: Einlesen der Konfiguration
+🕸️: Erzeugen des einheitlichen Schemas
+🕸️: Erzeugen von Artefakten
+🕸️: Erzeugen der Indexdatei in TypeScript
+🕸️: Schreiben der index.ts für ESM auf die Festplatte.
+🕸️: Aufräumen
+🕸️: Erledigt!
=> .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +Nun wird das Artefakt `.graphclient` für Sie generiert, und Sie können es direkt aus Ihrem Code importieren und Ihre Abfragen ausführen: ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### Vanilla JavaScript anstelle von TypeScript verwenden -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +GraphClient CLI generiert die Client-Artefakte standardmäßig als TypeScript-Dateien, aber Sie können CLI so konfigurieren, dass JavaScript- und JSON-Dateien zusammen mit zusätzlichen TypeScript-Definitionsdateien generiert werden, indem Sie `--fileType js` oder `--fileType json` verwenden. -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. +Das `js`-Flag generiert alle Dateien als JavaScript-Dateien mit ESM-Syntax und das `json`-Flag generiert Quellartefakte als JSON-Dateien, während der Einstiegspunkt JavaScript-Dateien mit der alten CommonJS-Syntax erzeugt, da nur CommonJS JSON-Dateien als Module unterstützt. -Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag. +Wenn Sie nicht gerade CommonJS(`require`) verwenden, empfehlen wir Ihnen, das `js`-Flag zu verwenden. 
`graphclient --fileType js`

-- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs)
-- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm)
+- [Ein Beispiel für die Verwendung von JavaScript in CommonJS-Syntax mit JSON-Dateien](../examples/javascript-cjs)
+- [Ein Beispiel für die Verwendung von JavaScript in der ESM-Syntax](../examples/javascript-esm)

-#### The Graph Client DevTools
+#### The Graph Client DevTools

-The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time.
+The Graph Client CLI verfügt über ein eingebautes GraphiQL, sodass Sie in Echtzeit mit Abfragen experimentieren können.

-The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied.
+Das in dieser Umgebung bereitgestellte GraphQL-Schema ist das endgültige Schema, das auf allen komponierten Subgraphen und den von Ihnen angewendeten Transformationen basiert.

-To start the DevTool GraphiQL, run the following command:
+Um das DevTool GraphiQL zu starten, führen Sie den folgenden Befehl aus:

```sh
graphclient serve-dev
```

-And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
+Und öffnen Sie http://localhost:4000/, um GraphiQL zu verwenden. Sie können nun lokal mit Ihrem client-seitigen Graph-GraphQL-Schema experimentieren!
🥳

-#### Examples
+#### Beispiele

-You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples:
+Weitere fortgeschrittene Beispiele und Integrationsbeispiele finden Sie im [Beispiele-Verzeichnis in diesem Repo](../examples):

- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute)
-- [TS/JS NodeJS standalone mode](../examples/node)
-- [Client-Side GraphQL Composition](../examples/composition)
-- [Integration with Urql and React](../examples/urql)
-- [Integration with NextJS and TypeScript](../examples/nextjs)
-- [Integration with Apollo-Client and React](../examples/apollo)
+- [TS/JS-NodeJS-Standalone-Modus](../examples/node)
+- [Client-seitige GraphQL-Komposition](../examples/composition)
+- [Integration mit Urql und React](../examples/urql)
+- [Integration mit NextJS und TypeScript](../examples/nextjs)
+- [Integration mit Apollo-Client und React](../examples/apollo)
- [Integration with React-Query](../examples/react-query)
-- _Cross-chain merging (same Subgraph, different chains)_
-- - [Parallel SDK calls](../examples/cross-chain-sdk)
-- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension)
-- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms)
+- _Kettenübergreifende Zusammenführung (gleicher Subgraph, unterschiedliche Ketten)_
+- - [Parallele SDK-Aufrufe](../examples/cross-chain-sdk)
+- - [Parallele interne Aufrufe mit Schemaerweiterungen](../examples/cross-chain-extension)
+- [Ausführung mit Transforms anpassen (Auto-Pagination und Auto-Block-Tracking)](../examples/transforms)

-### Advanced Examples/Features
+### Erweiterte Beispiele/Funktionen

-#### Customize Network Calls
+#### Netzwerkaufrufe anpassen

-You can customize the network execution (for example, to add authentication headers) by using `operationHeaders`:
+Sie können die Netzwerkausführung
anpassen (z. B. um Authentifizierungs-Header hinzuzufügen), indem Sie `operationHeaders` verwenden:

```yaml
sources:
@@ -170,19 +170,19 @@ sources:
          Authorization: Bearer MY_TOKEN
```

-You can also use runtime variables if you wish, and specify it in a declarative way:
+Sie können auch Laufzeitvariablen verwenden, wenn Sie dies wünschen, und sie deklarativ angeben:

```yaml
-sources:
-  - name: uniswapv2
-    handler:
+sources:
+  - name: uniswapv2
+    handler:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
        operationHeaders:
          Authorization: Bearer {context.config.apiToken}
```

-Then, you can specify that when you execute operations:
+Dann können Sie dies bei der Ausführung von Vorgängen angeben:

```ts
execute(myQuery, myVariables, {
@@ -192,11 +192,11 @@ execute(myQuery, myVariables, {
})
```

-> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).
+> Sie finden die [vollständige Dokumentation für den `graphql`-Handler hier](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).

-#### Environment Variables Interpolation
+#### Interpolation von Umgebungsvariablen

-If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper:
+Wenn Sie Umgebungsvariablen in Ihrer Graph-Client-Konfigurationsdatei verwenden möchten, können Sie die Interpolation mit dem `env`-Helper nutzen:

```yaml
sources:
@@ -208,9 +208,9 @@ sources:
          Authorization: Bearer {env.MY_API_TOKEN} # runtime
```

-Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime.
+Stellen Sie dann sicher, dass `MY_API_TOKEN` zur Laufzeit in `process.env` definiert ist.
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly:
+Sie können auch Umgebungsvariablen angeben, die zur Build-Zeit (während der Ausführung von `graphclient build`) gefüllt werden sollen, indem Sie den Namen der Umgebungsvariablen direkt verwenden:

```yaml
sources:
@@ -222,20 +222,19 @@ sources:
          Authorization: Bearer ${MY_API_TOKEN} # build time
```

-> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).
+> Sie finden die [vollständige Dokumentation für den `graphql`-Handler hier](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).

-#### Fetch Strategies and Multiple Graph Indexers
+#### Abrufstrategien und mehrere Graph-Indexer

-It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple.
+Es ist gängige Praxis, in dApps mehr als einen Indexer zu verwenden. Um die ideale Erfahrung mit The Graph zu erreichen, können Sie mehrere `fetch`-Strategien angeben, um den Ablauf reibungsloser und einfacher zu gestalten.

-All `fetch` strategies can be combined to create the ultimate execution flow.
+Alle `fetch`-Strategien können kombiniert werden, um den optimalen Ausführungsfluss zu schaffen.

-
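+Eine minimale Skizze, wie sich z. B. `retry` und `timeout` für dieselbe Quelle kombinieren lassen (die konkreten Werte sind nur Platzhalter; die einzelnen Strategien werden unten im Detail beschrieben):
+
+```yaml
+sources:
+  - name: uniswapv2
+    handler:
+      graphql:
+        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
+        retry: 2 # Anzahl der Wiederholungsversuche
+        timeout: 5000 # Timeout in Millisekunden
+```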
- `retry` +
`retry`

-The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source.
+Mit dem `retry`-Mechanismus können Sie die Wiederholungsversuche für einen einzelnen GraphQL-Endpunkt bzw. eine einzelne Quelle festlegen.

-The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer).
+Die Wiederholung wird in beiden Fällen ausgeführt: bei einem Netzwerkfehler oder bei einem Laufzeitfehler (Indizierungsproblem/Nichtverfügbarkeit des Indexers).

```yaml
sources:
@@ -248,10 +247,9 @@ sources:
-
- `timeout` +
`timeout`

-The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint.
+Mit dem `timeout`-Mechanismus können Sie das `timeout` für einen bestimmten GraphQL-Endpunkt festlegen.

```yaml
sources:
@@ -264,12 +262,11 @@ sources:
-
- `fallback` +
`fallback`

-The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source.
+Mit dem `fallback`-Mechanismus können Sie mehr als einen GraphQL-Endpunkt für dieselbe Quelle verwenden.

-This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service).
+Dies ist nützlich, wenn Sie mehr als einen Indexer für denselben Subgraphen verwenden und bei einem Fehler/Timeout auf einen anderen ausweichen möchten. Sie können diese Strategie auch nutzen, um einen benutzerdefinierten Indexer einzusetzen, der bei Bedarf auf den [The Graph Hosted Service](https://thegraph.com/hosted-service) zurückfällt.

```yaml
sources:
@@ -286,12 +283,11 @@ sources:
-
- `race` +
`race`

-The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution.
+Mit dem `race`-Mechanismus können Sie mehr als einen GraphQL-Endpunkt für dieselbe Quelle verwenden und bei jeder Ausführung ein Wettrennen zwischen den Endpunkten durchführen.

-This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers.
+Dies ist nützlich, wenn Sie mehr als einen Indexer für denselben Subgraphen verwenden möchten und die schnellste Antwort aller angegebenen Indexer erhalten wollen.

```yaml
sources:
@@ -306,12 +302,11 @@ sources:
-
- `highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. +
`highestValue`

-This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
+Diese Strategie ermöglicht es Ihnen, parallele Anfragen an verschiedene Endpunkte für dieselbe Quelle zu senden und die aktuellste Antwort auszuwählen.
+
+Dies ist nützlich, wenn Sie für denselben Subgraphen die am besten synchronisierten Daten aus verschiedenen Indexern/Quellen auswählen möchten.

```yaml
sources:
@@ -349,9 +344,9 @@ graph LR;
-#### Block Tracking
+#### Blockverfolgung

-The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform;
+The Graph Client kann Blocknummern verfolgen und Folgeabfragen nach [diesem Muster](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) mit der Transformation `blockTracking` durchführen:

```yaml
sources:
@@ -361,57 +356,57 @@ sources:
      endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
    transforms:
      - blockTracking:
-          # You might want to disable schema validation for faster startup
-          validateSchema: true
-          # Ignore the fields that you don't want to be tracked
+          # Sie können die Schema-Validierung für einen schnelleren Start deaktivieren
+          validateSchema: true
+          # Ignorieren Sie die Felder, die nicht verfolgt werden sollen
          ignoreFieldNames: [users, prices]
-          # Exclude the operation with the following names
+          # Schließen Sie die Operationen mit den folgenden Namen aus
          ignoreOperationNames: [NotFollowed]
```

-[You can try a working example here](../examples/transforms)
+[Hier können Sie ein funktionierendes Beispiel ausprobieren](../examples/transforms)

-#### Automatic Pagination
+#### Automatische Paginierung

-With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination.
+Bei den meisten Subgraphen ist die Anzahl der Datensätze, die Sie abrufen können, begrenzt. In diesem Fall müssen Sie mehrere Anfragen mit Paginierung senden.
```graphql
query {
-  # Will throw an error if the limit is 1000
-  users(first: 2000) {
-    id
-    name
-  }
+  # Wirft einen Fehler, wenn das Limit 1000 ist
+  users(first: 2000) {
+    id
+    name
+  }
}
```

-So you have to send the following operations one after the other:
+Sie müssen also die folgenden Vorgänge nacheinander senden:

```graphql
query {
-  # Will throw an error if the limit is 1000
-  users(first: 1000) {
-    id
-    name
-  }
+  # Wirft einen Fehler, wenn das Limit 1000 ist
+  users(first: 1000) {
+    id
+    name
+  }
}
```

-Then after the first response:
+Dann nach der ersten Antwort:

```graphql
query {
-  # Will throw an error if the limit is 1000
-  users(first: 1000, skip: 1000) {
-    id
-    name
-  }
+  # Wirft einen Fehler, wenn das Limit 1000 ist
+  users(first: 1000, skip: 1000) {
+    id
+    name
+  }
}
```

-After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood.
+Nach der zweiten Antwort müssen Sie die Ergebnisse manuell zusammenführen. Stattdessen können Sie mit The Graph Client einfach die erste Abfrage stellen; die mehreren Anfragen werden unter der Haube automatisch für Sie ausgeführt.
-All you have to do is: +Alles, was Sie tun müssen, ist: ```yaml sources: @@ -421,21 +416,21 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - autoPagination: - # You might want to disable schema validation for faster startup + # Sie möchten vielleicht die Schema-Validierung für einen schnelleren Start deaktivieren validateSchema: true ``` -[You can try a working example here](../examples/transforms) +[Hier können Sie ein funktionierendes Beispiel ausprobieren](../examples/transforms) -#### Client-side Composition +#### Client-seitige Komposition -The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). +The Graph Client verfügt über integrierte Unterstützung für clientseitige GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). -You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers. +Sie können diese Funktion nutzen, um eine einzige GraphQL-Schicht aus mehreren Subgraphen zu erstellen, die auf mehreren Indexierern bereitgestellt werden. -> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs! +> 💡 Tipp: Sie können beliebige GraphQL-Quellen zusammenstellen, und nicht nur Subgraphen! 
-Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example: +Triviale Komposition kann durch Hinzufügen von mehr als einer GraphQL-Quelle zu Ihrer `.graphclientrc.yml`-Datei erfolgen, hier ein Beispiel: ```yaml sources: @@ -449,7 +444,7 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2 ``` -As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs: +Solange es keine Konflikte zwischen den zusammengestellten Schemata gibt, können Sie sie zusammenstellen und dann eine einzige Abfrage für beide Subgraphen ausführen: ```graphql query myQuery { @@ -457,7 +452,7 @@ query myQuery { markets(first: 7) { borrowRate } - # this one is coming from uniswap-v2 + # dieser kommt von uniswap-v2 pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -470,71 +465,71 @@ query myQuery { } ``` -You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase. +Sie können auch Konflikte beheben, Teile des Schemas umbenennen, benutzerdefinierte GraphQL-Felder hinzufügen und die gesamte Ausführungsphase ändern. 
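+Das Umbenennen lässt sich z. B. mit der `rename`-Transformation von GraphQL-Mesh skizzieren (die Feldnamen `pairs`/`uniswapPairs` sind hier nur Platzhalter; die genaue Konfiguration entnehmen Sie der Mesh-Dokumentation):
+
+```yaml
+sources:
+  - name: uniswapv2
+    handler:
+      graphql:
+        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
+    transforms:
+      - rename:
+          renames:
+            - from:
+                type: Query
+                field: pairs
+              to:
+                type: Query
+                field: uniswapPairs
+```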
-For advanced use-cases with composition, please refer to the following resources:
+Für fortgeschrittene Anwendungsfälle mit Komposition lesen Sie bitte die folgenden Ressourcen:

-- [Advanced Composition Example](../examples/composition)
-- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction)
-- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)
+- [Fortgeschrittenes Kompositionsbeispiel](../examples/composition)
+- [GraphQL-Mesh Schema-Transformationen](https://graphql-mesh.com/docs/transforms/transforms-introduction)
+- [GraphQL-Tools Schema-Stitching-Dokumentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)

-#### TypeScript Support
+#### TypeScript-Unterstützung

-If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience.
+Wenn Ihr Projekt in TypeScript geschrieben ist, können Sie die Möglichkeiten von [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) nutzen und ein vollständig typisiertes GraphQL-Client-Erlebnis erhalten.

-The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`!
+Der Standalone-Modus von The Graph Client sowie populäre GraphQL-Client-Bibliotheken wie Apollo-Client und urql haben integrierte Unterstützung für `TypedDocumentNode`!

-The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations.
+The Graph Client CLI wird mit einer gebrauchsfertigen Konfiguration für den [GraphQL Code Generator](https://graphql-code-generator.com) geliefert und kann `TypedDocumentNode` auf Basis Ihrer GraphQL-Operationen erzeugen.
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`: +Um loszulegen, definieren Sie Ihre GraphQL-Operationen in Ihrem Anwendungscode und verweisen auf diese Dateien mit dem Abschnitt `documents` in `.graphclientrc.yml`: ```yaml sources: - - # ... your Subgraphs/GQL sources here + - # ... Ihre Subgraphs/GQL-Quellen hier documents: - ./src/example-query.graphql ``` -You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically: +Sie können auch Glob-Ausdrücke verwenden oder sogar auf Codedateien verweisen, und die CLI wird Ihre GraphQL-Abfragen automatisch finden: ```yaml documents: - './src/**/*.graphql' - - './src/**/*.{ts,tsx,js,jsx}' + - './src/**/*.{ts,tsx,js,jsx}' ``` -Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found. +Führen Sie nun den GraphQL-CLI-Befehl `build` erneut aus. Die CLI wird für jede gefundene Operation ein `TypedDocumentNode`-Objekt unter `.graphclient` erzeugen. -> Make sure to name your GraphQL operations, otherwise it will be ignored! +> Stellen Sie sicher, dass Sie Ihre GraphQL-Operationen benennen, sonst werden sie ignoriert! -For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually: +Zum Beispiel wird für eine Abfrage mit dem Namen `query ExampleQuery` das entsprechende `ExampleQueryDocument` in `.graphclient` generiert. Sie können es nun importieren und für Ihre GraphQL-Aufrufe verwenden. 
So haben Sie eine vollständig typisierte Erfahrung, ohne TypeScript manuell schreiben oder angeben zu müssen: ```ts import { ExampleQueryDocument, execute } from '../.graphclient' async function main() { - // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query. - const result = await execute(ExampleQueryDocument, {}) - console.log(result) + // Die Variable "result" ist vollständig typisiert und repräsentiert die genaue Struktur der Felder, die Sie in Ihrer Abfrage ausgewählt haben. + const result = await execute(ExampleQueryDocument, {}) + console.log(result) } ``` -> You can find a [TypeScript project example here](../examples/urql). +> Sie können ein [TypeScript-Projektbeispiel hier](../examples/urql) finden. -#### Client-Side Mutations +#### Client-seitige Mutationen -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +Aufgrund der Natur des Graph-Client-Setups ist es möglich, clientseitige Schemata hinzuzufügen, die Sie später überbrücken können, um beliebigen Code auszuführen. -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +Dies ist hilfreich, da Sie benutzerdefinierten Code als Teil Ihres GraphQL-Schemas implementieren können und es als einheitliches Anwendungsschema haben, das einfacher zu verfolgen und zu entwickeln ist. -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. +> Dieses Dokument erklärt, wie man benutzerdefinierte Mutationen hinzufügt, aber eigentlich kann man jede GraphQL-Operation (Abfrage/Mutation/Abonnements) hinzufügen. 
Weitere Informationen zu dieser Funktion finden Sie im Artikel [Erweiterung des einheitlichen Schemas](https://graphql-mesh.com/docs/guides/extending-unified-schema).

-To get started, define a `additionalTypeDefs` section in your config file:
+Um zu beginnen, definieren Sie einen Abschnitt `additionalTypeDefs` in Ihrer Konfigurationsdatei:

```yaml
additionalTypeDefs: |
-  # We should define the missing `Mutation` type
+  # Wir sollten den fehlenden Typ `Mutation` definieren
  extend schema {
    mutation: Mutation
  }
@@ -548,14 +543,14 @@ additionalTypeDefs: |
  }
```

-Then, add a pointer to a custom GraphQL resolvers file:
+Fügen Sie dann einen Verweis auf eine benutzerdefinierte GraphQL-Resolver-Datei hinzu:

```yaml
additionalResolvers:
  - './resolvers'
```

-Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation:
+Erstellen Sie nun `resolver.js` (oder `resolvers.ts`) in Ihrem Projekt und implementieren Sie Ihre benutzerdefinierte Mutation:

```js
module.exports = {
@@ -570,7 +565,7 @@ module.exports = {
  }
}
```

-If you are using TypeScript, you can also get fully type-safe signature by doing:
+Wenn Sie TypeScript verwenden, erhalten Sie auch eine vollständig typsichere Signatur, indem Sie Folgendes tun:

```ts
import { Resolvers } from './.graphclient'
@@ -590,7 +585,7 @@ const resolvers: Resolvers = {
export default resolvers
```

-If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet:
+Wenn Sie Laufzeitvariablen in Ihren GraphQL-Ausführungs-`context` einfügen müssen, können Sie das folgende Snippet verwenden:
mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources)
+> [Sie können auch Abfragefelder als Teil Ihrer Mutation delegieren und aufrufen](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources)

-## License
+## Lizenz

-Released under the [MIT license](../LICENSE).
+Veröffentlicht unter der [MIT-Lizenz](../LICENSE).

diff --git a/website/src/pages/de/subgraphs/querying/graph-client/architecture.md b/website/src/pages/de/subgraphs/querying/graph-client/architecture.md
index 99098cd77b95..60f45c85bb36 100644
--- a/website/src/pages/de/subgraphs/querying/graph-client/architecture.md
+++ b/website/src/pages/de/subgraphs/querying/graph-client/architecture.md
@@ -1,13 +1,13 @@
-# The Graph Client Architecture
+# Die Architektur von The Graph Client

-To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs:
+Um der Notwendigkeit der Unterstützung eines verteilten Netzwerks gerecht zu werden, planen wir mehrere Maßnahmen, um sicherzustellen, dass The Graph Client alles bietet, was eine App braucht:

-1. Compose multiple Subgraphs (on the client-side)
-2. Fallback to multiple indexers/sources/hosted services
-3. Automatic/Manual source picking strategy
-4. Agnostic core, with the ability to run integrate with any GraphQL client
+1. Mehrere Subgraphen zusammenstellen (auf der Client-Seite)
+2. Fallback auf mehrere Indexer/Quellen/gehostete Dienste
+3. Automatische/manuelle Strategie zur Quellenauswahl
+4.
Agnostischer Kern, der sich in jeden beliebigen GraphQL-Client integrieren lässt

-## Standalone mode
+## Standalone-Modus

```mermaid
graph LR;
@@ -17,7 +17,7 @@ graph LR;
  op-->sB[Subgraph B];
```

-## With any GraphQL client
+## Mit jedem GraphQL-Client

```mermaid
graph LR;
@@ -28,11 +28,11 @@ graph LR;
  op-->sB[Subgraph B];
```

-## Subgraph Composition
+## Subgraphen-Komposition

-To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client.
+Um eine einfache und effiziente client-seitige Komposition zu ermöglichen, werden wir [`graphql-tools`](https://graphql-tools.com) verwenden, um ein entferntes Schema bzw. einen Executor zu erstellen, der dann in den GraphQL-Client eingehängt werden kann.

-API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema.
+Die API kann entweder aus rohen `graphql-tools`-Transformern bestehen oder die [deklarative GraphQL-Mesh-API](https://graphql-mesh.com/docs/transforms/transforms-introduction) zur Komposition des Schemas verwenden.

```mermaid
graph LR;
@@ -42,9 +42,9 @@ graph LR;
  m-->s3[Subgraph C GraphQL schema];
```

-## Subgraph Execution Strategies
+## Strategien für die Ausführung von Subgraphen

-Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options:
+Für jeden als Quelle definierten Subgraphen gibt es eine Möglichkeit, seine(n) Quell-Indexer und die Abfragestrategie zu definieren, hier einige Optionen:

```mermaid
graph LR;
@@ -85,9 +85,9 @@ graph LR;
  end
```

-> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own.
+> Wir können mehrere eingebaute Strategien liefern, zusammen mit einfachen Schnittstellen, die es Entwicklern ermöglichen, ihre eigenen zu schreiben.

-To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps:
+Um das Konzept der Strategien auf die Spitze zu treiben, können wir sogar eine magische Schicht aufbauen, die Abonnement-als-Abfrage mit einem beliebigen Hook durchführt und eine reibungslose DX für Dapps bietet:

```mermaid
graph LR;
@@ -99,5 +99,5 @@ graph LR;
sc[Smart Contract]-->|change event|op;
```

-With this mechanism, developers can write and execute GraphQL `subscription`, but under the hood we'll execute a GraphQL `query` to The Graph indexers, and allow to connect any external hook/probe for re-running the operation.
-This way, we can watch for changes on the Smart Contract itself, and the GraphQL client will fill the gap on the need to real-time changes from The Graph.
+Mit diesem Mechanismus können Entwickler GraphQL-`subscription`-Operationen schreiben und ausführen, aber unter der Haube führen wir eine GraphQL-`query` an die The-Graph-Indexer aus und ermöglichen den Anschluss eines externen Hooks/einer externen Probe zur erneuten Ausführung der Operation.
+Auf diese Weise können wir auf Änderungen am Smart Contract selbst achten, und der GraphQL-Client füllt die Lücke, wenn Echtzeitänderungen von The Graph erforderlich sind.
diff --git a/website/src/pages/de/subgraphs/querying/graph-client/live.md b/website/src/pages/de/subgraphs/querying/graph-client/live.md
index e6f726cb4352..de4940e23ea8 100644
--- a/website/src/pages/de/subgraphs/querying/graph-client/live.md
+++ b/website/src/pages/de/subgraphs/querying/graph-client/live.md
@@ -1,10 +1,10 @@
-# `@live` queries in `graph-client`
+# `@live`-Abfragen in `graph-client`

-Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. 
+Graph-Client implementiert eine benutzerdefinierte `@live`-Direktive, mit der jede GraphQL-Abfrage mit Echtzeitdaten arbeiten kann.

-## Getting Started
+## Erste Schritte

-Start by adding the following configuration to your `.graphclientrc.yml` file:
+Beginnen Sie, indem Sie die folgende Konfiguration zu Ihrer `.graphclientrc.yml`-Datei hinzufügen:

```yaml
plugins:
@@ -12,9 +12,9 @@ plugins:
defaultInterval: 1000
```

-## Usage
+## Verwendung

-Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries:
+Legen Sie das standardmäßige Aktualisierungsintervall fest, das Sie verwenden möchten, und wenden Sie dann die folgende GraphQL-`@directive` auf Ihre GraphQL-Abfragen an:

```graphql
query ExampleQuery @live {
@@ -26,7 +26,7 @@ query ExampleQuery @live {
}
```

-Or, you can specify a per-query interval:
+Sie können auch ein Intervall pro Abfrage festlegen:

```graphql
query ExampleQuery @live(interval: 5000) {
@@ -36,8 +36,8 @@ query ExampleQuery @live(interval: 5000) {
}
```

-## Integrations
+## Integrationen

-Since the entire network layer (along with the `@live` mechanism) is implemented inside `graph-client` core, you can use Live queries with every GraphQL client (such as Urql or Apollo-Client), as long as it supports streame responses (`AsyncIterable`).
+Da die gesamte Netzwerkschicht (zusammen mit dem `@live`-Mechanismus) innerhalb des `graph-client`-Kerns implementiert ist, können Sie Live-Abfragen mit jedem GraphQL-Client (wie z. B. Urql oder Apollo-Client) verwenden, solange dieser gestreamte Antworten (`AsyncIterable`) unterstützt.

-No additional setup is required for GraphQL clients cache updates.
+Für die Cache-Aktualisierung von GraphQL-Clients ist keine zusätzliche Einrichtung erforderlich. 
diff --git a/website/src/pages/de/subgraphs/querying/graphql-api.mdx b/website/src/pages/de/subgraphs/querying/graphql-api.mdx
index e6636e20a53e..523d18a45740 100644
--- a/website/src/pages/de/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/de/subgraphs/querying/graphql-api.mdx
@@ -2,23 +2,23 @@ title: GraphQL-API
---

-Learn about the GraphQL Query API used in The Graph.
+Erfahren Sie mehr über die GraphQL Query API, die in The Graph verwendet wird.

-## What is GraphQL?
+## Was ist GraphQL?

-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) ist eine Abfragesprache für APIs und eine Laufzeitumgebung für die Ausführung dieser Abfragen mit Ihren vorhandenen Daten. The Graph verwendet GraphQL zur Abfrage von Subgraphen.

-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+Um die größere Rolle, die GraphQL spielt, zu verstehen, lesen Sie [Entwickeln](/subgraphs/developing/introduction/) und [Erstellen eines Subgraphen](/developing/creating-a-subgraph/).

-## Queries with GraphQL
+## Abfragen mit GraphQL

-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In Ihrem Subgraph-Schema definieren Sie Typen namens `Entities`. Für jeden `Entity`-Typ werden `entity`- und `entities`-Felder auf der obersten Ebene des `Query`-Typs erzeugt.

-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+> Hinweis: Bei der Verwendung von The Graph muss `query` nicht am Anfang der `graphql`-Abfrage stehen. 
### Beispiele

-Query for a single `Token` entity defined in your schema:
+Abfrage nach einer einzelnen, in Ihrem Schema definierten Entität `Token`:

```graphql
{
@@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema:
}
```

-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Hinweis: Bei der Abfrage einer einzelnen Entität ist das Feld `id` erforderlich und muss als String geschrieben werden.

-Query all `Token` entities:
+Abfrage aller `Token`-Entitäten:

```graphql
{
@@ -44,10 +44,10 @@ Query all `Token` entities:
}
```

### Sortierung

-When querying a collection, you may:
+Wenn Sie eine Sammlung abfragen, können Sie:

-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- den Parameter `orderBy` verwenden, um nach einem bestimmten Attribut zu sortieren.
+- `orderDirection` verwenden, um die Sortierrichtung anzugeben, `asc` für aufsteigend oder `desc` für absteigend.

#### Beispiel

@@ -62,9 +62,9 @@ When querying a collection, you may:

#### Beispiel für die Sortierung verschachtelter Entitäten

-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
+Ab Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Entitäten auf der Basis von verschachtelten Entitäten sortiert werden.

-The following example shows tokens sorted by the name of their owner:
+Im folgenden Beispiel werden die Token nach dem Namen ihres Besitzers sortiert:

```graphql
{
@@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner:
}
```

-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Derzeit können Sie auf `@entity`- und `@derivedFrom`-Feldern nach eine Ebene tiefen `String`- oder `ID`-Typen sortieren. Leider werden die [Sortierung nach Schnittstellen auf Entitäten mit einer Tiefe von einer Ebene](https://github.com/graphprotocol/graph-node/pull/4058) sowie die Sortierung nach Feldern, die Arrays oder verschachtelte Entitäten sind, noch nicht unterstützt.

### Pagination

-When querying a collection, it's best to:
+Wenn Sie eine Sammlung abfragen, ist es am besten:

-- Use the `first` parameter to paginate from the beginning of the collection.
- - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
-- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
-- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
+- Verwenden Sie den Parameter `first`, um vom Anfang der Sammlung an zu paginieren.
+ - Die Standardsortierung erfolgt nach `ID` in aufsteigender alphanumerischer Reihenfolge, **nicht** nach Erstellungszeit.
+- Verwenden Sie den Parameter `skip`, um Entitäten zu überspringen und zu paginieren. Zum Beispiel zeigt `first:100` die ersten 100 Entitäten und `first:100, skip:100` zeigt die nächsten 100 Entitäten.
+- Vermeiden Sie die Verwendung von `skip`-Werten in Abfragen, da diese im Allgemeinen schlecht funktionieren. Um eine große Anzahl von Elementen abzurufen, ist es am besten, die Entitäten auf der Grundlage eines Attributs zu durchblättern, wie im obigen Beispiel gezeigt. 
-#### Example using `first`
+#### Beispiel mit `first`

Die Abfrage für die ersten 10 Token:

```graphql
{
@@ -101,11 +101,11 @@ Die Abfrage für die ersten 10 Token:
}
```

-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
+Um nach Gruppen von Entitäten in der Mitte einer Sammlung zu suchen, kann der Parameter `skip` in Verbindung mit dem Parameter `first` verwendet werden, um eine bestimmte Anzahl von Entitäten zu überspringen, beginnend am Anfang der Sammlung.

-#### Example using `first` and `skip`
+#### Beispiel mit `first` und `skip`

-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+Abfrage von 10 `Token`-Entitäten, versetzt um 10 Stellen vom Beginn der Sammlung:

```graphql
{
@@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```

-#### Example using `first` and `id_ge`
+#### Beispiel mit `first` und `id_ge`

-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+Wenn ein Client eine große Anzahl von Entitäten abrufen muss, ist es leistungsfähiger, Abfragen auf ein Attribut zu stützen und nach diesem Attribut zu filtern. Zum Beispiel könnte ein Client mit dieser Abfrage eine große Anzahl von Token abrufen:

```graphql
query manyTokens($lastID: String) {
@@ -129,16 +129,16 @@ query manyTokens($lastID: String) {
}
```

-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. 
+Beim ersten Mal würde es die Abfrage mit `lastID = ""` senden, und bei nachfolgenden Anfragen würde es `lastID` auf das Attribut `id` der letzten Entität in der vorherigen Anfrage setzen. Dieser Ansatz ist wesentlich leistungsfähiger als die Verwendung steigender `skip`-Werte.

-### Filtering
+### Filterung

-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
+- Sie können den Parameter `where` in Ihren Abfragen verwenden, um nach verschiedenen Eigenschaften zu filtern.
+- Sie können nach mehreren Werten innerhalb des Parameters `where` filtern.

-#### Example using `where`
+#### Beispiel mit `where`

-Query challenges with `failed` outcome:
+Abfrage von Herausforderungen mit `failed`-Ergebnis:

```graphql
{
@@ -152,9 +152,9 @@ Query challenges with `failed` outcome:
}
```

-You can use suffixes like `_gt`, `_lte` for value comparison:
+Sie können Suffixe wie `_gt`, `_lte` für den Wertevergleich verwenden:

-#### Example for range filtering
+#### Beispiel für Range-Filterung

```graphql
{
@@ -166,11 +166,11 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
}
```

-#### Example for block filtering
+#### Beispiel für Block-Filterung

-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
+Sie können mit `_change_block(number_gte: Int)` auch Entitäten filtern, die in oder nach einem bestimmten Block aktualisiert wurden.

-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+Dies kann nützlich sein, wenn Sie nur Entitäten abrufen möchten, die sich geändert haben, z. B. seit der letzten Abfrage. Oder es kann nützlich sein, um zu untersuchen oder zu debuggen, wie sich Entitäten in Ihrem Subgraphen ändern (wenn Sie dies mit einem Blockfilter kombinieren, können Sie nur Entitäten isolieren, die sich in einem bestimmten Block geändert haben).

```graphql
{
@@ -182,11 +182,11 @@ This can be useful if you are looking to fetch only entities which have changed,
}
```

-#### Example for nested entity filtering
+#### Beispiel für die Filterung verschachtelter Entitäten

-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
+Die Filterung nach verschachtelten Entitäten ist in den Feldern mit dem Suffix `_` möglich.

-This can be useful if you are looking to fetch only entities whose child-level entities meet the provided conditions.
+Dies kann nützlich sein, wenn Sie nur die Entitäten abrufen möchten, deren untergeordnete Entitäten die angegebenen Bedingungen erfüllen.

```graphql
{
@@ -200,13 +200,13 @@ This can be useful if you are looking to fetch only entities whose child-level e
}
```

-#### Logical operators
+#### Logische Operatoren

-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+Seit Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Sie mehrere Parameter im selben `where`-Argument gruppieren, indem Sie die `and`- oder `or`-Operatoren verwenden, um Ergebnisse nach mehr als einem Kriterium zu filtern.

-##### `AND` Operator
+##### Operator `AND`

-The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
+Das folgende Beispiel filtert nach Challenges mit `outcome` `succeeded` und `number` größer als oder gleich `100`. 
```graphql
{
@@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```

-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
+> **Syntaktischer Zucker:** Sie können die obige Abfrage vereinfachen, indem Sie den `and`-Operator entfernen und einen durch Kommata getrennten Unterausdruck übergeben.
>
> ```graphql
> {
@@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num
> }
> ```

-##### `OR` Operator
+##### Operator `OR`

-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+Das folgende Beispiel filtert nach Herausforderungen mit `outcome` `succeeded` oder `number` größer oder gleich `100`.

```graphql
{
@@ -250,11 +250,11 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```

-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+> **Hinweis**: Beim Erstellen von Abfragen ist es wichtig, die Auswirkungen der Verwendung des `or`-Operators auf die Leistung zu berücksichtigen. Obwohl `or` ein nützliches Tool zum Erweitern von Suchergebnissen sein kann, kann es auch erhebliche Kosten verursachen. Eines der Hauptprobleme mit `or` ist, dass Abfragen dadurch verlangsamt werden können. Dies liegt daran, dass `or` erfordert, dass die Datenbank mehrere Indizes durchsucht, was ein zeitaufwändiger Prozess sein kann. Um diese Probleme zu vermeiden, wird empfohlen, dass Entwickler wann immer möglich `and`-Operatoren anstelle von `or` verwenden. Dies ermöglicht eine präzisere Filterung und kann zu schnelleren und genaueren Abfragen führen.

-#### All Filters
+#### Alle Filter

-Full list of parameter suffixes:
+Vollständige Liste der Parameter-Suffixe:

```
_
@@ -279,21 +279,21 @@ _not_ends_with
_not_ends_with_nocase
```

-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
+> Bitte beachten Sie, dass einige Suffixe nur für bestimmte Typen unterstützt werden. So unterstützt `Boolean` nur `_not`, `_in` und `_not_in`, aber `_` ist nur für Objekt- und Schnittstellentypen verfügbar.

-In addition, the following global filters are available as part of `where` argument:
+Darüber hinaus sind die folgenden globalen Filter als Teil des Arguments `where` verfügbar:

```graphql
_change_block(number_gte: Int)
```

-### Time-travel queries
+### Time-travel-Abfragen

-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Sie können den Zustand Ihrer Entitäten nicht nur für den letzten Block abfragen, was der Standard ist, sondern auch für einen beliebigen Block in der Vergangenheit. Der Block, zu dem eine Abfrage erfolgen soll, kann entweder durch seine Blocknummer oder seinen Block-Hash angegeben werden, indem ein `block`-Argument in die Toplevel-Felder von Abfragen aufgenommen wird. 
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+Das Ergebnis einer solchen Abfrage wird sich im Laufe der Zeit nicht ändern, d.h. die Abfrage eines bestimmten vergangenen Blocks wird das gleiche Ergebnis liefern, egal wann sie ausgeführt wird, mit der Ausnahme, dass sich das Ergebnis bei einer Abfrage eines Blocks, der sehr nahe am Kopf der Kette liegt, ändern kann, wenn sich herausstellt, dass dieser Block **nicht** in der Hauptkette ist und die Kette umorganisiert wird. Sobald ein Block als endgültig betrachtet werden kann, wird sich das Ergebnis der Abfrage nicht mehr ändern.

-> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+> Hinweis: Die derzeitige Implementierung unterliegt noch bestimmten Beschränkungen, die diese Garantien verletzen könnten. Die Implementierung kann nicht immer erkennen, dass ein bestimmter Block-Hash überhaupt nicht in der Hauptkette ist, oder ob ein Abfrageergebnis durch einen Block-Hash für einen Block, der noch nicht als endgültig gilt, durch eine gleichzeitig mit der Abfrage laufende Blockumstrukturierung beeinflusst werden könnte. Sie haben keinen Einfluss auf die Ergebnisse von Abfragen per Block-Hash, wenn der Block endgültig ist und sich bekanntermaßen in der Hauptkette befindet. In [diesem Issue](https://github.com/graphprotocol/graph-node/issues/1405) werden diese Einschränkungen im Detail erläutert.

#### Beispiel

@@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
+Diese Abfrage gibt die `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten so zurück, wie sie unmittelbar nach der Verarbeitung von Block Nummer 8.000.000 bestanden.

#### Beispiel

@@ -325,26 +325,26 @@ This query will return `Challenge` entities, and their associated `Application` 
}
```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
+Diese Abfrage gibt `Challenge`-Entitäten und die zugehörigen `Application`-Entitäten zurück, wie sie unmittelbar nach der Verarbeitung des Blocks mit dem angegebenen Hash vorhanden waren.

-### Fulltext Search Queries
+### Volltext-Suchanfragen

-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Volltextsuchabfrage-Felder bieten eine aussagekräftige Textsuch-API, die dem Subgraph-Schema hinzugefügt und angepasst werden kann. 
Siehe [Definieren von Volltext-Suchfeldern](/developing/creating-a-subgraph/#defining-fulltext-search-fields), um die Volltextsuche zu Ihrem Subgraph hinzuzufügen.

-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Volltextsuchanfragen haben ein erforderliches Feld, `text`, für die Eingabe von Suchbegriffen. Mehrere spezielle Volltext-Operatoren sind verfügbar, die in diesem `text`-Suchfeld verwendet werden können.

-Fulltext search operators:
+Volltext-Suchoperatoren:

-| Symbol | Operator | Beschreibung |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol | Operator | Beschreibung |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `&` | `And` | Zum Kombinieren mehrerer Suchbegriffe zu einem Filter für Entitäten, die alle bereitgestellten Begriffe enthalten |
+| `\|` | `Or` | Abfragen mit mehreren durch den Operator `or` getrennten Suchbegriffen geben alle Entitäten mit einer Übereinstimmung mit einem der bereitgestellten Begriffe zurück |
+| `<->` | `Follow by` | Geben Sie den Abstand zwischen zwei Wörtern an. |
+| `:*` | `Prefix` | Verwenden Sie den Präfix-Suchbegriff, um Wörter zu finden, deren Präfix übereinstimmt (2 Zeichen erforderlich) |

#### Beispiele

-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+Mit dem Operator `or` filtert diese Abfrage nach Blog-Entitäten mit Variationen von entweder „anarchism“ oder „crumpet“ in ihren Volltextfeldern.

```graphql
{
@@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```

-The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+Der Operator `follow by` gibt Wörter an, die in den Volltextdokumenten einen bestimmten Abstand zueinander haben. Die folgende Abfrage gibt alle Blogs mit Variationen von „decentralize“ gefolgt von „philosophy“ zurück

```graphql
{
@@ -370,7 +370,7 @@ The `follow by` operator specifies a words a specific distance apart in the full
}
```

-Combine fulltext operators to make more complex filters. With a pretext search operator combined with a follow by this example query will match all blog entities with words that start with "lou" followed by "music".
+Kombinieren Sie Volltextoperatoren, um komplexere Filter zu erstellen. In Kombination eines Präfix-Suchoperators mit `follow by` gleicht diese Beispielabfrage alle Blog-Entitäten mit Wörtern ab, die mit „lou“ beginnen, gefolgt von „music“.

```graphql
{
@@ -385,25 +385,25 @@ Combine fulltext operators to make more complex filters. 
With a pretext search o
### Validierung

-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
+Graph Node implementiert die [spezifikationsbasierte](https://spec.graphql.org/October2021/#sec-Validation) Validierung der empfangenen GraphQL-Abfragen mit [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), die auf der [graphql-js-Referenzimplementierung](https://github.com/graphql/graphql-js/tree/main/src/validation) basiert. Abfragen, die eine Validierungsregel nicht erfüllen, werden mit einem Standardfehler angezeigt - besuchen Sie die [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation), um mehr zu erfahren.

## Schema

-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+Das Schema Ihrer Datenquellen, d. h. die Entitätstypen, Werte und Beziehungen, die zur Abfrage zur Verfügung stehen, werden über die [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System) definiert.

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). 
+GraphQL-Schemata definieren im Allgemeinen Wurzeltypen für `queries`, `subscriptions` und `mutations`. The Graph unterstützt nur `queries`. Der Root-Typ `Query` für Ihren Subgraph wird automatisch aus dem GraphQL-Schema generiert, das in Ihrem [Subgraph-Manifest](/developing/creating-a-subgraph/#components-of-a-subgraph) enthalten ist.

-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Hinweis: Unsere API stellt keine Mutationen zur Verfügung, da von den Entwicklern erwartet wird, dass sie aus ihren Anwendungen heraus Transaktionen direkt gegen die zugrunde liegende Blockchain durchführen.

-### Entities
+### Entitäten

-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+Alle GraphQL-Typen mit `@entity`-Direktiven in Ihrem Schema werden als Entitäten behandelt und müssen ein `ID`-Feld haben.

-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+> **Hinweis:** Derzeit müssen alle Typen in Ihrem Schema eine `@entity`-Direktive haben. In Zukunft werden wir Typen ohne `@entity`-Direktive als Wertobjekte behandeln, aber dies wird noch nicht unterstützt.

-### Subgraph Metadata
+### Subgraph-Metadaten

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+Alle Subgraphen haben ein automatisch generiertes `_Meta_`-Objekt, das Zugriff auf die Metadaten des Subgraphen bietet. Dieses kann wie folgt abgefragt werden:

```graphQL
{
@@ -419,14 +419,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. 
If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+Wenn ein Block angegeben wird, gelten die Metadaten ab diesem Block, andernfalls wird der zuletzt indizierte Block verwendet. Falls angegeben, muss der Block nach dem Startblock des Subgraphen liegen und kleiner oder gleich dem zuletzt indizierten Block sein.

-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+`deployment` ist eine eindeutige ID, die der IPFS CID der Datei `subgraph.yaml` entspricht.

-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
+`block` liefert Informationen über den letzten Block (unter Berücksichtigung aller an `_meta` übergebenen Blockeinschränkungen):

-- hash: the hash of the block
-- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks)
+- hash: der Hash des Blocks
+- number: die Blocknummer
+- timestamp: der Zeitstempel des Blocks, falls verfügbar (dies ist derzeit nur für Subgraphen verfügbar, die EVM-Netzwerke indizieren)

-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` ist ein boolescher Wert, der angibt, ob der Subgraph in einem vergangenen Block auf Indizierungsfehler gestoßen ist.
diff --git a/website/src/pages/de/subgraphs/querying/introduction.mdx b/website/src/pages/de/subgraphs/querying/introduction.mdx
index 58a720de4509..d889e2efc3d6 100644
--- a/website/src/pages/de/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/de/subgraphs/querying/introduction.mdx
@@ -1,32 +1,32 @@
---
-title: Querying The Graph
+title: The Graph abfragen
sidebarTitle: Einführung
---

-To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). 
+Um sofort mit der Abfrage zu beginnen, besuchen Sie [The Graph Explorer](https://thegraph.com/explorer).

## Überblick

-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+Wenn ein Subgraph in The Graph Network veröffentlicht wird, können Sie die Detailseite des Subgraphen im Graph Explorer besuchen und die Registerkarte „Abfrage“ verwenden, um die eingesetzte GraphQL-API für jeden Subgraphen zu erkunden.

## Besonderheiten

-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Jeder im The Graph Network veröffentlichte Subgraph hat eine eindeutige Abfrage-URL im Graph Explorer, um direkte Abfragen durchzuführen. Sie finden sie, indem Sie zur Detailseite des Subgraphen navigieren und auf die Schaltfläche „Abfrage“ in der oberen rechten Ecke klicken.

-![Query Subgraph Button](/img/query-button-screenshot.png)
+![Abfrage-Subgraphen-Schaltfläche](/img/query-button-screenshot.png)

-![Query Subgraph URL](/img/query-url-screenshot.png)
+![Abfrage-Subgraph URL](/img/query-url-screenshot.png)

-You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/).
+Sie werden feststellen, dass diese Abfrage-URL einen eindeutigen API-Schlüssel verwenden muss. Sie können Ihre API-Schlüssel in [Subgraph Studio](https://thegraph.com/studio) unter dem Abschnitt „API-Schlüssel“ erstellen und verwalten. Erfahren Sie mehr über die Verwendung von Subgraph Studio [hier](/deploying/subgraph-studio/). 
-Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
+Benutzer von Subgraph Studio starten mit einem kostenlosen Plan, der ihnen 100.000 Abfragen pro Monat erlaubt. Zusätzliche Abfragen sind mit dem Growth Plan möglich, der nutzungsbasierte Preise für zusätzliche Abfragen bietet, zahlbar per Kreditkarte oder GRT auf Arbitrum. Sie können mehr über die Abrechnung [hier](/subgraphs/billing/) erfahren.

-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> In der [Abfrage-API](/subgraphs/querying/graphql-api/) finden Sie eine vollständige Anleitung zur Abfrage der Entitäten des Subgraphen.
 >
-> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
+> Hinweis: Wenn Sie bei einer GET-Anfrage an die Graph Explorer-URL 405-Fehler erhalten, wechseln Sie bitte zu einer POST-Anfrage.

 ### Zusätzliche Ressourcen

-- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/).
-- To query from an application, click [here](/subgraphs/querying/from-an-application/).
-- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
+- Verwenden Sie [GraphQL-Abfrage-Best-Practices](/subgraphs/querying/best-practices/).
+- Um von einer Anwendung aus abzufragen, klicken Sie [hier](/subgraphs/querying/from-an-application/).
+- Sehen Sie sich [Abfragebeispiele](https://github.com/graphprotocol/query-examples/tree/main) an.
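Der Hinweis zu den 405-Fehlern oben lässt sich mit wenigen Zeilen Python veranschaulichen. Die folgende Skizze verwendet nur die Python-Standardbibliothek; die Gateway-URL und `[api-key]` sind Platzhalter, die Sie durch Ihre eigenen Werte aus Subgraph Studio ersetzen müssen:

```python
import json
import urllib.request

# Platzhalter: [api-key] und SUBGRAPH_ID durch eigene Werte aus Subgraph Studio ersetzen.
QUERY_URL = "https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/SUBGRAPH_ID"

def build_graphql_request(url: str, query: str, variables=None) -> urllib.request.Request:
    """Verpackt eine GraphQL-Abfrage als JSON-POST (eine GET-Anfrage kann 405-Fehler auslösen)."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_graphql_request(QUERY_URL, "{ _meta { block { number } } }")
print(req.get_method())  # POST
# Senden (erst mit echtem API-Schlüssel): json.load(urllib.request.urlopen(req))
```

Die eigentliche Netzwerkanfrage wird hier bewusst nicht ausgeführt; der Aufbau der POST-Anfrage ist aber vollständig.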
diff --git a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
index 45ead286cf8a..cc71c6e7afd0 100644
--- a/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/de/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,34 @@
 ---
-title: Managing API keys
+title: Verwalten von API-Schlüsseln
 ---

 ## Überblick

-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API-Schlüssel werden für die Abfrage von Subgraphen benötigt. Sie stellen sicher, dass die Verbindungen zwischen Anwendungsdiensten gültig und autorisiert sind, einschließlich der Authentifizierung des Endnutzers und des Geräts, das die Anwendung verwendet.

-### Create and Manage API Keys
+### Erstellen und Verwalten von API-Schlüsseln

-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und klicken Sie auf die Registerkarte **API-Schlüssel**, um Ihre API-Schlüssel für bestimmte Subgraphen zu erstellen und zu verwalten.

-The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+Die Tabelle „API-Schlüssel“ listet die vorhandenen API-Schlüssel auf und ermöglicht es Ihnen, diese zu verwalten oder zu löschen. Für jeden Schlüssel können Sie seinen Status, die Kosten für den aktuellen Zeitraum, das Ausgabenlimit für den aktuellen Zeitraum und die Gesamtzahl der Abfragen sehen.
-You can click the "three dots" menu to the right of a given API key to:
+Sie können auf das Menü mit den „drei Punkten“ rechts neben einem bestimmten API-Schlüssel klicken, um:

-- Rename API key
-- Regenerate API key
-- Delete API key
-- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month).
+- den API-Schlüssel umzubenennen
+- den API-Schlüssel neu zu generieren
+- den API-Schlüssel zu löschen
+- das Ausgabenlimit zu verwalten: Dies ist ein optionales monatliches Ausgabenlimit für einen bestimmten API-Schlüssel, in USD. Dieses Limit gilt pro Abrechnungszeitraum (Kalendermonat).

-### API Key Details
+### API-Schlüssel-Details

-You can click on an individual API key to view the Details page:
+Sie können auf einen einzelnen API-Schlüssel klicken, um die Detailseite anzuzeigen:

-1. Under the **Overview** section, you can:
-   - Edit your key name
-   - Regenerate API keys
-   - View the current usage of the API key with stats:
-     - Number of queries
-     - Amount of GRT spent
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
-   - View and manage the domain names authorized to use your API key
-   - Assign subgraphs that can be queried with your API key
+1. Unter dem Abschnitt **Übersicht** können Sie:
+   - den Namen Ihres Schlüssels bearbeiten
+   - API-Schlüssel neu generieren
+   - die aktuelle Nutzung des API-Schlüssels mit Statistiken anzeigen:
+     - Anzahl der Abfragen
+     - Ausgegebener GRT-Betrag
+2. Unter dem Abschnitt **Sicherheit** können Sie je nach gewünschter Kontrollstufe Sicherheitseinstellungen vornehmen. 
Im Einzelnen können Sie:
+   - die Domainnamen anzeigen und verwalten, die zur Verwendung Ihres API-Schlüssels berechtigt sind
+   - Subgraphen zuweisen, die mit Ihrem API-Schlüssel abgefragt werden können
diff --git a/website/src/pages/de/subgraphs/querying/python.mdx b/website/src/pages/de/subgraphs/querying/python.mdx
index a6640d513d6e..389e6f56a12c 100644
--- a/website/src/pages/de/subgraphs/querying/python.mdx
+++ b/website/src/pages/de/subgraphs/querying/python.mdx
@@ -1,57 +1,57 @@
 ---
-title: Query The Graph with Python and Subgrounds
+title: Abfrage von The Graph mit Python und Subgrounds
 sidebarTitle: Python (Subgrounds)
 ---

-Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
+Subgrounds ist eine intuitive Python-Bibliothek zur Abfrage von Subgraphen, entwickelt von [Playgrounds](https://playgrounds.network/). Sie ermöglicht es Ihnen, Subgraph-Daten direkt mit einer Python-Datenumgebung zu verbinden, so dass Sie Bibliotheken wie [pandas](https://pandas.pydata.org/) für die Datenanalyse verwenden können!

-Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations.
+Subgrounds bietet eine einfache Pythonic-API für die Erstellung von GraphQL-Abfragen, automatisiert mühsame Arbeitsabläufe wie die Paginierung und ermöglicht fortgeschrittenen Nutzern kontrollierte Schema-Transformationen.

 ## Erste Schritte

-Subgrounds requires Python 3.10 or higher and is available on [pypi](https://pypi.org/project/subgrounds/).
+Subgrounds erfordert Python 3.10 oder höher und ist auf [pypi](https://pypi.org/project/subgrounds/) verfügbar.
```bash
pip install --upgrade subgrounds
-# or
+# oder
python -m pip install --upgrade subgrounds
```

-Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Nach der Installation können Sie Subgrounds mit der folgenden Abfrage testen. Das folgende Beispiel lädt einen Subgraphen für das Aave-v2-Protokoll und fragt die Top 5 Märkte geordnet nach TVL (Total Value Locked) ab, wählt ihren Namen und ihren TVL (in USD) aus und gibt die Daten als pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) zurück.

```python
from subgrounds import Subgrounds

sg = Subgrounds()

-# Load the subgraph
+# Laden des Subgraphen
aave_v2 = sg.load_subgraph(
-    "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
+    "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")

-# Construct the query
+# Konstruieren der Abfrage
latest_markets = aave_v2.Query.markets(
    orderBy=aave_v2.Market.totalValueLockedUSD,
    orderDirection='desc',
    first=5,
)
-# Return query to a dataframe
+# Abfrage als DataFrame zurückgeben
sg.query_df([
    latest_markets.name,
    latest_markets.totalValueLockedUSD,
])
```

-## Documentation
+## Dokumentation

-Subgrounds is built and maintained by the [Playgrounds](https://playgrounds.network/) team and can be accessed on the [Playgrounds docs](https://docs.playgrounds.network/subgrounds).
+Subgrounds wird vom [Playgrounds](https://playgrounds.network/) Team entwickelt und gewartet und kann in den [Playgrounds docs](https://docs.playgrounds.network/subgrounds) eingesehen werden.
-Since subgrounds has a large feature set to explore, here are some helpful starting places:
+Da Subgrounds einen großen Funktionsumfang hat, den es zu erkunden gilt, finden Sie hier einige hilfreiche Startpunkte:

-- [Getting Started with Querying](https://docs.playgrounds.network/subgrounds/getting_started/basics/)
-  - A good first step for how to build queries with subgrounds.
-- [Building Synthetic Fields](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/)
-  - A gentle introduction to defining synthetic fields that transform data defined from the schema.
-- [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/)
-  - Learn how to level up your queries by parallelizing them.
-- [Exporting Data to CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/)
-  - A quick article on how to seamlessly save your data as CSVs for further analysis.
+- [Erste Schritte mit Abfragen](https://docs.playgrounds.network/subgrounds/getting_started/basics/)
+  - Ein guter erster Schritt für die Erstellung von Abfragen mit Subgrounds.
+- [Aufbau synthetischer Felder](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/)
+  - Eine sanfte Einführung in die Definition synthetischer Felder, die aus dem Schema definierte Daten umwandeln.
+- [Gleichzeitige Abfragen](https://docs.playgrounds.network/subgrounds/getting_started/async/)
+  - Lernen Sie, wie Sie Ihre Abfragen durch Parallelisierung verbessern können.
+- [Exportieren von Daten in CSV-Dateien](https://docs.playgrounds.network/subgrounds/faq/exporting/)
+  - Ein kurzer Artikel darüber, wie Sie Ihre Daten nahtlos als CSV-Dateien für weitere Analysen speichern können.
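Passend zum letzten Punkt (CSV-Export): Das von `sg.query_df` zurückgegebene pandas-DataFrame lässt sich mit `to_csv` direkt speichern. Die folgende Skizze zeigt dieselbe Idee nur mit der Python-Standardbibliothek; die Beispieldaten sind frei erfunden und stehen stellvertretend für das Ergebnis einer Subgraph-Abfrage:

```python
import csv
import io

# Erfundene Beispieldaten anstelle einer echten Subgraph-Abfrage (nur zur Illustration).
rows = [
    {"name": "Market A", "totalValueLockedUSD": 1200000.0},
    {"name": "Market B", "totalValueLockedUSD": 850000.0},
]

def rows_to_csv(rows: list) -> str:
    """Schreibt eine Liste von Dicts als CSV-Text; die Kopfzeile stammt aus den Schlüsseln."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

csv_text = rows_to_csv(rows)
print(csv_text.splitlines()[0])  # name,totalValueLockedUSD
```

Den String können Sie anschließend in eine Datei schreiben und in beliebigen Tools weiterverwenden.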
diff --git a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..b35d7d952215 100644
--- a/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/de/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -1,27 +1,27 @@
 ---
-title: Subgraph ID vs Deployment ID
+title: Subgraph-ID vs. Bereitstellungs-ID
 ---

-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+Ein Subgraph wird durch eine Subgraph-ID identifiziert, und jede Version des Subgraphen wird durch eine Bereitstellungs-ID identifiziert.

-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+Bei der Abfrage eines Subgraphen kann jede der beiden IDs verwendet werden, obwohl im Allgemeinen empfohlen wird, die Bereitstellungs-ID zu verwenden, da sie eine bestimmte Version eines Subgraphen angeben kann.

-Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png)
+Hier sind einige wichtige Unterschiede zwischen den beiden IDs: ![](/img/subgraph-id-vs-deployment-id.png)

-## Deployment ID
+## Bereitstellungs-ID

-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+Die Bereitstellungs-ID ist der IPFS-Hash der kompilierten Manifestdatei, die auf andere Dateien im IPFS statt auf relative URLs auf dem Computer verweist. Auf das kompilierte Manifest kann zum Beispiel zugegriffen werden über: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Um die Bereitstellungs-ID zu ändern, kann man einfach die Manifestdatei aktualisieren, z. B. durch Ändern des Beschreibungsfeldes, wie in der [Dokumentation des Subgraph-Manifests](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api) beschrieben.

-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+Wenn Abfragen unter Verwendung der Bereitstellungs-ID eines Subgraphen durchgeführt werden, geben wir eine Version dieses Subgraphen zur Abfrage an. Die Verwendung der Bereitstellungs-ID zur Abfrage einer bestimmten Subgraphenversion führt zu einer ausgefeilteren und robusteren Einrichtung, da die volle Kontrolle über die abgefragte Subgraphenversion besteht. Dies hat jedoch zur Folge, dass der Abfragecode jedes Mal manuell aktualisiert werden muss, wenn eine neue Version des Subgraphen veröffentlicht wird.

-Example endpoint that uses Deployment ID:
+Beispiel für einen Endpunkt, der die Bereitstellungs-ID verwendet:

 `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB`

 ## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph.
It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+Die Subgraph-ID ist ein eindeutiger Bezeichner für einen Subgraphen. Sie bleibt über alle Versionen eines Subgraphen hinweg konstant. Es wird empfohlen, die Subgraph-ID zu verwenden, um die neueste Version eines Subgraphen abzufragen, obwohl es einige Einschränkungen gibt.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Beachten Sie, dass Abfragen unter Verwendung der Subgraph-ID dazu führen können, dass Abfragen von einer älteren Version des Subgraphen beantwortet werden, da die neue Version Zeit zum Synchronisieren benötigt. Außerdem könnten neue Versionen grundlegende Änderungen (Breaking Changes) am Schema mit sich bringen.

-Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+Beispiel-Endpunkt, der die Subgraph-ID verwendet: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/de/subgraphs/quick-start.mdx b/website/src/pages/de/subgraphs/quick-start.mdx
index 91172561a67d..fed57b3cd41a 100644
--- a/website/src/pages/de/subgraphs/quick-start.mdx
+++ b/website/src/pages/de/subgraphs/quick-start.mdx
@@ -2,24 +2,24 @@
 title: Schnellstart
 ---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Erfahren Sie, wie Sie auf einfache Weise einen [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) auf The Graph erstellen, veröffentlichen und abfragen können.
-## Prerequisites
+## Voraussetzungen

 - Eine Krypto-Wallet
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- Eine Smart-Contract-Adresse in einem [unterstützten Netzwerk](/supported-networks/)
+- [Node.js](https://nodejs.org/) installiert
+- Ein Paketmanager Ihrer Wahl (`npm`, `yarn` oder `pnpm`)

-## How to Build a Subgraph
+## Wie man einen Subgraphen erstellt

-### 1. Create a subgraph in Subgraph Studio
+### 1. Erstellen Sie einen Subgraphen in Subgraph Studio

 Gehen Sie zu [Subgraph Studio](https://thegraph.com/studio/) und verbinden Sie Ihre Wallet. Mit Subgraph Studio können Sie Subgraphen erstellen, verwalten, bereitstellen und veröffentlichen sowie API-Schlüssel erstellen und verwalten.

-Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Klicken Sie auf „Einen Subgraphen erstellen“. Es wird empfohlen, den Subgraph in Title Case zu benennen: „Subgraph Name Chain Name“.

 ### 2. Installieren der Graph-CLI

@@ -37,54 +37,54 @@ Verwendung von [yarn](https://yarnpkg.com/):

 yarn global add @graphprotocol/graph-cli
 ```

-### 3. Initialize your subgraph
+### 3. Initialisieren Sie Ihren Subgraphen

-> Die Befehle für Ihren spezifischen Subgraphen finden Sie auf der Subgraphen-Seite in [Subgraph Studio](https://thegraph.com/studio/).
+> Sie finden die Befehle für Ihren spezifischen Subgraphen auf der Subgraphen-Seite in [Subgraph Studio](https://thegraph.com/studio/).

-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+Der Befehl `graph init` erstellt automatisch ein Gerüst eines Subgraphen auf der Grundlage der Ereignisse Ihres Vertrags.
-Mit dem folgenden Befehl wird Ihr Subgraph aus einem bestehenden Vertrag initialisiert:
+Der folgende Befehl initialisiert Ihren Subgraphen anhand eines bestehenden Vertrags:

 ```sh
 graph init
 ```

-If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
+Wenn Ihr Vertrag auf dem jeweiligen Blockscanner, auf dem er bereitgestellt wurde (z. B. [Etherscan](https://etherscan.io/)), verifiziert ist, wird die ABI automatisch in der CLI erstellt.

-When you initialize your subgraph, the CLI will ask you for the following information:
+Wenn Sie Ihren Subgraphen initialisieren, werden Sie von der CLI nach den folgenden Informationen gefragt:

-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
-- **Contract address**: Locate the smart contract address you’d like to query data from.
-- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
-- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
-- **Add another contract** (optional): You can add another contract.
+- **Protokoll**: Wählen Sie das Protokoll, mit dem Ihr Subgraph Daten indizieren soll.
+- **Subgraph-Slug**: Erstellen Sie einen Namen für Ihren Subgraphen. 
Ihr Subgraph-Slug ist ein Bezeichner für Ihren Subgraphen.
+- **Verzeichnis**: Wählen Sie ein Verzeichnis, in dem Sie Ihren Subgraphen erstellen möchten.
+- **Ethereum-Netzwerk** (optional): Möglicherweise müssen Sie angeben, von welchem EVM-kompatiblen Netzwerk Ihr Subgraph Daten indizieren soll.
+- **Vertragsadresse**: Suchen Sie die Adresse des Smart Contracts, von dem Sie Daten abfragen möchten.
+- **ABI**: Wenn die ABI nicht automatisch ausgefüllt wird, müssen Sie sie manuell als JSON-Datei eingeben.
+- **Startblock**: Sie sollten den Startblock eingeben, um die Subgraph-Indizierung von Blockchain-Daten zu optimieren. Ermitteln Sie den Startblock, indem Sie den Block suchen, in dem Ihr Vertrag bereitgestellt wurde.
+- **Vertragsname**: Geben Sie den Namen Ihres Vertrags ein.
+- **Vertragsereignisse als Entitäten indizieren**: Es wird empfohlen, dies auf „true“ zu setzen, da es automatisch Mappings zu Ihrem Subgraph für jedes emittierte Ereignis hinzufügt.
+- **Einen weiteren Vertrag hinzufügen** (optional): Sie können einen weiteren Vertrag hinzufügen.

-Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Untergraphen ( Subgraph ) erwarten können:
+Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Subgraphen erwarten können:

-![Subgraph command](/img/CLI-Example.png)
+![Subgraph-Befehl](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Bearbeiten Sie Ihren Subgraphen

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+Der `init`-Befehl im vorherigen Schritt erzeugt einen Gerüst-Subgraphen, den Sie als Ausgangspunkt für den Aufbau Ihres Subgraphen verwenden können.
-When making changes to the subgraph, you will mainly work with three files:
+Wenn Sie Änderungen am Subgraphen vornehmen, werden Sie hauptsächlich mit drei Dateien arbeiten:

 - Manifest (`subgraph.yaml`) - definiert, welche Datenquellen Ihr Subgraph indizieren wird.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Schema (`schema.graphql`) - legt fest, welche Daten Sie aus dem Subgraphen abrufen möchten.
 - AssemblyScript Mappings (mapping.ts) - Dies ist der Code, der die Daten aus Ihren Datenquellen in die im Schema definierten Entitäten übersetzt.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+Eine detaillierte Aufschlüsselung, wie Sie Ihren Subgraphen schreiben, finden Sie unter [Erstellen eines Subgraphen](/developing/creating-a-subgraph/).

-### 5. Deploy your subgraph
+### 5. Stellen Sie Ihren Subgraphen bereit

-> Remember, deploying is not the same as publishing.
+> Denken Sie daran, dass die Bereitstellung nicht dasselbe ist wie die Veröffentlichung.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+Wenn Sie einen Subgraphen **bereitstellen**, schieben Sie ihn in das [Subgraph Studio](https://thegraph.com/studio/), wo Sie ihn testen, in einer Staging-Umgebung ausführen und überprüfen können. 
Die Indizierung eines bereitgestellten Subgraphen wird vom [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/) durchgeführt, einem einzelnen Indexierer, der sich im Besitz von Edge & Node befindet und von diesem betrieben wird, und nicht von den vielen dezentralen Indexierern in The Graph Network. Ein **bereitgestellter** Subgraph ist frei nutzbar, ratenbegrenzt, für die Öffentlichkeit nicht sichtbar und für Entwicklungs-, Staging- und Testzwecke gedacht.

Sobald Ihr Subgraph geschrieben ist, führen Sie die folgenden Befehle aus:

@@ -94,9 +94,9 @@ graph codegen && graph build
 ```
 ````

-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authentifizieren Sie sich und stellen Sie Ihren Subgraphen bereit. Den Bereitstellungsschlüssel finden Sie auf der Seite des Subgraphen in Subgraph Studio.

-![Deploy key](/img/subgraph-studio-deploy-key.jpg)
+![Deploy-Schlüssel](/img/subgraph-studio-deploy-key.jpg)

 ````
 ```sh
 graph deploy
 ```
 ````

-The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
+Die CLI fragt nach einer Versionsbezeichnung. Es wird dringend empfohlen, [semantische Versionierung](https://semver.org/) zu verwenden, z. B. `0.0.1`.

-### 6. Review your subgraph
+### 6. Überprüfen Sie Ihren Subgraphen

-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+Wenn Sie Ihren Subgraphen vor der Veröffentlichung testen möchten, können Sie mit [Subgraph Studio](https://thegraph.com/studio/) Folgendes tun:

 - Führen Sie eine Testabfrage durch.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analysieren Sie Ihren Subgraphen im Dashboard, um Informationen zu überprüfen. 
+- Überprüfen Sie die Protokolle auf dem Dashboard, um zu sehen, ob es irgendwelche Fehler mit Ihrem Subgraphen gibt. Die Protokolle eines funktionierenden Subgraphen sehen wie folgt aus:

 ![Subgraph logs](/img/subgraph-logs-image.png)

-### 7. Publish your subgraph to The Graph Network
+### 7. Veröffentlichen Sie Ihren Subgraphen im The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+Wenn Ihr Subgraph bereit für eine Produktionsumgebung ist, können Sie ihn im dezentralen Netzwerk veröffentlichen. Die Veröffentlichung ist eine Onchain-Aktion, die Folgendes bewirkt:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- Es macht Ihren Subgraphen verfügbar, um von den dezentralisierten [Indexierern](/indexing/overview/) auf The Graph Network indiziert zu werden.
+- Es hebt Ratenbeschränkungen auf und macht Ihren Subgraphen öffentlich durchsuchbar und abfragbar im [Graph Explorer](https://thegraph.com/explorer/).
+- Es macht Ihren Subgraphen für [Kuratoren](/resources/roles/curating/) verfügbar, um ihn zu kuratieren.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> Je mehr GRT Sie und andere auf Ihrem Subgraphen kuratieren, desto mehr Indexierer werden dazu angeregt, Ihren Subgraphen zu indizieren, was die Servicequalität verbessert, die Latenzzeit reduziert und die Netzwerkredundanz für Ihren Subgraphen erhöht.
#### Veröffentlichung mit Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+Um Ihren Subgraphen zu veröffentlichen, klicken Sie auf die Schaltfläche „Veröffentlichen“ im Dashboard.

-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+![Veröffentlichen eines Subgraphen auf Subgraph Studio](/img/publish-sub-transfer.png)

-Select the network to which you would like to publish your subgraph.
+Wählen Sie das Netzwerk aus, in dem Sie Ihren Subgraphen veröffentlichen möchten.

 #### Veröffentlichen über die CLI

-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+Ab Version 0.73.0 können Sie Ihren Subgraphen auch mit der Graph-CLI veröffentlichen.

 Öffnen Sie den `graph-cli`.

@@ -147,10 +147,10 @@ Verwenden Sie die folgenden Befehle:

 ````
 ```sh
-graph codegen && graph build
+graph codegen && graph build
 ```

-Then,
+Dann,

 ```sh
 graph publish
 ```
 ````

 ![cli-ui](/img/cli-ui.png)

-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+Wie Sie Ihre Bereitstellung anpassen können, erfahren Sie unter [Veröffentlichen eines Subgraphen](/subgraphs/developing/publishing/publishing-a-subgraph/).

 #### Hinzufügen von Signalen zu Ihrem Subgraphen

-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. Um Indexierer für die Abfrage Ihres Subgraphen zu gewinnen, sollten Sie ihn mit einem GRT-Kurationssignal versehen.

-   - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+   - Diese Maßnahme verbessert die Servicequalität, verringert die Latenz und erhöht die Netzwerkredundanz und -verfügbarkeit für Ihren Subgraphen.

 2. Indexer erhalten GRT Rewards auf der Grundlage des signalisierten Betrags, wenn sie für Indexing Rewards in Frage kommen. 
- - Es wird empfohlen, mindestens 3.000 GRT zu kuratieren, um 3 Indexer anzuziehen. Prüfen Sie die Berechtigung zum Reward anhand der Nutzung der Subgraph-Funktionen und der unterstützten Netzwerke.
+ - Es wird empfohlen, mindestens 3.000 GRT zu kuratieren, um 3 Indexierer zu gewinnen. Prüfen Sie die Berechtigung für Belohnungen anhand der genutzten Subgraph-Funktionen und der unterstützten Netzwerke.

-To learn more about curation, read [Curating](/resources/roles/curating/).
+Um mehr über das Kuratieren zu erfahren, lesen Sie [Kuratieren](/resources/roles/curating/).

-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
+Um Gaskosten zu sparen, können Sie Ihren Subgraphen in der gleichen Transaktion kuratieren, in der Sie ihn veröffentlichen, indem Sie diese Option wählen:

 ![Subgraph veröffentlichen](/img/studio-publish-modal.png)

-### 8. Query your subgraph
+### 8. Abfrage des Subgraphen

-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+Sie haben jetzt Zugang zu 100.000 kostenlosen Abfragen pro Monat mit Ihrem Subgraphen auf The Graph Network!

-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+Sie können Ihren Subgraphen abfragen, indem Sie GraphQL-Abfragen an seine Abfrage-URL senden, die Sie durch Klicken auf die Schaltfläche „Abfrage“ finden können.

-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+Weitere Informationen zur Abfrage von Daten aus Ihrem Subgraphen finden Sie unter [Querying The Graph](/subgraphs/querying/introduction/).
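Zur Veranschaulichung des letzten Schritts (Abfrage des Subgraphen): Ein GraphQL-Endpunkt liefert JSON mit einem `data`-Feld und bei Problemen einem `errors`-Feld. Die folgende Skizze wertet eine solche Antwort in Python aus; die Beispieldaten sind frei erfunden und dienen nur der Illustration:

```python
import json

# Beispielhafte (erfundene) JSON-Antwort, wie sie ein GraphQL-Endpunkt typischerweise liefert.
sample_response = json.loads("""
{
  "data": {
    "tokens": [
      {"id": "0x01", "symbol": "GRT"},
      {"id": "0x02", "symbol": "ETH"}
    ]
  }
}
""")

def extract_entities(response: dict, entity: str) -> list:
    """Liest die Entitätsliste aus dem "data"-Feld; GraphQL-Fehler werden als Exception gemeldet."""
    if "errors" in response:
        raise RuntimeError(f"GraphQL-Fehler: {response['errors']}")
    return response["data"][entity]

tokens = extract_entities(sample_response, "tokens")
print([t["symbol"] for t in tokens])  # ['GRT', 'ETH']
```

Dieselbe Auswertung funktioniert für jede Entität, die Sie in Ihrem Schema definiert und abgefragt haben.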
diff --git a/website/src/pages/de/substreams/_meta-titles.json b/website/src/pages/de/substreams/_meta-titles.json
index 6262ad528c3a..cf75f2729d64 100644
--- a/website/src/pages/de/substreams/_meta-titles.json
+++ b/website/src/pages/de/substreams/_meta-titles.json
@@ -1,3 +1,3 @@
 {
-  "developing": "Developing"
+  "developing": "Entwicklung"
 }
diff --git a/website/src/pages/de/substreams/developing/_meta-titles.json b/website/src/pages/de/substreams/developing/_meta-titles.json
index 882ee9fc7c9c..8170106cbff4 100644
--- a/website/src/pages/de/substreams/developing/_meta-titles.json
+++ b/website/src/pages/de/substreams/developing/_meta-titles.json
@@ -1,4 +1,4 @@
 {
   "solana": "Solana",
-  "sinks": "Sink your Substreams"
+  "sinks": "Senken für Ihre Substreams"
 }
diff --git a/website/src/pages/de/substreams/developing/dev-container.mdx b/website/src/pages/de/substreams/developing/dev-container.mdx
index bd4acf16eec7..8e4a49286f43 100644
--- a/website/src/pages/de/substreams/developing/dev-container.mdx
+++ b/website/src/pages/de/substreams/developing/dev-container.mdx
@@ -3,46 +3,46 @@ title: Substreams Dev Container
 sidebarTitle: Dev Container
 ---

-Develop your first project with Substreams Dev Container.
+Entwickeln Sie Ihr erstes Projekt mit dem Substreams Dev Container.

-## What is a Dev Container?
+## Was ist ein Dev Container?

-It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
+Es ist ein Tool, mit dem Sie Ihr erstes Projekt erstellen können. Sie können es entweder aus der Ferne über GitHub Codespaces oder lokal durch Klonen des [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file) ausführen.
-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Innerhalb des Dev Containers richtet der Befehl `substreams init` ein codegeneriertes Substreams-Projekt ein, mit dem Sie auf einfache Weise einen Subgraph oder eine SQL-basierte Lösung für die Datenverarbeitung erstellen können. -## Prerequisites +## Voraussetzungen -- Ensure Docker and VS Code are up-to-date. +- Stellen Sie sicher, dass Docker und VS Code auf dem neuesten Stand sind. -## Navigating the Dev Container +## Navigieren im Dev Container -In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files. +Im Dev Container können Sie entweder Ihre eigene `substreams.yaml` erstellen oder importieren und Module innerhalb des Minimalpfades assoziieren oder sich für die automatisch generierten Substreams-Pfade entscheiden. Wenn Sie dann `Substreams Build` ausführen, werden die Protobuf-Dateien generiert. -### Options +### Optionen -- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users. -- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box. +- **Minimal**: Beginnt mit dem Rohblock `.proto` und erfordert Entwicklung. Dieser Pfad ist für erfahrene Benutzer gedacht. +- **Nicht-Minimal**: Extrahiert gefilterte Daten unter Verwendung von netzspezifischen Caches und Protobufs aus den entsprechenden Grundmodulen (die vom StreamingFast-Team gepflegt werden). Dieser Pfad generiert ohne weitere Anpassungen einen funktionsfähigen Substreams.
-To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using: +Um Ihre Arbeit mit der breiteren Community zu teilen, veröffentlichen Sie Ihr `.spkg` im [Substreams registry](https://substreams.dev/): - `substreams registry login` - `substreams registry publish` -> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools. +> Hinweis: Wenn Sie im Dev Container auf Probleme stoßen, verwenden Sie den Befehl `help`, um auf Tools zur Fehlerbehebung zuzugreifen. -## Building a Sink for Your Project +## Aufbau einer Senke für Ihr Projekt -You can configure your project to query data either through a Subgraph or directly from an SQL database: +Sie können Ihr Projekt so konfigurieren, dass Daten entweder über einen Subgraphen oder direkt von einer SQL-Datenbank abgefragt werden: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). -- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +- **Subgraph**: Führen Sie `substreams codegen subgraph` aus. Dies erzeugt ein Projekt mit einer grundlegenden `schema.graphql`- und `mappings.ts`-Datei. Sie können diese anpassen, um Entitäten basierend auf den von Substreams extrahierten Daten zu definieren. Für weitere Konfigurationen siehe [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **SQL**: Führen Sie `substreams codegen sql` für SQL-basierte Abfragen aus.
Weitere Informationen zur Konfiguration einer SQL-Senke finden Sie in der [SQL-Dokumentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -## Deployment Options +## Bereitstellungsoptionen -To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file. +Um einen Subgraph bereitzustellen, können Sie entweder den `graph-node` lokal mit dem Befehl `deploy-local` ausführen oder ihn mit dem Befehl `deploy` aus der Datei `package.json` in Subgraph Studio bereitstellen. -## Common Errors +## Häufige Fehler -- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command. -- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`. +- Wenn Sie lokal arbeiten, stellen Sie sicher, dass alle Docker-Container fehlerfrei laufen, indem Sie den Befehl `dev-status` ausführen. +- Wenn Sie beim Generieren Ihres Projekts den falschen Startblock gesetzt haben, navigieren Sie zur `substreams.yaml`, um die Blocknummer zu ändern, und führen Sie dann `substreams build` erneut aus. diff --git a/website/src/pages/de/substreams/developing/sinks.mdx b/website/src/pages/de/substreams/developing/sinks.mdx index 6990190c555d..d80250c0df7d 100644 --- a/website/src/pages/de/substreams/developing/sinks.mdx +++ b/website/src/pages/de/substreams/developing/sinks.mdx @@ -1,51 +1,51 @@ --- -title: Official Sinks +title: Offizielle Senken --- -Choose a sink that meets your project's needs. +Wählen Sie eine Senke, die den Anforderungen Ihres Projekts entspricht. ## Überblick -Once you find a package that fits your needs, you can choose how you want to consume the data. +Sobald Sie ein Paket gefunden haben, das Ihren Anforderungen entspricht, können Sie wählen, wie Sie die Daten nutzen möchten.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Senken sind Integrationen, die es Ihnen ermöglichen, die extrahierten Daten an verschiedene Ziele zu senden, z. B. an eine SQL-Datenbank, eine Datei oder einen Subgraphen. ## Sinks -> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. +> Hinweis: Einige der Sinks werden offiziell vom StreamingFast-Kernentwicklungsteam unterstützt (d.h. es wird aktiver Support angeboten), aber andere Sinks werden von der Community betrieben und der Support kann nicht garantiert werden. -- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. -- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. -- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. -- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. +- [SQL-Datenbank](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Senden Sie die Daten an eine Datenbank. +- [Subgraph](/sps/introduction/): Konfigurieren Sie eine API, die Ihren Datenanforderungen entspricht, und hosten Sie sie im The Graph Network. +- [Direktes Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Streamen Sie Daten direkt aus Ihrer Anwendung. +- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Senden von Daten an ein PubSub-Thema. +- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Erforschen Sie hochwertige, von der Community unterhaltene Sinks.
-> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io). +> Wichtig: Wenn Sie möchten, dass Ihre Senke (z. B. SQL oder PubSub) für Sie gehostet wird, wenden Sie sich [hier](mailto:sales@streamingfast.io) an das StreamingFast-Team. -## Navigating Sink Repos +## Navigation in den Sink-Repos -### Official +### Offiziell -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Quellcode | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast |
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Quellcode | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) +- O = Offizielle Unterstützung (durch einen der wichtigsten Substreams-Anbieter) - C = Community Support diff --git 
a/website/src/pages/de/substreams/developing/solana/account-changes.mdx b/website/src/pages/de/substreams/developing/solana/account-changes.mdx index 74c54f3760c7..64e919552244 100644 --- a/website/src/pages/de/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/de/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes -sidebarTitle: Account Changes +title: Solana-Kontoänderungen +sidebarTitle: Kontoänderungen --- -Learn how to consume Solana account change data using Substreams. +Erfahren Sie, wie Sie Daten zu Solana-Kontoänderungen mithilfe von Substreams nutzen können. ## Einführung -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +Dieser Leitfaden führt Sie durch den Prozess der Einrichtung Ihrer Umgebung, der Konfiguration Ihres ersten Substreams-Streams und der effizienten Nutzung von Kontoänderungen. Am Ende dieses Leitfadens werden Sie einen funktionierenden Substreams-Feed haben, der es Ihnen ermöglicht, Kontoänderungen in Echtzeit auf der Solana-Blockchain zu verfolgen, sowie historische Daten zu Kontoänderungen. -> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> HINWEIS: Die Historie der Solana-Kontoänderungen beginnt 2025 mit Block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided.
Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +Für jeden Substreams Solana-Kontoblock wird nur die letzte Aktualisierung pro Konto aufgezeichnet, siehe die [Protobuf-Referenz](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). Wenn ein Konto gelöscht wird, wird ein Payload mit `deleted == True` geliefert. Darüber hinaus werden Ereignisse von geringer Bedeutung ausgelassen, z. B. solche mit dem speziellen Eigentümer „Vote11111111...“ oder Änderungen, die sich nicht auf die Kontodaten auswirken (z. B. Lamportänderungen). -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> HINWEIS: Um die Substreams-Latenz für Solana-Konten zu testen, gemessen als Block-Head-Drift, installieren Sie die [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) und führen Sie `substreams run solana-common blocks_without_votes -s -1 -o clock` aus. ## Erste Schritte -### Prerequisites +### Voraussetzungen -Before you begin, ensure that you have the following: +Bevor Sie beginnen, vergewissern Sie sich, dass Sie über Folgendes verfügen: -1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1.
[Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installiert. +2. Ein [Substreams-Key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) für den Zugriff auf die Solana-Kontoänderungsdaten. +3. Grundlegende Kenntnisse im [Umgang](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) mit der Befehlszeilenschnittstelle (CLI). -### Step 1: Set Up a Connection to Solana Account Change Substreams +### Schritt 1: Einrichten einer Verbindung zu Solana Account Change Substreams -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed. +Nachdem Sie nun die Substreams CLI installiert haben, können Sie eine Verbindung zum Substreams-Feed für Solana-Kontoänderungen herstellen. -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +- Mit dem [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest) können Sie wählen, ob Sie Daten direkt streamen oder die grafische Benutzeroberfläche (GUI) für eine bessere visuelle Darstellung verwenden möchten. Das folgende `gui`-Beispiel filtert nach Honey Token-Kontodaten. ```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- Mit diesem Befehl werden Kontoänderungen direkt in Ihr Terminal übertragen. ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners.
You can adjust the query based on your needs. +Das Basismodul unterstützt die Filterung nach bestimmten Konten und/oder Eigentümern. Sie können die Abfrage an Ihre Bedürfnisse anpassen. -### Step 2: Sink the Substreams +### Schritt 2: Die Substreams in eine Senke leiten -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +Verwenden Sie den Kontenstrom [direkt in Ihrer Anwendung](https://docs.substreams.dev/how-to-guides/sinks/stream) mit einem Callback oder machen Sie ihn mit der [SQL-DB-Senke](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) abfragbar. -### Step 3: Setting up a Reconnection Policy +### Schritt 3: Einrichten einer Richtlinie zur Wiederherstellung der Verbindung -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream. +Die [Cursor-Verwaltung](https://docs.substreams.dev/reference-material/reliability-guarantees) sorgt für nahtlose Kontinuität und Rückverfolgbarkeit, indem sie es Ihnen ermöglicht, bei einer Unterbrechung der Verbindung ab dem zuletzt konsumierten Block fortzufahren. Diese Funktion verhindert Datenverluste und sorgt für die Aufrechterhaltung eines kontinuierlichen Datenstroms.
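Zur Veranschaulichung der weiter oben beschriebenen Semantik pro Kontoblock (nur die jüngste Aktualisierung pro Konto wird behalten, Vote-Konten werden ausgelassen) eine minimale Go-Skizze. Die Typen und Feldnamen sind vereinfachte, hypothetische Platzhalter, nicht die tatsächlich aus dem Protobuf generierten Typen:

```go
package main

import "fmt"

// AccountUpdate ist ein vereinfachter, hypothetischer Platzhalter für die
// im Protobuf beschriebene Kontoänderung (nicht der echte generierte Typ).
type AccountUpdate struct {
	Account string
	Owner   string
	Slot    uint64
	Deleted bool
}

const voteOwner = "Vote111111111111111111111111111111111111111"

// LatestPerAccount behält pro Konto nur die jüngste Aktualisierung und
// überspringt Konten des Vote-Eigentümers, analog zur oben beschriebenen
// Filterung von Ereignissen geringer Bedeutung.
func LatestPerAccount(updates []AccountUpdate) map[string]AccountUpdate {
	out := make(map[string]AccountUpdate)
	for _, u := range updates {
		if u.Owner == voteOwner {
			continue // Vote-Konten werden ausgelassen
		}
		if prev, ok := out[u.Account]; !ok || u.Slot >= prev.Slot {
			out[u.Account] = u // die spätere Aktualisierung gewinnt
		}
	}
	return out
}

func main() {
	updates := []AccountUpdate{
		{Account: "A", Owner: "Tokenkeg", Slot: 1},
		{Account: "A", Owner: "Tokenkeg", Slot: 2, Deleted: true},
		{Account: "B", Owner: voteOwner, Slot: 2},
	}
	latest := LatestPerAccount(updates)
	fmt.Println(len(latest), latest["A"].Deleted) // 1 true
}
```

Die echte Deduplizierung übernimmt bereits das Foundational Module; die Skizze zeigt lediglich, welche Payloads ein Konsument pro Block erwarten kann.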
-When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +Bei der Erstellung oder Verwendung einer Senke ist der Benutzer in erster Linie dafür verantwortlich, Implementierungen von BlockScopedDataHandler und BlockUndoSignalHandler bereitzustellen, die die folgende Schnittstelle aufweisen: ```go import ( diff --git a/website/src/pages/de/substreams/developing/solana/transactions.mdx b/website/src/pages/de/substreams/developing/solana/transactions.mdx index 74bb987f4578..d4c6b01ad24e 100644 --- a/website/src/pages/de/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/de/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions -sidebarTitle: Transactions +title: Solana-Transaktionen +sidebarTitle: Transaktionen --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +Erfahren Sie, wie Sie ein Solana-basiertes Substreams-Projekt im Dev Container initialisieren. -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> Hinweis: Diese Anleitung schließt [Kontoänderungen](/substreams/developing/solana/account-changes/) aus. -## Options +## Optionen -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +Wenn Sie es vorziehen, lokal in Ihrem Terminal zu beginnen, anstatt über den Dev Container (VS Code erforderlich), lesen Sie die [Substreams-CLI-Installationsanleitung](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). -## Step 1: Initialize Your Solana Substreams Project +## Schritt 1: Initialisieren Sie Ihr Solana-Substreams-Projekt -1.
Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. Öffnen Sie den [Dev Container](https://github.com/streamingfast/substreams-starter) und folgen Sie den Schritten auf dem Bildschirm, um Ihr Projekt zu initialisieren. -2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. Wenn Sie `substreams init` ausführen, haben Sie die Möglichkeit, zwischen drei Solana-Projektoptionen zu wählen. Wählen Sie die beste Option für Ihr Projekt: + - **sol-minimal**: Damit wird ein einfacher Substreams erstellt, der die Rohdaten des Solana-Blocks extrahiert und den entsprechenden Rust-Code erzeugt. Dieser Pfad startet mit dem vollständigen Rohblock, und Sie können zur `substreams.yaml` (dem Manifest) navigieren, um die Eingabe zu ändern. + - **sol-transactions**: Damit wird ein Substreams erstellt, der Solana-Transaktionen auf der Grundlage einer oder mehrerer Programm-IDs und/oder Konto-IDs filtert, wobei das zwischengespeicherte [Solana-Grundlagenmodul](https://substreams.dev/streamingfast/solana-common/v0.3.0) verwendet wird.
+ - **sol-anchor-beta**: Dies erzeugt einen Substreams, der Anweisungen und Ereignisse mit einer Anchor-IDL dekodiert. Wenn eine IDL nicht verfügbar ist (siehe [Anchor CLI](https://www.anchor-lang.com/docs/cli)), müssen Sie sie selbst bereitstellen. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Die Module in Solana Common enthalten keine Abstimmungstransaktionen. Um eine 75%ige Reduzierung der Datenverarbeitungsgröße und -kosten zu erreichen, verzögern Sie Ihren Stream um mehr als 1000 Blöcke hinter dem Head. Dies kann mit der Funktion [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) in Rust erreicht werden. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Für den Zugriff auf Abstimmungstransaktionen ist der vollständige Solana-Block `sf.solana.type.v1.Block` als Eingabe zu verwenden. -## Step 2: Visualize the Data +## Schritt 2: Visualisierung der Daten -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Führen Sie `substreams auth` aus, um Ihr [Konto](https://thegraph.market/) zu erstellen und ein Authentifizierungs-Token (JWT) zu generieren, und geben Sie dieses Token als Eingabe zurück. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. Jetzt können Sie die `substreams gui` frei verwenden, um Ihre extrahierten Daten zu visualisieren und zu iterieren. -## Step 2.5: (Optionally) Transform the Data +## Schritt 2.5: (Optional) Transformation der Daten -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly.
+Ändern Sie innerhalb der generierten Verzeichnisse Ihre Substreams-Module, um zusätzliche Filter, Aggregationen und Transformationen aufzunehmen, und aktualisieren Sie das Manifest entsprechend. -## Step 3: Load the Data +## Schritt 3: Laden der Daten -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +Um Ihre Substreams abfragbar zu machen (im Gegensatz zu [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), können Sie automatisch einen [Substreams-powered subgraph](/sps/introduction/) oder eine SQL-DB-Senke erzeugen. -### Subgraph +### Subgraph -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. Führen Sie `substreams codegen subgraph` aus, um die Senke zu initialisieren und die erforderlichen Dateien und Funktionsdefinitionen zu erstellen. +2. Erstellen Sie Ihre [Subgraph-Mappings](/sps/triggers/) in der Datei `mappings.ts` und die zugehörigen Entitäten in der Datei `schema.graphql`. +3. Erstellen Sie den Subgraph und stellen Sie ihn lokal oder in [Subgraph Studio](https://thegraph.com/studio-pricing/) bereit, indem Sie `deploy-studio` ausführen. ### SQL -1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB. +1.
Führen Sie `substreams codegen sql` aus und wählen Sie entweder ClickHouse oder Postgres aus, um die Senke zu initialisieren und die erforderlichen Dateien zu erzeugen. +2. Führen Sie `substreams build` aus, um die [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) Senke zu erstellen. +3. Führen Sie `substreams-sink-sql` aus, um die Daten in die von Ihnen ausgewählte SQL-DB zu übertragen. -> Note: Run `help` to better navigate the development environment and check the health of containers. +> Hinweis: Führen Sie `help` aus, um sich in der Entwicklungsumgebung besser zurechtzufinden und den Zustand der Container zu überprüfen. ## Zusätzliche Ressourcen -You may find these additional resources helpful for developing your first Solana application. +Vielleicht finden Sie diese zusätzlichen Ressourcen hilfreich für die Entwicklung Ihrer ersten Solana-Anwendung. -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- Die [Dev-Container-Referenz](/substreams/developing/dev-container/) hilft Ihnen bei der Navigation im Container und bei häufigen Fehlern. +- Mit der [CLI-Referenz](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) können Sie alle in der Substreams-CLI verfügbaren Tools erkunden. +- Die [Komponenten-Referenz](https://docs.substreams.dev/reference-material/substreams-components/packages) taucht tiefer in die Navigation in der `substreams.yaml` ein.
diff --git a/website/src/pages/de/substreams/introduction.mdx b/website/src/pages/de/substreams/introduction.mdx index feb5b5d6fb13..b835c7916802 100644 --- a/website/src/pages/de/substreams/introduction.mdx +++ b/website/src/pages/de/substreams/introduction.mdx @@ -1,45 +1,45 @@ --- -title: Introduction to Substreams +title: Einführung in Substreams sidebarTitle: Einführung --- ![Substreams Logo](/img/substreams-logo.png) -To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). +Wenn Sie sofort mit dem Programmieren beginnen möchten, lesen Sie den [Substreams Quick Start](/substreams/quick-start/). ## Überblick -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Substreams ist eine leistungsstarke parallele Blockchain-Indizierungstechnologie, die entwickelt wurde, um die Leistung und Skalierbarkeit innerhalb von The Graph Network zu verbessern. -## Substreams Benefits +## Substreams Vorteile -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Beschleunigte Indizierung**: Beschleunigen Sie die Indizierung von Subgraphen mit einer parallelisierten Engine für schnelleren Datenabruf und -verarbeitung. +- **Multi-Ketten-Unterstützung**: Erweitern Sie die Indizierungsmöglichkeiten über EVM-basierte Ketten hinaus und unterstützen Sie Ökosysteme wie Solana, Injective, Starknet und Vara. 
+- **Erweitertes Datenmodell**: Zugriff auf umfassende Daten, einschließlich der Daten auf `trace`-Ebene bei EVM oder Kontoänderungen auf Solana, bei effizienter Verwaltung von Forks/Trennungen. +- **Multi-Sink-Unterstützung:** Für Subgraph, Postgres-Datenbank, Clickhouse und Mongo-Datenbank. -## How Substreams Works in 4 Steps +## So funktioniert Substreams in 4 Schritten -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. Sie schreiben ein Rust-Programm, das die Transformationen definiert, die Sie auf die Blockchain-Daten anwenden möchten. Zum Beispiel extrahiert die folgende Rust-Funktion relevante Informationen aus einem Ethereum-Block (Nummer, Hash und übergeordneter Hash). ```rust -fn get_my_block(blk: Block) -> Result<MyBlock, substreams::errors::Error> { - let header = blk.header.as_ref().unwrap(); - - Ok(MyBlock { - number: blk.number, - hash: Hex::encode(&blk.hash), - parent_hash: Hex::encode(&header.parent_hash), - }) +fn get_my_block(blk: Block) -> Result<MyBlock, substreams::errors::Error> { + let header = blk.header.as_ref().unwrap(); + + Ok(MyBlock { + number: blk.number, + hash: Hex::encode(&blk.hash), + parent_hash: Hex::encode(&header.parent_hash), + }) } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. Sie verpacken Ihr Rust-Programm in ein WASM-Modul, indem Sie einfach einen einzigen CLI-Befehl ausführen. -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. Der WASM-Container wird zur Ausführung an einen Substreams-Endpunkt gesendet. Der Substreams-Anbieter füttert den WASM-Container mit den Blockchain-Daten und die Transformationen werden angewendet. -4.
You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4. Sie wählen eine [„Sink“](https://docs.substreams.dev/how-to-guides/sinks), einen Ort, an den Sie die umgewandelten Daten senden möchten (z. B. eine SQL-Datenbank oder einen Subgraph). ## Zusätzliche Ressourcen -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +Die gesamte Substreams-Entwicklerdokumentation wird vom StreamingFast-Kernentwicklungsteam auf der [Substreams-Registry](https://docs.substreams.dev) gepflegt. diff --git a/website/src/pages/de/substreams/publishing.mdx b/website/src/pages/de/substreams/publishing.mdx index c2878910fb9e..ec0ce0c8f9b0 100644 --- a/website/src/pages/de/substreams/publishing.mdx +++ b/website/src/pages/de/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: Veröffentlichung eines Substreams-Pakets +sidebarTitle: Veröffentlichung --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +Erfahren Sie, wie Sie ein Substreams-Paket in der [Substreams Registry](https://substreams.dev) veröffentlichen. ## Überblick -### What is a package? +### Was ist ein Paket? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +Ein Substreams-Paket ist eine vorkompilierte Binärdatei, die die spezifischen Daten definiert, die Sie aus der Blockchain extrahieren möchten, ähnlich wie die Datei `mapping.ts` in traditionellen Subgraphen. -## Publish a Package +## Veröffentlichung eines Pakets -### Prerequisites +### Voraussetzungen -- You must have the Substreams CLI installed. 
-- You must have a Substreams package (`.spkg`) that you want to publish. +- Sie müssen die Substreams CLI installiert haben. +- Sie müssen ein Substreams-Paket (`.spkg`) haben, das Sie veröffentlichen wollen. -### Step 1: Run the `substreams publish` Command +### Schritt 1: Führen Sie den Befehl `substreams publish` aus -1. In a command-line terminal, run `substreams publish .spkg`. +1. Führen Sie in einem Befehlszeilen-Terminal den Befehl `substreams publish .spkg` aus. -2. If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. Wenn auf Ihrem Computer kein Token gesetzt ist, navigieren Sie zu `https://substreams.dev/me`. ![get token](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### Schritt 2: Erhalten Sie ein Token in der Substreams-Registry -1. In the Substreams Registry, log in with your GitHub account. +1. Melden Sie sich in der Substreams Registry mit Ihrem GitHub-Konto an. -2. Create a new token and copy it in a safe location. +2. Erstellen Sie einen neuen Token und kopieren Sie ihn an einen sicheren Ort. -![new token](/img/2_new_token.png) +![neues Token](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### Schritt 3: Authentifizierung in der Substreams-CLI -1. Back in the Substreams CLI, paste the previously generated token. +1. Zurück in der Substreams-CLI fügen Sie das zuvor generierte Token ein. -![paste token](/img/3_paste_token.png) +![Token einfügen](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. Bestätigen Sie abschließend, dass Sie das Paket veröffentlichen möchten. -![confirm](/img/4_confirm.png) +![bestätigen](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +Das war's! Sie haben erfolgreich ein Paket in der Substreams-Registry veröffentlicht. 
-![success](/img/5_success.png) +![Erfolg](/img/5_success.png) ## Zusätzliche Ressourcen -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +Besuchen Sie [Substreams](https://substreams.dev/), um eine wachsende Sammlung von gebrauchsfertigen Substreams-Paketen für verschiedene Blockchain-Netzwerke zu entdecken. diff --git a/website/src/pages/de/substreams/quick-start.mdx b/website/src/pages/de/substreams/quick-start.mdx index cd29be60d2f9..6d82be0f8ac1 100644 --- a/website/src/pages/de/substreams/quick-start.mdx +++ b/website/src/pages/de/substreams/quick-start.mdx @@ -3,28 +3,28 @@ title: Substreams Kurzanleitung sidebarTitle: Schnellstart --- -Discover how to utilize ready-to-use substream packages or develop your own. +Entdecken Sie, wie Sie gebrauchsfertige Substreams-Pakete verwenden oder eigene entwickeln können. ## Überblick -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +Die Integration von Substreams kann schnell und einfach sein. Sie sind erlaubnisfrei, und Sie können [hier einen Schlüssel erhalten](https://thegraph.market/), ohne persönliche Informationen anzugeben, um mit dem Streaming von Onchain-Daten zu beginnen. ## Start des Erstellens -### Use Substreams Packages +### Substreams Pakete verwenden -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Es sind viele gebrauchsfertige Substreams-Pakete verfügbar. Sie können diese Pakete erkunden, indem Sie die [Substreams Registry](https://substreams.dev) besuchen und sie [an einen Sink senden](/substreams/developing/sinks/). 
In der Registry können Sie jedes Paket suchen und finden, das Ihren Anforderungen entspricht. -Once you find a package that fits your needs, you can choose how you want to consume the data: +Sobald Sie ein Paket gefunden haben, das Ihren Anforderungen entspricht, können Sie wählen, wie Sie die Daten nutzen möchten: -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. -- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Subgraph](/sps/introduction/)**: Konfigurieren Sie eine API, die Ihren Datenanforderungen entspricht, und hosten Sie sie im The Graph Network. +- **[SQL-Datenbank](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Senden Sie die Daten an eine Datenbank. +- **[Direktes Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Streamen Sie Daten direkt in Ihre Anwendung. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Daten an ein PubSub-Thema senden. -### Develop Your Own +### Entwickeln Sie Ihr eigenes -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +Wenn Sie kein Substreams-Paket finden können, das Ihren speziellen Anforderungen entspricht, können Sie Ihr eigenes entwickeln. Substreams werden mit Rust erstellt, sodass Sie Funktionen schreiben, die die benötigten Daten aus der Blockchain extrahieren und filtern. 
Schauen Sie sich für den Einstieg die folgenden Tutorials an: - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Um Ihre Substreams von Anfang an zu erstellen und zu optimieren, verwenden Sie den minimalen Pfad innerhalb des [Dev Containers](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Hinweis: Substreams garantiert mit einer einfachen Wiederverbindungsrichtlinie, dass Sie [niemals Daten verpassen](https://docs.substreams.dev/reference-material/reliability-guarantees). ## Zusätzliche Ressourcen -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. +- Weitere Anleitungen finden Sie in den [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) und in den [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) in der Streaming-Fast-Dokumentation. 
+- Ein tieferes Verständnis der Funktionsweise von Substreams finden Sie in der [Architekturübersicht](https://docs.substreams.dev/reference-material/architecture) des Datendienstes. diff --git a/website/src/pages/de/supported-networks.mdx b/website/src/pages/de/supported-networks.mdx index 02e45c66ca42..1ae4bd5d095b 100644 --- a/website/src/pages/de/supported-networks.mdx +++ b/website/src/pages/de/supported-networks.mdx @@ -1,22 +1,28 @@ --- -title: Supported Networks +title: Unterstützte Netzwerke hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + -- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. -- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Subgraph Studio verlässt sich auf die Stabilität und Zuverlässigkeit der zugrundeliegenden Technologien, z.B. JSON-RPC-, Firehose- und Substreams-Endpunkte. +- Subgraphs, die die Gnosis-Kette indizieren, können jetzt mit dem `gnosis`-Netzwerkidentifikator eingesetzt werden. 
+- Wenn ein Subgraph über die CLI veröffentlicht und von einem Indexer aufgenommen wurde, könnte er technisch gesehen auch ohne Unterstützung abgefragt werden, und es wird daran gearbeitet, die Integration neuer Netzwerke weiter zu vereinfachen. +- Für eine vollständige Liste, welche Funktionen im dezentralen Netzwerk unterstützt werden, siehe [diese Seite](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). -## Running Graph Node locally +## Graph Node lokal ausführen If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node kann über eine Firehose-Integration auch andere Protokolle indizieren. Firehose-Integrationen wurden für NEAR, Arweave und Cosmos-basierte Netzwerke erstellt. Darüber hinaus kann Graph Node Subgraphs auf Basis von Substreams für jedes Netzwerk mit Substreams-Unterstützung unterstützen. 
diff --git a/website/src/pages/de/token-api/_meta-titles.json b/website/src/pages/de/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/de/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/de/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. diff --git a/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/de/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. 
diff --git a/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/de/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/de/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/de/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/de/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/de/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. 
diff --git a/website/src/pages/de/token-api/faq.mdx b/website/src/pages/de/token-api/faq.mdx new file mode 100644 index 000000000000..c90af204668f --- /dev/null +++ b/website/src/pages/de/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Allgemein + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? 
+ +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). 
The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. 
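The pagination, history-window, and big-number answers above can be combined into a short JavaScript sketch. This is an illustrative example, not part of the official docs: the wallet address is a sample, `<token>` is a placeholder for your JWT from The Graph Market, and the response shape follows the `data`-array convention described in this FAQ.

```javascript
// Page through up to 180 days of transfers, 500 items at a time,
// reading results out of the top-level `data` array each call.
const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' // sample wallet
const headers = { Authorization: 'Bearer <token>' } // replace with your JWT

async function fetchAllTransfers() {
  const all = []
  for (let page = 1; page <= 5; page++) {
    const url = `https://token-api.thegraph.com/transfers/evm/${address}?limit=500&page=${page}&age=180`
    const response = await fetch(url, { headers })
    const { data } = await response.json() // results are always wrapped in `data`
    all.push(...data)
    if (data.length < 500) break // short page means no more results
  }
  return all
}

// Token amounts arrive as strings; convert to BigInt before doing arithmetic
// so values beyond Number.MAX_SAFE_INTEGER keep full precision.
const amounts = ['1000000000000000000', '2500000000000000000']
const total = amounts.reduce((sum, amount) => sum + BigInt(amount), 0n)
console.log(total.toString()) // '3500000000000000000'
```

Dividing `total` by `10n ** BigInt(decimals)` (with `decimals` taken from the token metadata) yields a human-readable value.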
+ +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. 
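The address-format rules above (40 hex characters, optional `0x` prefix, case-insensitive) can be enforced client-side before issuing a request. The helper below is a hypothetical convenience function, not part of the Token API itself:

```javascript
// Normalize a user-supplied EVM address for Token API queries:
// accept 40 hex characters with or without the 0x prefix, in any case,
// and reject anything else before it reaches the API as a 4xx error.
function normalizeAddress(input) {
  const hex = input.startsWith('0x') || input.startsWith('0X') ? input.slice(2) : input
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) {
    throw new Error(`invalid EVM address: ${input}`)
  }
  return '0x' + hex.toLowerCase()
}

console.log(normalizeAddress('2A0C0DBECC7E4D658F48E01E3FA353F44050C208'))
// -> 0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208
```

The normalized form can then be interpolated into endpoint paths such as `/balances/evm/{address}` or `/transfers/evm/{address}`.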
diff --git a/website/src/pages/de/token-api/mcp/claude.mdx b/website/src/pages/de/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..8c151e39a608 --- /dev/null +++ b/website/src/pages/de/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Voraussetzungen + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Konfiguration + +Create or edit your `claude_desktop_config.json` file. + +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. 
+ +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/de/token-api/mcp/cline.mdx b/website/src/pages/de/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..d0269aa67aff --- /dev/null +++ b/website/src/pages/de/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Voraussetzungen + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Konfiguration + +Create or edit your `cline_mcp_settings.json` file. 
+ +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/de/token-api/mcp/cursor.mdx b/website/src/pages/de/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..953d283fd2b3 --- /dev/null +++ b/website/src/pages/de/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Voraussetzungen + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. 
+ +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Konfiguration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. 
diff --git a/website/src/pages/de/token-api/monitoring/get-health.mdx b/website/src/pages/de/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/de/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/de/token-api/monitoring/get-networks.mdx b/website/src/pages/de/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/de/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/de/token-api/monitoring/get-version.mdx b/website/src/pages/de/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/de/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/de/token-api/quick-start.mdx b/website/src/pages/de/token-api/quick-start.mdx new file mode 100644 index 000000000000..b84fad5f665a --- /dev/null +++ b/website/src/pages/de/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Schnellstart +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Voraussetzungen + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/es/about.mdx b/website/src/pages/es/about.mdx index 22dafa9785ad..ffa133b4e0b7 100644 --- a/website/src/pages/es/about.mdx +++ b/website/src/pages/es/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![Un gráfico explicando como The Graph usa Graph Node para servir consultas a los consumidores de datos](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ El flujo sigue estos pasos: 1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. 2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. -4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph that they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. ## Próximos puntos -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. 
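The last step of the flow above can be illustrated with a concrete request. A dapp might send a GraphQL query like the following to the Graph Node endpoint; the `transfers` entity and its fields are hypothetical, since the actual schema is defined by each Subgraph:

```graphql
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    value
  }
}
```

Graph Node translates such queries into queries against its underlying data store and returns the indexed entities as JSON.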
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx index 85ad70c11ca2..2b7fe7284fc8 100644 --- a/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/es/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 
The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -39,7 +39,7 @@ Para aprovechar el uso de The Graph en L2, usa este conmutador desplegable para ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Como developer de subgrafos, consumidor de datos, Indexador, Curador o Delegador, ¿qué debo hacer ahora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx index 4b5963a153d4..730aa861a37d 100644 --- a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Las Herramientas de Transferencia a L2 utilizan el mecanismo nativo de Arbitrum para enviar mensajes de L1 a L2. Este mecanismo se llama "ticket reintentable" y es utilizado por todos los puentes de tokens nativos, incluido el puente GRT de Arbitrum. Puedes obtener más información sobre los tickets reintentables en la [Documentación de Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Cuando transfieres tus activos (subgrafo, stake, delegación o curación) a L2, se envía un mensaje a través del puente Arbitrum GRT que crea un ticket reintentable en L2. La herramienta de transferencia incluye un valor ETH en la transacción, que se utiliza para: 1) pagar la creación del ticket y 2) pagar por el gas para ejecutar el ticket en L2. Sin embargo, debido a que los precios del gas pueden variar durante el tiempo hasta que el ticket esté listo para ejecutarse en L2, es posible que este intento de autoejecución falle. Cuando eso sucede, el puente de Arbitrum mantendrá el ticket reintentable activo durante un máximo de 7 días, y cualquier persona puede intentar nuevamente "canjear" el ticket (lo que requiere una wallet con algo de ETH transferido a Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. 
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Esto es lo que llamamos el paso de "Confirmar" en todas las herramientas de transferencia. En la mayoría de los casos, se ejecutará automáticamente, ya que la autoejecución suele ser exitosa, pero es importante que vuelvas a verificar para asegurarte de que se haya completado. Si no tiene éxito y no hay reintentos exitosos en 7 días, el puente de Arbitrum descartará el ticket, y tus activos (subgrafo, stake, delegación o curación) se perderán y no podrán recuperarse. Los core devs de The Graph tienen un sistema de monitoreo para detectar estas situaciones e intentar canjear los tickets antes de que sea demasiado tarde, pero en última instancia, es tu responsabilidad asegurarte de que tu transferencia se complete a tiempo. Si tienes problemas para confirmar tu transacción, por favor comunícate a través de [este formulario](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) y los core devs estarán allí para ayudarte. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Comencé la transferencia de mi delegación/stake/curación y no estoy seguro de si se completó en L2, ¿cómo puedo confirmar que se transfirió correctamente? @@ -36,43 +36,43 @@ Si tienes el hash de la transacción en L1 (que puedes encontrar revisando las t ## Transferencia de Subgrafo -### ¿Cómo transfiero mi subgrafo? +### How do I transfer my Subgraph? -Para transferir tu subgrafo, tendrás que completar los siguientes pasos: +To transfer your Subgraph, you will need to complete the following steps: 1. Inicia la transferencia en Ethereum mainnet 2. Espera 20 minutos para la confirmación -3. Confirma la transferencia del subgrafo en Arbitrum +3. Confirm Subgraph transfer on Arbitrum\* -4. Termina de publicar el subgrafo en Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Actualiza la URL de consulta (recomendado) -\*Ten en cuenta que debes confirmar la transferencia dentro de los 7 días, de lo contrario, es posible que se pierda tu subgrafo. En la mayoría de los casos, este paso se ejecutará automáticamente, pero puede ser necesaria una confirmación manual si hay un aumento en el precio del gas en Arbitrum. Si surgen problemas durante este proceso, habrá recursos disponibles para ayudarte: ponte en contacto con el soporte en support@thegraph.com o en [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days; otherwise, your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). 
### ¿Desde dónde debo iniciar mi transferencia? -Puedes iniciar la transferencia desde el [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer) o desde cualquier página de detalles del subgrafo. Haz clic en el botón "Transferir Subgrafo" en la página de detalles del subgrafo para iniciar la transferencia. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### ¿Cuánto tiempo tengo que esperar hasta que se transfiera mi subgrafo? +### How long do I need to wait until my Subgraph is transferred? El tiempo de transferencia demora aproximadamente 20 minutos. El puente de Arbitrum está trabajando en segundo plano para completar la transferencia automáticamente. En algunos casos, los costos de gas pueden aumentar y necesitarás confirmar la transacción nuevamente. -### ¿Mi subgrafo seguirá siendo accesible después de transferirlo a L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Tu subgrafo solo será accesible en la red donde esté publicado. Por ejemplo, si tu subgrafo está en Arbitrum One, solo podrás encontrarlo en el explorador de Arbitrum One y no podrás encontrarlo en Ethereum. Asegúrate de tener seleccionado Arbitrum One en el selector de redes en la parte superior de la página para asegurarte de estar en la red correcta. Después de la transferencia, el subgrafo en L1 aparecerá como obsoleto. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Make sure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. 
After the transfer, the L1 Subgraph will appear as deprecated. -### ¿Es necesario publicar mi subgrafo para transferirlo? +### Does my Subgraph need to be published to transfer it? -Para aprovechar la herramienta de transferencia de subgrafos, tu subgrafo debe estar ya publicado en la red principal de Ethereum y debe tener alguna señal de curación propiedad de la wallet que posee el subgrafo. Si tu subgrafo no está publicado, se recomienda que lo publiques directamente en Arbitrum One, ya que las tarifas de gas asociadas serán considerablemente más bajas. Si deseas transferir un subgrafo ya publicado pero la cuenta del propietario no ha curado ninguna señal en él, puedes señalizar una pequeña cantidad (por ejemplo, 1 GRT) desde esa cuenta; asegúrate de elegir la opción de señal "auto-migración". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### ¿Qué ocurre con la versión de Ethereum mainnet de mi subgrafo después de transferirlo a Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Tras transferir tu subgrafo a Arbitrum, la versión de Ethereum mainnet quedará obsoleta. Te recomendamos que actualices tu URL de consulta en un plazo de 48 horas. Sin embargo, existe un periodo de gracia que mantiene tu URL de mainnet en funcionamiento para que se pueda actualizar cualquier soporte de dapp de terceros. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. 
We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Después de la transferencia, ¿también tengo que volver a publicar en Arbitrum? @@ -80,21 +80,21 @@ Una vez transcurridos los 20 minutos de la ventana de transferencia, tendrás qu ### ¿Experimentará mi endpoint una interrupción durante la republicación? -Es poco probable, pero es posible experimentar una breve interrupción dependiendo de qué Indexadores estén respaldando el subgrafo en L1 y si continúan indexándolo hasta que el subgrafo esté completamente respaldado en L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### ¿Es lo mismo publicar y versionar en L2 que en Ethereum mainnet? -Sí. Asegúrate de seleccionar Arbitrum One como tu red para publicar cuando publiques en Subgraph Studio. En el Studio, estará disponible el último endpoint que apunta a la última versión actualizada del subgrafo. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### ¿Se moverá la curación de mi subgrafo junto con mi subgrafo? +### Will my Subgraph's curation move with my Subgraph? -Si has elegido auto-migrar la señal, el 100% de tu curación propia se moverá con tu subgrafo a Arbitrum One. Toda la señal de curación del subgrafo se convertirá a GRT en el momento de la transferencia, y el GRT correspondiente a tu señal de curación se utilizará para mintear señal en el subgrafo L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. 
All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Otros Curadores pueden elegir si retiran su fracción de GRT, o también la transfieren a L2 para mintear señal en el mismo subgrafo. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### ¿Puedo mover mi subgrafo de nuevo a Ethereum mainnet después de la transferencia? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Una vez transferida, la versión en Ethereum mainnet de este subgrafo quedará obsoleta. Si deseas regresar a mainnet, deberás volver a deployar y publicar en mainnet. Sin embargo, se desaconseja firmemente volver a transferir a Ethereum mainnet, ya que las recompensas por indexación se distribuirán eventualmente por completo en Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### ¿Por qué necesito ETH bridgeado para completar mi transferencia? @@ -206,19 +206,19 @@ Para transferir tu curación, deberás completar los siguientes pasos: \*Si es necesario - i.e. si estás utilizando una dirección de contrato. -### ¿Cómo sabré si el subgrafo que he curado ha pasado a L2? +### How will I know if the Subgraph I curated has moved to L2? -Al ver la página de detalles del subgrafo, un banner te notificará que este subgrafo ha sido transferido. Puedes seguir la indicación para transferir tu curación. También puedes encontrar esta información en la página de detalles del subgrafo de cualquier subgrafo que se haya trasladado. 
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### ¿Qué ocurre si no deseo trasladar mi curación a L2? -Cuando un subgrafo queda obsoleto, tienes la opción de retirar tu señal. De manera similar, si un subgrafo se ha trasladado a L2, puedes elegir retirar tu señal en Ethereum mainnet o enviar la señal a L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### ¿Cómo sé si mi curación se ha transferido correctamente? Los detalles de la señal serán accesibles a través del Explorer aproximadamente 20 minutos después de iniciar la herramienta de transferencia a L2. -### ¿Puedo transferir mi curación en más de un subgrafo a la vez? +### Can I transfer my curation on more than one Subgraph at a time? En este momento no existe la opción de transferencia masiva. @@ -266,7 +266,7 @@ La herramienta de transferencia L2 tardará aproximadamente 20 minutos en comple ### ¿Tengo que indexar en Arbitrum antes de transferir mi stake? -En efecto, puedes transferir tu stake primero antes de configurar la indexación de manera efectiva, pero no podrás reclamar ninguna recompensa en L2 hasta que asignes a subgrafos en L2, los indexes y presentes POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### ¿Pueden los Delegadores trasladar su delegación antes de que yo traslade mi stake de Indexador? 
diff --git a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx index 4ec61fdc3a7c..3d0d90acb9a9 100644 --- a/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/es/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph ha facilitado la migración a L2 en Arbitrum One. Para cada participan Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Cómo transferir tu subgrafo a Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Beneficios de transferir tus subgrafos +## Benefits of transferring your Subgraphs La comunidad de The Graph y los core devs se han [estado preparando](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) para migrar a Arbitrum durante el último año. Arbitrum, una blockchain de capa 2 o "L2", hereda la seguridad de Ethereum pero ofrece tarifas de gas considerablemente más bajas. -Cuando publicas o actualizas tus subgrafos en The Graph Network, estás interactuando con contratos inteligentes en el protocolo, lo cual requiere pagar por gas utilizando ETH. Al mover tus subgrafos a Arbitrum, cualquier actualización futura de tu subgrafo requerirá tarifas de gas mucho más bajas. Las tarifas más bajas, y el hecho de que las bonding curves de curación en L2 son planas, también facilitan que otros Curadores realicen curación en tu subgrafo, aumentando las recompensas para los Indexadores en tu subgrafo. Este contexto con tarifas más económicas también hace que sea más barato para los Indexadores indexar y servir tu subgrafo. 
Las recompensas por indexación aumentarán en Arbitrum y disminuirán en Ethereum mainnet en los próximos meses, por lo que cada vez más Indexadores transferirán su stake y establecerán sus operaciones en L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferir un subgrafo a Arbitrum utiliza el puente de GRT de Arbitrum, que a su vez utiliza el puente nativo de Arbitrum para enviar el subgrafo a L2. La "transferencia" deprecará el subgrafo en mainnet y enviará la información para recrear el subgrafo en L2 utilizando el puente. También incluirá el GRT señalizado del propietario del subgrafo, el cual debe ser mayor que cero para que el puente acepte la transferencia. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. 
-Cuando eliges transferir el subgrafo, esto convertirá toda la señal de curación del subgrafo a GRT. Esto equivale a "deprecar" el subgrafo en mainnet. El GRT correspondiente a tu curación se enviará a L2 junto con el subgrafo, donde se utilizarán para emitir señal en tu nombre. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where it will be used to mint signal on your behalf. -Otros Curadores pueden elegir si retirar su fracción de GRT o también transferirlo a L2 para emitir señal en el mismo subgrafo. Si un propietario de subgrafo no transfiere su subgrafo a L2 y lo depreca manualmente a través de una llamada de contrato, entonces los Curadores serán notificados y podrán retirar su curación. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Tan pronto como se transfiera el subgrafo, dado que toda la curación se convierte en GRT, los Indexadores ya no recibirán recompensas por indexar el subgrafo. Sin embargo, habrá Indexadores que 1) continuarán sirviendo los subgrafos transferidos durante 24 horas y 2) comenzarán inmediatamente a indexar el subgrafo en L2. Dado que estos Indexadores ya tienen el subgrafo indexado, no será necesario esperar a que se sincronice el subgrafo y será posible realizar consultas al subgrafo en L2 casi de inmediato. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. 
However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.

-Las consultas al subgrafo en L2 deberán realizarse a una URL diferente (en `arbitrum-gateway.thegraph.com`), pero la URL de L1 seguirá funcionando durante al menos 48 horas. Después de eso, la gateway de L1 redirigirá las consultas a la gateway de L2 (durante algún tiempo), pero esto agregará latencia, por lo que se recomienda cambiar todas las consultas a la nueva URL lo antes posible.
+Queries to the L2 Subgraph will need to be made to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency, so it is recommended to switch all your queries to the new URL as soon as possible.

## Elección de tu wallet en L2

-Cuando publicaste tu subgrafo en mainnet, utilizaste una wallet conectada para crear el subgrafo, y esta wallet es la propietaria del NFT que representa este subgrafo y te permite publicar actualizaciones.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.

-Al transferir el subgrafo a Arbitrum, puedes elegir una wallet diferente que será la propietaria del NFT de este subgrafo en L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
Si estás utilizando una wallet "convencional" como MetaMask (una Cuenta de Propiedad Externa o EOA, es decir, una wallet que no es un contrato inteligente), esto es opcional y se recomienda para mantener la misma dirección del propietario que en L1. -Si estás utilizando una wallet de tipo smart contract, como una multisig (por ejemplo, una Safe), entonces elegir una dirección de wallet L2 diferente es obligatorio, ya que es muy probable que esta cuenta solo exista en mainnet y no podrás realizar transacciones en Arbitrum utilizando esta wallet. Si deseas seguir utilizando una wallet de tipo smart contract o multisig, crea una nueva wallet en Arbitrum y utiliza su dirección como propietario L2 de tu subgrafo. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Es muy importante utilizar una dirección de wallet que controles y que pueda realizar transacciones en Arbitrum. De lo contrario, el subgrafo se perderá y no podrá ser recuperado.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparándose para la transferencia: bridgeando algo de ETH -Transferir el subgrafo implica enviar una transacción a través del puente y luego ejecutar otra transacción en Arbitrum. La primera transacción utiliza ETH en la red principal e incluye cierta cantidad de ETH para pagar el gas cuando se recibe el mensaje en L2. 
Sin embargo, si este gas es insuficiente, deberás volver a intentar la transacción y pagar el gas directamente en L2 (esto es "Paso 3: Confirmando la transferencia" que se describe a continuación). Este paso **debe ejecutarse dentro de los 7 días desde el inicio de la transferencia**. Además, la segunda transacción ("Paso 4: Finalizando la transferencia en L2") se realizará directamente en Arbitrum. Por estas razones, necesitarás tener algo de ETH en una billetera de Arbitrum. Si estás utilizando una cuenta de firma múltiple o un contrato inteligente, el ETH debe estar en la billetera regular (EOA) que estás utilizando para ejecutar las transacciones, no en la billetera de firma múltiple en sí misma.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not in the multisig wallet itself.

Puedes comprar ETH en algunos exchanges y retirarlo directamente a Arbitrum, o puedes utilizar el puente de Arbitrum para enviar ETH desde una billetera en la red principal a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Dado que las tarifas de gas en Arbitrum son más bajas, solo necesitarás una pequeña cantidad. Se recomienda que comiences con un umbral bajo (por ejemplo, 0.01 ETH) para que tu transacción sea aprobada.
-## Encontrando la herramienta de transferencia del subgrafo
+## Finding the Subgraph Transfer Tool

-Puedes encontrar la herramienta de transferencia a L2 cuando estás viendo la página de tu subgrafo en Subgraph Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

![transfer tool](/img/L2-transfer-tool1.png)

-También está disponible en Explorer si estás conectado con la wallet que es propietaria de un subgrafo y en la página de ese subgrafo en Explorer:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

![Transferring to L2](/img/transferToL2.png)

@@ -60,19 +60,19 @@ Al hacer clic en el botón "Transferir a L2" se abrirá la herramienta de transf

## Paso 1: Iniciar la transferencia

-Antes de iniciar la transferencia, debes decidir qué dirección será la propietaria del subgrafo en L2 (ver "Elección de tu wallet en L2" anteriormente), y se recomienda encarecidamente tener ETH para gas ya transferido a Arbitrum (ver "Preparando para la transferencia: transferir ETH" anteriormente).
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-También ten en cuenta que la transferencia del subgrafo requiere tener una cantidad distinta de cero de señal en el subgrafo con la misma cuenta que es propietaria del subgrafo; si no has emitido señal en el subgrafo, deberás agregar un poco de curación (añadir una pequeña cantidad como 1 GRT sería suficiente).
+Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Después de abrir la herramienta de transferencia, podrás ingresar la dirección de la wallet L2 en el campo "Dirección de la wallet receptora" - asegúrate de ingresar la dirección correcta aquí. Al hacer clic en "Transferir Subgrafo", se te pedirá que ejecutes la transacción en tu wallet (ten en cuenta que se incluye un valor de ETH para pagar el gas de L2); esto iniciará la transferencia y deprecará tu subgrafo de L1 (consulta "Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta" anteriormente para obtener más detalles sobre lo que ocurre detrás de escena).
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).

-Si ejecutas este paso, **asegúrate de completar el paso 3 en menos de 7 días, o el subgrafo y tu GRT de señal se perderán**. Esto se debe a cómo funciona la mensajería de L1 a L2 en Arbitrum: los mensajes que se envían a través del puente son "tickets reintentables" que deben ejecutarse dentro de los 7 días, y la ejecución inicial puede requerir un reintento si hay picos en el precio del gas en Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
![Start the transfer to L2](/img/startTransferL2.png) -## Paso 2: Esperarando a que el subgrafo llegue a L2 +## Step 2: Waiting for the Subgraph to get to L2 -Después de iniciar la transferencia, el mensaje que envía tu subgrafo de L1 a L2 debe propagarse a través del puente de Arbitrum. Esto tarda aproximadamente 20 minutos (el puente espera a que el bloque de mainnet que contiene la transacción sea "seguro" para evitar posibles reorganizaciones de la cadena). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Una vez que finalice este tiempo de espera, Arbitrum intentará ejecutar automáticamente la transferencia en los contratos de L2. @@ -80,7 +80,7 @@ Una vez que finalice este tiempo de espera, Arbitrum intentará ejecutar automá ## Paso 3: Confirmando la transferencia -En la mayoría de los casos, este paso se ejecutará automáticamente, ya que el gas de L2 incluido en el paso 1 debería ser suficiente para ejecutar la transacción que recibe el subgrafo en los contratos de Arbitrum. Sin embargo, en algunos casos, es posible que un aumento en el precio del gas en Arbitrum cause que esta autoejecución falle. En este caso, el "ticket" que envía tu subgrafo a L2 quedará pendiente y requerirá un reintento dentro de los 7 días. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga algo de ETH en Arbitrum, cambiar la red de tu wallet a Arbitrum y hacer clic en "Confirmar Transferencia" para volver a intentar la transacción. @@ -88,33 +88,33 @@ Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga al ## Paso 4: Finalizando la transferencia en L2 -En este punto, tu subgrafo y GRT se han recibido en Arbitrum, pero el subgrafo aún no se ha publicado. Deberás conectarte utilizando la wallet de L2 que elegiste como la wallet receptora, cambiar la red de tu wallet a Arbitrum y hacer clic en "Publicar Subgrafo". +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publicar el subgrafo](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Espera a que el subgrafo este publicado](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Esto publicará el subgrafo para que los Indexadores que estén operando en Arbitrum puedan comenzar a servirlo. También se emitirá señal de curación utilizando los GRT que se transfirieron desde L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Paso 5: Actualizando la URL de consulta -¡Tu subgrafo se ha transferido correctamente a Arbitrum! Para realizar consultas al subgrafo, la nueva URL será: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-Ten en cuenta que el ID del subgrafo en Arbitrum será diferente al que tenías en mainnet, pero siempre podrás encontrarlo en Explorer o Studio. Como se mencionó anteriormente (ver "Comprensión de lo que sucede con la señal, tu subgrafo de L1 y las URL de consulta"), la antigua URL de L1 será compatible durante un corto período de tiempo, pero debes cambiar tus consultas a la nueva dirección tan pronto como el subgrafo se haya sincronizado en L2.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## Cómo transferir tu curación a Arbitrum (L2)

-## Comprensión de lo que sucede con la curación al transferir subgrafos a L2
+## Understanding what happens to curation on Subgraph transfers to L2

-Cuando el propietario de un subgrafo transfiere un subgrafo a Arbitrum, toda la señal del subgrafo se convierte en GRT al mismo tiempo. Esto se aplica a la señal "migrada automáticamente", es decir, la señal que no está vinculada a una versión o deploy específico del subgrafo, sino que sigue la última versión del subgrafo.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-Esta conversión de señal a GRT es similar a lo que sucedería si el propietario del subgrafo deprecara el subgrafo en L1.
Cuando el subgrafo se depreca o se transfiere, toda la señal de curación se "quema" simultáneamente (utilizando la bonding curve de curación) y el GRT resultante se mantiene en el contrato inteligente de GNS (que es el contrato que maneja las actualizaciones de subgrafos y la señal auto-migrada). Cada Curador en ese subgrafo, por lo tanto, tiene un reclamo sobre ese GRT proporcional a la cantidad de participaciones que tenían para el subgrafo.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-Una fracción de estos GRT correspondientes al propietario del subgrafo se envía a L2 junto con el subgrafo.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-En este punto, el GRT curado ya no acumulará más tarifas de consulta, por lo que los Curadores pueden optar por retirar su GRT o transferirlo al mismo subgrafo en L2, donde se puede utilizar para generar nueva señal de curación. No hay prisa para hacerlo, ya que el GRT se puede mantener indefinidamente y todos reciben una cantidad proporcional a sus participaciones, independientemente de cuándo lo hagan.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
## Elección de tu wallet en L2 @@ -130,9 +130,9 @@ Si estás utilizando una billetera de contrato inteligente, como una multisig (p Antes de comenzar la transferencia, debes decidir qué dirección será la propietaria de la curación en L2 (ver "Elegir tu wallet en L2" arriba), y se recomienda tener algo de ETH para el gas ya bridgeado en Arbitrum en caso de que necesites volver a intentar la ejecución del mensaje en L2. Puedes comprar ETH en algunos exchanges y retirarlo directamente a Arbitrum, o puedes utilizar el puente de Arbitrum para enviar ETH desde una wallet en la red principal a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - dado que las tarifas de gas en Arbitrum son muy bajas, es probable que solo necesites una pequeña cantidad, por ejemplo, 0.01 ETH será más que suficiente. -Si un subgrafo al que has curado ha sido transferido a L2, verás un mensaje en Explorer que te indicará que estás curando hacia un subgrafo transferido. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Cuando estás en la página del subgrafo, puedes elegir retirar o transferir la curación. Al hacer clic en "Transferir Señal a Arbitrum" se abrirá la herramienta de transferencia. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transferir señal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Si este es el caso, deberás conectarte utilizando una wallet de L2 que tenga al ## Retirando tu curacion en L1 -Si prefieres no enviar tu GRT a L2, o prefieres bridgear GRT de forma manual, puedes retirar tu GRT curado en L1. En el banner en la página del subgrafo, elige "Retirar Señal" y confirma la transacción; el GRT se enviará a tu dirección de Curador. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/es/archived/sunrise.mdx b/website/src/pages/es/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/es/archived/sunrise.mdx +++ b/website/src/pages/es/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. 
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/es/contracts.json b/website/src/pages/es/contracts.json index 35d93318521e..6de137f39dc3 100644 --- a/website/src/pages/es/contracts.json +++ b/website/src/pages/es/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Contrato", "address": "Dirección" } diff --git a/website/src/pages/es/global.json b/website/src/pages/es/global.json index b9c8db5fa5fa..a35f826df076 100644 --- a/website/src/pages/es/global.json +++ b/website/src/pages/es/global.json @@ -1,35 +1,78 @@ { "navigation": { - "title": "Main navigation", - "show": "Show navigation", - "hide": "Hide navigation", + "title": "Navegación principal", + "show": "Mostrar navegación", + "hide": "Ocultar navegación", "subgraphs": "Subgrafos", "substreams": "Corrientes secundarias", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "Subgrafos impulsados por Substreams", + "tokenApi": "Token API", + "indexing": "Indexación", + "resources": "Recursos", + "archived": "Archivado" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Última actualización", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Tiempo de lectura", + "minutes": "minutos" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Página anterior", + "next": "Página siguiente", + "edit": "Editar en GitHub", + "onThisPage": "En esta página", + "tableOfContents": "Tabla de contenidos", + "linkToThisSection": "Enlace a esta sección" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + 
"headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descripción", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Estado", + "description": "Descripción", + "liveResponse": "Live Response", + "example": "Ejemplo" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "¡Ups! 
Esta página se ha perdido en el espacio...", + "subtitle": "Verifica que estés usando la dirección correcta o visita nuestro sitio web haciendo clic en el enlace de abajo.", + "back": "Ir a la página principal" } } diff --git a/website/src/pages/es/index.json b/website/src/pages/es/index.json index c980229ff3d5..2c1eeb105f26 100644 --- a/website/src/pages/es/index.json +++ b/website/src/pages/es/index.json @@ -1,99 +1,175 @@ { - "title": "Home", + "title": "Inicio", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "Documentación de The Graph", + "description": "Inicia tu proyecto web3 con las herramientas para extraer, transformar y cargar datos de blockchain.", + "cta1": "Cómo funciona The Graph", + "cta2": "Crea tu primer subgrafo" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Elige una solución que se ajuste a tus necesidades: interactúa con los datos de blockchain a tu manera.", "subgraphs": { "title": "Subgrafos", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extrae, procesa y consulta datos de blockchain con APIs abiertas.", + "cta": "Desarrollar un subgrafo" }, "substreams": { "title": "Corrientes secundarias", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Obtén y consume datos de blockchain con ejecución paralela.", + "cta": "Desarrolla con Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": 
"Subgrafos impulsados por Substreams", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", + "cta": "Configura un subgrafo impulsado por Substreams" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Indexa datos de blockchain y sírvelos a través de consultas GraphQL.", + "cta": "Configura un nodo local de Graph" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extrae datos de blockchain en archivos planos para mejorar los tiempos de sincronización y las capacidades de transmisión.", + "cta": "Comienza con Firehose" } }, "supportedNetworks": { "title": "Redes Admitidas", + "details": "Network Details", + "services": "Services", + "type": "Tipo", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Documentación", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", + "base": "The Graph soporta {0}. 
Para agregar una nueva red, {1}", "networks": "networks", - "completeThisForm": "complete this form" + "completeThisForm": "completa este formulario" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Nombre", + "id": "ID", + "subgraphs": "Subgrafos", + "substreams": "Corrientes secundarias", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Corrientes secundarias", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Facturación", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { - "title": "Guides", + "title": "Guías", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Buscar datos en Graph Explorer", + "description": "Aprovecha cientos de subgrafos públicos para datos existentes de blockchain." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Publicar un Subgrafo", + "description": "Agrega tu subgrafo a la red descentralizada." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Publicar Substreams", + "description": "Lanza tu paquete de Substreams al Registro de Substreams." }, "queryingBestPractices": { "title": "Mejores Prácticas para Consultas", - "description": "Optimize your subgraph queries for faster, better results." + "description": "Optimiza tus consultas de subgrafo para obtener resultados más rápidos y mejores." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Series temporales optimizadas y agregaciones", + "description": "Optimiza tu subgrafo para mejorar la eficiencia." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "Gestión de claves API", + "description": "Crea, gestiona y asegura fácilmente las claves API para tus subgrafos." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Transferir a The Graph", + "description": "Mejora tu subgrafo sin problemas desde cualquier plataforma." 
} }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Tutoriales en video", + "watchOnYouTube": "Ver en YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph explicado en 1 minuto", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "¿Qué es la delegación?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Cómo indexar Solana con un subgrafo impulsado por Substreams", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Tiempo de lectura", + "duration": "Duración", "minutes": "min" } } diff --git a/website/src/pages/es/indexing/_meta-titles.json b/website/src/pages/es/indexing/_meta-titles.json index 42f4de188fd4..ee110b7adfe8 100644 --- a/website/src/pages/es/indexing/_meta-titles.json +++ b/website/src/pages/es/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Herramientas para Indexadores" } diff --git a/website/src/pages/es/indexing/chain-integration-overview.mdx b/website/src/pages/es/indexing/chain-integration-overview.mdx index 77141e82b34a..dfcb2a2442d7 100644 --- a/website/src/pages/es/indexing/chain-integration-overview.mdx +++ b/website/src/pages/es/indexing/chain-integration-overview.mdx @@ -1,5 +1,5 @@ --- -title: Chain Integration Process Overview +title: Descripción general del proceso de integración de cadena --- A transparent and governance-based integration process was designed for blockchain teams seeking [integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468). It is a 3-phase process, as summarised below. @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/es/indexing/new-chain-integration.mdx b/website/src/pages/es/indexing/new-chain-integration.mdx index 04aa90b6e5ae..7316741aa0e6 100644 --- a/website/src/pages/es/indexing/new-chain-integration.mdx +++ b/website/src/pages/es/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. 
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. 
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Configuración del Graph Node -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. 
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/es/indexing/overview.mdx b/website/src/pages/es/indexing/overview.mdx index 43b74287044a..582962c94c0d 100644 --- a/website/src/pages/es/indexing/overview.mdx +++ b/website/src/pages/es/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i Los GRT que se depositan en stake en el protocolo está sujeto a un periodo de desbloqueo y puede incurrir en slashing (ser reducidos) si los Indexadores son maliciosos y sirven datos incorrectos a las aplicaciones o si indexan incorrectamente. Los Indexadores también obtienen recompensas por stake delegados de los Delegadores, para contribuir a la red. -Los Indexadores seleccionan subgrafos para indexar basados en la señal de curación del subgrafo, donde los Curadores realizan stake de sus GRT para indicar qué subgrafos son de mejor calidad y deben tener prioridad para ser indexados. Los consumidores (por ejemplo, aplicaciones, clientes) también pueden establecer parámetros para los cuales los Indexadores procesan consultas para sus subgrafos y establecen preferencias para el precio asignado a cada consulta. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. 
applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
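The threshold matching described above can be sketched in TypeScript (the language of the indexer components). This is an illustrative simplification, not the Indexer agent's actual code: the rule field names follow this section, but the `NetworkData` shape and the GRT units are assumptions.

```typescript
// Sketch of the `decisionBasis: rules` check: a deployment is chosen when
// ANY non-null threshold on the matching rule is satisfied.
interface IndexingRule {
  deployment: string;
  decisionBasis: "rules" | "always" | "never";
  minStake?: number;            // GRT allocated to the deployment
  minSignal?: number;           // GRT of curation signal
  maxSignal?: number;
  minAverageQueryFees?: number; // GRT
}

// Illustrative shape for the per-deployment values fetched from the network.
interface NetworkData {
  stake: number;
  signal: number;
  averageQueryFees: number;
}

function shouldIndex(rule: IndexingRule, data: NetworkData): boolean {
  if (rule.decisionBasis === "always") return true;
  if (rule.decisionBasis === "never") return false;
  return (
    (rule.minStake !== undefined && data.stake >= rule.minStake) ||
    (rule.minSignal !== undefined && data.signal >= rule.minSignal) ||
    (rule.maxSignal !== undefined && data.signal <= rule.maxSignal) ||
    (rule.minAverageQueryFees !== undefined &&
      data.averageQueryFees >= rule.minAverageQueryFees)
  );
}

// The example from the text: a global rule with `minStake` of 5 GRT selects
// any deployment with more than 5 GRT of stake allocated to it.
const globalRule: IndexingRule = {
  deployment: "global",
  decisionBasis: "rules",
  minStake: 5,
};
console.log(shouldIndex(globalRule, { stake: 6, signal: 0, averageQueryFees: 0 })); // true
```

In the real agent these rules are stored via the Indexer Management API rather than constructed inline; the sketch only shows the decision logic.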
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/es/indexing/supported-network-requirements.mdx b/website/src/pages/es/indexing/supported-network-requirements.mdx index dfebec344880..95aad8f3c0ac 100644 --- a/website/src/pages/es/indexing/supported-network-requirements.mdx +++ b/website/src/pages/es/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Red | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Red | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/es/indexing/tap.mdx b/website/src/pages/es/indexing/tap.mdx index 36fb0939af81..024347d695c4 100644 --- a/website/src/pages/es/indexing/tap.mdx +++ b/website/src/pages/es/indexing/tap.mdx @@ -1,140 +1,21 @@ --- -title: |+ - Guía de Migración TAP - Aprende sobre el nuevo sistema de pagos de The Graph, el Protocolo de Agregación de Línea de Tiempo (TAP). Este sistema ofrece microtransacciones rápidas y eficientes con una confianza minimizada. - - Descripción General - TAP es un reemplazo directo del sistema de pagos Scalar actualmente en uso. Ofrece las siguientes características clave: - - Manejo eficiente de micropagos. - Agrega una capa de consolidación a las transacciones y costos en la cadena. - Permite a los Indexadores controlar los recibos y pagos, garantizando el pago por consultas. - Facilita puertas de enlace descentralizadas y sin confianza, mejorando el indexer-service para múltiples remitentes. - - Especificaciones - TAP permite que un remitente realice múltiples pagos a un receptor a través de TAP Receipts, los cuales agrupan estos pagos en un único pago denominado Receipt Aggregate Voucher (RAV). Este pago consolidado puede verificarse en la blockchain, reduciendo la cantidad de transacciones y simplificando el proceso de pago. - - Para cada consulta, la puerta de enlace te enviará un recibo firmado (signed receipt) que se almacenará en tu base de datos. Luego, estas consultas serán agrupadas por un tap-agent mediante una solicitud. Posteriormente, recibirás un RAV. Puedes actualizar un RAV enviándolo con recibos más recientes, lo que generará un nuevo RAV con un valor incrementado. - - Detalles del RAV - Es dinero que está pendiente de ser enviado a la blockchain. - Continuará enviando solicitudes para agrupar recibos y garantizar que el valor total de los recibos no agregados no supere la cantidad dispuesta a perder. 
- Cada RAV puede ser canjeado una sola vez en los contratos, por lo que se envían después de que la asignación se haya cerrado. - - Canjeo de RAV - Mientras ejecutes tap-agent e indexer-agent, todo el proceso se ejecutará automáticamente. A continuación, se presenta un desglose detallado del proceso: - - Proceso de Canjeo de RAV - 1. Un Indexador cierra la asignación. - 2. Durante el período , tap-agent toma todos los recibos pendientes de esa asignación específica y solicita su agregación en un RAV, marcándolo como el último. - 3. Indexer-agent toma todos los últimos RAVs y envía solicitudes de canje a la blockchain, lo que actualizará el valor de redeem_at. - 4. Durante el período , indexer-agent monitorea si la blockchain experimenta alguna reorganización que revierta la transacción. - Si la transacción es revertida, el RAV se reenvía a la blockchain. Si no es revertida, se marca como final. - - Blockchain Addresses - Contracts - Contract Arbitrum Mainnet (42161) Arbitrum Sepolia (421614) - TAP Verifier 0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a 0xfC24cE7a4428A6B89B52645243662A02BA734ECF - AllocationIDTracker 0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c 0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11 - Escrow 0x8f477709eF277d4A880801D01A140a9CF88bA0d3 0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02 - Gateway - Component Edge and Node Mainnet (Arbitrum Mainnet) Edge and Node Testnet (Arbitrum Sepolia) - Sender 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 0xC3dDf37906724732FfD748057FEBe23379b0710D - Signers 0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211 0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE - Aggregator https://tap-aggregator.network.thegraph.com https://tap-aggregator.testnet.thegraph.com - - Requisitos - Además de los requisitos habituales para ejecutar un indexador, necesitarás un endpoint tap-escrow-subgraph para consultar actualizaciones de TAP. Puedes utilizar The Graph Network para hacer consultas o alojarlo en tu propio graph-node. 
- - Subgrafo Graph TAP Arbitrum Sepolia (para la testnet de The Graph). - Subgrafo Graph TAP Arbitrum One (para la mainnet de The Graph). - - Nota: Actualmente, indexer-agent no gestiona la indexación de este subgrafo como lo hace con la implementación del subgrafo de la red. Por lo tanto, debes indexarlo manualmente. - - Guía de Migración - Versiones de Software - La versión requerida del software se puede encontrar aquí. - - Pasos - 1. Indexer Agent - Sigue el mismo proceso de configuración. - Agrega el nuevo argumento --tap-subgraph-endpoint para activar las rutas de código de TAP y habilitar el canje de RAVs de TAP. - 2. Indexer Service - Reemplaza completamente tu configuración actual con la nueva versión de Indexer Service rs. Se recomienda usar la imagen del contenedor. - Como en la versión anterior, puedes escalar Indexer Service horizontalmente con facilidad. Sigue siendo stateless. - 3. TAP Agent - Ejecuta una única instancia de TAP Agent en todo momento. Se recomienda usar la imagen del contenedor. - 4. Configura Indexer Service y TAP Agent mediante un archivo TOML compartido, suministrado con el argumento --config /path/to/config.toml. - Consulta la configuración completa y los valores predeterminados. 
- Para una configuración mínima, usa la siguiente plantilla: - - toml - Copy - Edit - [indexer] - indexer_address = "0x1111111111111111111111111111111111111111" - operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" - - [database] - postgres_url = "postgres://postgres@postgres:5432/postgres" - - [graph_node] - query_url = "http://graph-node:8000" - status_url = "http://graph-node:8000/graphql" - - [subgraphs.network] - query_url = "http://example.com/network-subgraph" - deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" - - [subgraphs.escrow] - query_url = "http://example.com/network-subgraph" - deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" - - [blockchain] - chain_id = 1337 - receipts_verifier_address = "0x2222222222222222222222222222222222222222" - - [tap] - max_amount_willing_to_lose_grt = 20 - - [tap.sender_aggregator_endpoints] - 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" - Notas Importantes - Los valores de tap.sender_aggregator_endpoints se encuentran en la sección de gateway. - El valor de blockchain.receipts_verifier_address debe coincidir con la sección de direcciones de Blockchain según el chain ID apropiado. - Nivel de Registro (Log Level) - Puedes establecer el nivel de registro con la variable de entorno RUST_LOG. Se recomienda: - - bash - Copy - Edit - RUST_LOG=indexer_tap_agent=debug,info - Monitoreo - Métricas - Todos los componentes exponen el puerto 7300, que puede ser consultado por Prometheus. - - Grafana Dashboard - Puedes descargar el Dashboard de Grafana e importarlo. - - Launchpad - Actualmente, hay una versión en desarrollo de indexer-rs y tap-agent, que puedes encontrar aquí. - +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. 
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.

## Descripción

-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:

- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.

-## Specifics
+### Specifics

-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments to a receiver as **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
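The aggregation step described above — many small receipts collapsed into one voucher whose value only grows — can be sketched as follows. The field names (`value`, `timestampNs`, `valueAggregate`) are illustrative and do not mirror the actual `tap_core` structs; real receipts and RAVs are also cryptographically signed, which is omitted here.

```typescript
// Illustrative receipt -> RAV aggregation. Assumes at least one receipt.
interface Receipt {
  allocationId: string;
  timestampNs: bigint; // ordering key for receipts
  value: bigint;       // per-query payment amount
}

interface RAV {
  allocationId: string;
  timestampNs: bigint;    // newest receipt covered by this voucher
  valueAggregate: bigint; // running total of all aggregated receipts
}

// Sending an existing RAV along with newer receipts yields a new RAV with
// an increased value; only the final RAV needs to be redeemed onchain.
function aggregate(prev: RAV | undefined, receipts: Receipt[]): RAV {
  const base: RAV =
    prev ?? { allocationId: receipts[0].allocationId, timestampNs: 0n, valueAggregate: 0n };
  // Only receipts newer than the previous RAV contribute to the new total.
  const newer = receipts.filter((r) => r.timestampNs > base.timestampNs);
  return {
    allocationId: base.allocationId,
    timestampNs: newer.reduce((t, r) => (r.timestampNs > t ? r.timestampNs : t), base.timestampNs),
    valueAggregate: newer.reduce((v, r) => v + r.value, base.valueAggregate),
  };
}

const rav1 = aggregate(undefined, [
  { allocationId: "0xabc", timestampNs: 1n, value: 10n },
  { allocationId: "0xabc", timestampNs: 2n, value: 15n },
]);
const rav2 = aggregate(rav1, [{ allocationId: "0xabc", timestampNs: 3n, value: 5n }]);
console.log(rav2.valueAggregate); // 30n
```

This is why only one onchain transaction per allocation is needed regardless of how many queries were served: each intermediate RAV supersedes the last.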
@@ -178,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -198,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). 
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -247,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/es/indexing/tooling/graph-node.mdx b/website/src/pages/es/indexing/tooling/graph-node.mdx index 7fadb2a27660..c2522201c5f5 100644 --- a/website/src/pages/es/indexing/tooling/graph-node.mdx +++ b/website/src/pages/es/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node es el componente que indexa los subgrafos, y hace que los datos resultantes estén disponibles para su consulta a través de una API GraphQL. Como tal, es fundamental para el stack del Indexador, y el correcto funcionamiento de Graph Node es crucial para ejecutar un Indexador con éxito. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. 
As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### Base de datos PostgreSQL -El almacén principal para Graph Node, aquí es donde se almacenan los datos de los subgrafos, así como los metadatos de los subgrafos, y los datos de una red subgrafo-agnóstica como el caché de bloques, y el caché eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clientes de red Para indexar una red, Graph Node necesita acceso a un cliente de red a través de una API JSON-RPC compatible con EVM. Esta RPC puede conectarse a un solo cliente o puede ser una configuración más compleja que equilibre la carga entre varios clientes. 
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### Nodos IPFS -Los metadatos de deploy del subgrafo se almacenan en la red IPFS. El Graph Node accede principalmente al nodo IPFS durante el deploy del subgrafo para obtener el manifiesto del subgrafo y todos los archivos vinculados. Los Indexadores de red no necesitan alojar su propio nodo IPFS. En https://ipfs.network.thegraph.com se aloja un nodo IPFS para la red. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. 
Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Servidor de métricas Prometheus @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit Cuando está funcionando, Graph Node muestra los siguientes puertos: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.

## Configuración avanzada de Graph Node

-En su forma más simple, Graph Node puede funcionar con una única instancia de Graph Node, una única base de datos PostgreSQL, un nodo IPFS y los clientes de red que requieran los subgrafos a indexar.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.

This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.

@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:

#### Graph Nodes múltiples

-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes.
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Ten en cuenta que varios Graph Nodes pueden configurarse para utilizar la misma base de datos, que a su vez puede escalarse horizontalmente mediante sharding. #### Reglas de deploy -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
Ejemplo de configuración de reglas de deploy: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Cualquier nodo cuyo --node-id coincida con la expresión regular se configurará Para la mayoría de los casos de uso, una única base de datos Postgres es suficiente para soportar una instancia de graph-node. Cuando una instancia de graph-node supera una única base de datos Postgres, es posible dividir el almacenamiento de los datos de graph-node en varias bases de datos Postgres. Todas las bases de datos juntas forman el almacén de la instancia graph-node. Cada base de datos individual se denomina shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. El Sharding resulta útil cuando la base de datos existente no puede soportar la carga que le impone Graph Node y cuando ya no es posible aumentar el tamaño de la base de datos. -> En general, es mejor hacer una única base de datos lo más grande posible, antes de empezar con los shards. 
Una excepción es cuando el tráfico de consultas se divide de forma muy desigual entre los subgrafos; en esas situaciones puede ayudar dramáticamente si los subgrafos de alto volumen se mantienen en un shard y todo lo demás en otro, porque esa configuración hace que sea más probable que los datos de los subgrafos de alto volumen permanezcan en la caché interna de la base de datos y no sean reemplazados por datos que no se necesitan tanto de los subgrafos de bajo volumen.
+> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.

En términos de configuración de las conexiones, comienza con max_connections en postgresql.conf establecido en 400 (o tal vez incluso 200) y mira las métricas de Prometheus store_connection_wait_time_ms y store_connection_checkout_count. Tiempos de espera notables (cualquier cosa por encima de 5ms) es una indicación de que hay muy pocas conexiones disponibles; altos tiempos de espera allí también serán causados por la base de datos que está muy ocupada (como alta carga de CPU). Sin embargo, si la base de datos parece estable, los tiempos de espera elevados indican la necesidad de aumentar el número de conexiones. En la configuración, el número de conexiones que puede utilizar cada instancia de Graph Node es un límite superior, y Graph Node no mantendrá conexiones abiertas si no las necesita.
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Soporte de múltiples redes -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Redes múltiples - Múltiples proveedores por red (esto puede permitir dividir la carga entre los proveedores, y también puede permitir la configuración de nodos completos, así como nodos de archivo, con Graph Node prefiriendo proveedores más baratos si una carga de trabajo dada lo permite). @@ -225,11 +225,11 @@ Los usuarios que están operando una configuración de indexación escalada con ### Operar Graph Node -Dado un Graph Node en funcionamiento (¡o Graph Nodes!), el reto consiste en gestionar los subgrafos deployados en esos nodos. Graph Node ofrece una serie de herramientas para ayudar a gestionar los subgrafos. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. 
In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).

@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker

Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`

-### Trabajar con subgrafos
+### Working with Subgraphs

#### API de estado de indexación

-Disponible por defecto en el puerto 8030/graphql, la API de estado de indexación expone una serie de métodos para comprobar el estado de indexación de diferentes subgrafos, comprobar pruebas de indexación, inspeccionar características de subgrafos y mucho más.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.

The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).

@@ -263,7 +263,7 @@ El proceso de indexación consta de tres partes diferenciadas:

- Procesar los eventos en orden con los handlers apropiados (esto puede implicar llamar a la cadena para obtener el estado y obtener datos del store)
- Escribir los datos resultantes en el store

-"Estas etapas están en serie (es decir, se pueden ejecutar en paralelo), pero dependen una de la otra. Cuando los subgrafos son lentos en indexarse, la causa subyacente dependerá del subgrafo específico.
Causas habituales de la lentitud de indexación: @@ -276,24 +276,24 @@ Causas habituales de la lentitud de indexación: - El proveedor en sí mismo se está quedando rezagado con respecto a la cabeza de la cadena - Lentitud en la obtención de nuevos recibos en la cabeza de la cadena desde el proveedor -Las métricas de indexación de subgrafos pueden ayudar a diagnosticar la causa raíz de la lentitud de la indexación. En algunos casos, el problema reside en el propio subgrafo, pero en otros, la mejora de los proveedores de red, la reducción de la contención de la base de datos y otras mejoras de configuración pueden mejorar notablemente el rendimiento de la indexación. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Subgrafos fallidos +#### Failed Subgraphs -Durante la indexación, los subgrafos pueden fallar si encuentran datos inesperados, si algún componente no funciona como se esperaba o si hay algún error en los event handlers o en la configuración. Hay dos tipos generales de fallo: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Fallos deterministas: son fallos que no se resolverán con reintentos - Fallos no deterministas: pueden deberse a problemas con el proveedor o a algún error inesperado de Graph Node. Cuando se produce un fallo no determinista, Graph Node reintentará los handlers que han fallado, retrocediendo en el tiempo. -En algunos casos, un fallo puede ser resuelto por el Indexador (por ejemplo, si el error es resultado de no tener el tipo correcto de proveedor, añadir el proveedor necesario permitirá continuar con la indexación). 
Sin embargo, en otros, se requiere un cambio en el código del subgrafo. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Caché de bloques y llamadas -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. 
In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Si se sospecha de una inconsistencia en el caché de bloques, como un evento de falta de recepción tx: @@ -304,7 +304,7 @@ Si se sospecha de una inconsistencia en el caché de bloques, como un evento de #### Consulta de problemas y errores -Una vez que un subgrafo ha sido indexado, los Indexadores pueden esperar servir consultas a través del endpoint de consulta dedicado del subgrafo. Si el Indexador espera servir un volumen de consultas significativo, se recomienda un nodo de consulta dedicado, y en caso de volúmenes de consulta muy altos, los Indexadores pueden querer configurar shards de réplica para que las consultas no impacten en el proceso de indexación. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Sin embargo, incluso con un nodo de consulta dedicado y réplicas, ciertas consultas pueden llevar mucho tiempo para ejecutarse y, en algunos casos, aumentar el uso de memoria y afectar negativamente el tiempo de consulta de otros usuarios. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Análisis de consultas -Las consultas problemáticas suelen surgir de dos maneras. 
En algunos casos, los propios usuarios informan de que una consulta determinada es lenta. En ese caso, el reto consiste en diagnosticar el motivo de la lentitud, ya sea un problema general o específico de ese subgrafo o consulta. Y, por supuesto, resolverlo, si es posible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. En otros casos, el desencadenante puede ser un uso elevado de memoria en un nodo de consulta, en cuyo caso el reto consiste primero en identificar la consulta causante del problema. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### Eliminar subgrafos
+#### Removing Subgraphs

> Se trata de una nueva funcionalidad, que estará disponible en Graph Node 0.29.x

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
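The account-like rule of thumb discussed in the graph-node docs above can be made concrete with a small sketch. This is not part of `graphman`; the function name and sample numbers are invented for illustration, and only the 1% threshold comes from the docs: a table is a candidate when its distinct entities are less than 1% of its total rows.

```python
# Hypothetical sketch of the account-like heuristic described above:
# a table is a candidate for `graphman stats account-like` when distinct
# entities make up less than 1% of its rows, i.e. it mostly stores many
# historical versions of relatively few entities.

def is_account_like(distinct_entities: int, total_rows: int,
                    threshold: float = 0.01) -> bool:
    """Return True when the table looks account-like under the 1% rule."""
    if total_rows == 0:
        return False
    return distinct_entities / total_rows < threshold

# A Uniswap-style `pair` table: a few thousand pairs, millions of versions.
print(is_account_like(5_000, 2_000_000))    # True -> worth trying the optimization
# A table where most rows are distinct entities: leave it alone.
print(is_account_like(900_000, 1_000_000))  # False
```

As the docs note, the heuristic is only a starting point: after enabling the optimization, verify via `pg_stat_activity` that queries against that table did not in fact get slower, and turn it off again if they did.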
diff --git a/website/src/pages/es/indexing/tooling/graphcast.mdx b/website/src/pages/es/indexing/tooling/graphcast.mdx index 3da74365af91..3fef530ae421 100644 --- a/website/src/pages/es/indexing/tooling/graphcast.mdx +++ b/website/src/pages/es/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ En la actualidad, el costo de transmitir información a otros participantes de l El Graphcast SDK (Kit de Desarrollo de Software) permite a los desarrolladores construir Radios, que son aplicaciones impulsadas por gossip que los Indexadores pueden utilizar con una finalidad específica. También queremos crear algunas Radios (o dar soporte a otros desarrolladores/equipos que deseen construir Radios) para los siguientes casos de uso: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Llevar a cabo subastas y coordinar warp-syncing de datos de subgrafos, substreams y Firehose de otros Indexadores. -- Autoinforme sobre análisis de consultas activas, incluidos volúmenes de consultas de subgrafos, volúmenes de tarifas, etc. -- Generar informes propios sobre análisis del proceso de indexación, que incluyan período de indexación de subgrafos, costos de gas handler, indexación de errores encontrados, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Generar informes propios sobre información de stack que incluyan versión del graph-node, la versión de Postgres, la versión del cliente de Ethereum, etc. 
### Aprende más diff --git a/website/src/pages/es/resources/benefits.mdx b/website/src/pages/es/resources/benefits.mdx index e50969112dde..509c70f8a198 100644 --- a/website/src/pages/es/resources/benefits.mdx +++ b/website/src/pages/es/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $350 por mes | $0 | -| Costos de consulta | $0+ | $0 per month | -| Tiempo de ingeniería | $400 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | 100,000 (Free Plan) | -| Costo por consulta | $0 | $0 | -| Infrastructure | Centralizado | Descentralizado | -| Redundancia geográfica | $750+ por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $750+ | $0 | +| Comparación de costos | Self Hosted | The Graph Network | +| :------------------------------: | :---------------------------------------: | :-------------------------------------------------------------------: | +| Costo mensual del servidor\* | $350 por mes | $0 | +| Costos de consulta | $0+ | $0 per month | +| Tiempo de ingeniería | $400 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | 100,000 (Free Plan) | +| Costo por consulta | $0 | $0 | +| Infrastructure | Centralizado | Descentralizado | +| Redundancia geográfica | $750+ por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $350 por mes | $0 | -| Costos de consulta | 
$500 por mes | $120 per month | -| Tiempo de ingeniería | $800 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | ~3,000,000 | -| Costo por consulta | $0 | $0.00004 | -| Infrastructure | Centralizado | Descentralizado | -| Gastos de ingeniería | $200 por hora | Incluido | -| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $1,650+ | $120 | +| Comparación de costos | Self Hosted | The Graph Network | +| :------------------------------: | :-----------------------------------------: | :-------------------------------------------------------------------: | +| Costo mensual del servidor\* | $350 por mes | $0 | +| Costos de consulta | $500 por mes | $120 per month | +| Tiempo de ingeniería | $800 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | ~3,000,000 | +| Costo por consulta | $0 | $0.00004 | +| Infrastructure | Centralizado | Descentralizado | +| Gastos de ingeniería | $200 por hora | Incluido | +| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $1100 por mes, por nodo | $0 | -| Costos de consulta | $4000 | $1,200 per month | -| Número de nodos necesarios | 10 | No aplica | -| Tiempo de ingeniería | $6,000 o más por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | ~30,000,000 | -| Costo por consulta | $0 | $0.00004 | -| Infrastructure | Centralizado | Descentralizado | -| Redundancia geográfica | 
$1,200 en costos totales por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $11,000+ | $1,200 | +| Comparación de costos | Self Hosted | The Graph Network | +| :------------------------------: | :-----------------------------------------: | :-------------------------------------------------------------------: | +| Costo mensual del servidor\* | $1100 por mes, por nodo | $0 | +| Costos de consulta | $4000 | $1,200 per month | +| Número de nodos necesarios | 10 | No aplica | +| Tiempo de ingeniería | $6,000 o más por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | ~30,000,000 | +| Costo por consulta | $0 | $0.00004 | +| Infrastructure | Centralizado | Descentralizado | +| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $11,000+ | $1,200 | \*incluidos los costos de copia de seguridad: $50-$100 por mes @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. 
Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -La señal de curación en un subgrafo es una acción opcional de única vez y no tiene costo neto (por ejemplo, se pueden curar $1k en señales en un subgrafo y luego retirarlas, con el potencial de obtener retornos en el proceso). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/es/resources/glossary.mdx b/website/src/pages/es/resources/glossary.mdx index a3614062a63a..dfbe07decedf 100644 --- a/website/src/pages/es/resources/glossary.mdx +++ b/website/src/pages/es/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glosario - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. 
**Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. 
Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. 
When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glosario - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. 
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. 
- **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx index 354d8c68a3e8..42a4b35e7677 100644 --- a/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/es/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,49 +2,49 @@ title: Guía de Migración de AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). 
Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Esto permitirá a los desarrolladores de subgrafos utilizar las nuevas características del lenguaje AS y la librería estándar. +That will enable Subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +Esta guía es aplicable para cualquiera que use `graph-cli`/`graph-ts` bajo la versión `0.22.0`. Si ya estás en una versión superior (o igual) a esa, has estado usando la versión `0.19.10` de AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. 
## Características ### Nueva Funcionalidad -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` 
([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- `TypedArray`s ahora pueden construirse desde `ArrayBuffer`s usando el [nuevo método estático `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- Nuevas funciones de la biblioteca estándar: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare` y `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Se agregó soporte para x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Se agregó `StaticArray`, una variante de array más eficiente ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Se agregó `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Se implementó el argumento `radix` en `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Se agregó soporte para los separadores en los literales de punto flotante ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Se agregó soporte para las funciones de primera clase ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Se agregaron builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Se implementó `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Se agregó soporte para las plantillas de strings literales ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Se agregó `encodeURI(Component)` y `decodeURI(Component)`
([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Se agregó `toString`, `toDateString` y `toTimeString` a `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Se agregó `toUTCString` para `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Se agregó el tipo builtin `nonnull/NonNullable` ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) ### Optimizaciones -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Funciones `Math` como `exp`, `exp2`, `log`, `log2` y `pow` fueron reemplazadas por variantes más rápidas ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Optimizar ligeramente `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Caché de más accesos a campos en std Map y Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimizar para potencias de dos en `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) ### Otros -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El tipo de un array literal ahora puede inferirse a partir de su
contenido ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Actualizado stdlib a Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) ## ¿Cómo actualizar? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,11 +52,11 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. Actualiza la `graph-cli` que usas a la `última` versión: ```bash # si lo tiene instalada de forma global @@ -66,14 +66,14 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. Haz lo mismo con `graph-ts`, pero en lugar de instalarlo globalmente, guárdalo en tus dependencias principales: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. Sigue el resto de la guía para arreglar los cambios que rompen el lenguaje. -5. Run `codegen` and `deploy` again. +5. Ejecuta `codegen` y `deploy` nuevamente. ## Rompiendo los esquemas @@ -106,11 +106,11 @@ let maybeValue = load()! // rompiendo el runtime si el valor es nulo maybeValue.aMethod() ``` -Si no estás seguro de cuál elegir, te recomendamos que utilices siempre la versión segura. Si el valor no existe, es posible que quieras hacer una declaración if temprana con un retorno en tu handler de subgrafo. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.
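That early-return pattern can be sketched in plain TypeScript. This is a minimal, illustrative stand-in: the `Gravatar` class here only simulates what `graph-ts` codegen would generate for an entity, and the handler name is hypothetical.

```typescript
// Simplified stand-in — in a real Subgraph this class comes from graph-ts codegen.
class Gravatar {
  displayName: string = ''
  static store = new Map<string, Gravatar>()
  static load(id: string): Gravatar | null {
    // Simulates Entity.load(): null when the entity does not exist yet.
    return Gravatar.store.get(id) ?? null
  }
  save(): void {
    // A real entity would persist itself to the store here.
  }
}

// The safe version: return early when the entity is missing,
// instead of forcing non-null with `!` and risking a runtime abort.
function handleUpdate(id: string, name: string): string {
  const gravatar = Gravatar.load(id)
  if (gravatar == null) {
    return 'skipped'
  }
  gravatar.displayName = name
  gravatar.save()
  return 'updated'
}

console.log(handleUpdate('0x0', 'alice')) // no entity stored yet → "skipped"
```

The unsafe alternative would be `Gravatar.load(id)!`, which compiles but aborts the mapping at runtime whenever the entity is absent.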
### Variable Shadowing -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +Antes podías hacer [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) y un código como este funcionaría: ```typescript let a = 10 @@ -132,7 +132,7 @@ Tendrás que cambiar el nombre de las variables duplicadas si tienes una variabl ### Comparaciones Nulas -Al hacer la actualización en un subgrafo, a veces pueden aparecer errores como estos: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -141,7 +141,7 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +Para solucionarlo puedes simplemente cambiar la declaración `if` por algo así: ```typescript if (!decimals) { @@ -155,7 +155,7 @@ Lo mismo ocurre si haces != en lugar de ==. 
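The null-comparison fix above can be sketched in plain TypeScript. The `describe` function and its messages are illustrative; the point is the shape of the check on a nullable value.

```typescript
// Stand-in for a nullable value read from a Subgraph entity field.
function describe(decimals: number | null): string {
  // Explicit null check. Note that a truthiness check like `if (!decimals)`
  // would also trigger for 0, so prefer `== null` when 0 is a valid value.
  if (decimals == null) {
    return 'missing'
  }
  return `has ${decimals} decimals`
}

console.log(describe(null)) // "missing"
console.log(describe(18)) // "has 18 decimals"
```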
### Casting -The common way to do casting before was to just use the `as` keyword, like this: +La forma común de hacer el casting antes era simplemente usar la palabra clave `as`, de la siguiente forma: ```typescript let byteArray = new ByteArray(10) @@ -164,7 +164,7 @@ let uint8Array = byteArray as Uint8Array // equivalent to: byteArray Sin embargo, esto solo funciona en dos casos: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Casting de primitivas (entre tipos como `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); - Upcasting en la herencia de clases (subclase → superclase) Ejemplos: @@ -184,7 +184,7 @@ let bytes = new Bytes(2) // bytes // same as: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +Hay dos escenarios en los que puedes querer hacer casting, pero usar `as`/`var` **no es seguro**: - Downcasting en la herencia de clases (superclase → subclase) - Entre dos tipos que comparten una superclase @@ -206,7 +206,7 @@ let bytes = new Bytes(2) // bytes // breaks in runtime :( ``` -For those cases, you can use the `changetype` function: +Para esos casos, puedes usar la función `changetype`: ```typescript // downcasting on class inheritance @@ -217,7 +217,7 @@ changetype(uint8Array) // works :) ``` ```typescript -// between two types that share a superclass +// entre dos tipos que comparten una superclase class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} @@ -225,7 +225,7 @@ let bytes = new Bytes(2) changetype(bytes) // works :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. +Si solo quieres eliminar la anulabilidad, puedes seguir usando el operador `as` (o `variable`), pero asegúrate de que el valor no puede ser nulo, de lo contrario se romperá.
```typescript // eliminar anulabilidad @@ -238,7 +238,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +Para el caso de la anulabilidad se recomienda echar un vistazo al [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), hará que tu código sea más limpio 🙂 También hemos añadido algunos métodos estáticos en algunos tipos para facilitar el casting, son: @@ -249,7 +249,7 @@ También hemos añadido algunos métodos estáticos en algunos tipos para facili ### Comprobación de anulabilidad con acceso a la propiedad -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +Para usar el [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) puedes usar la declaración `if` o el operador ternario (`?` y `:`) así: ```typescript let something: string | null = 'data' @@ -267,7 +267,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +Sin embargo eso solo funciona cuando estás haciendo el `if` / ternario en una variable, no en un acceso a una propiedad, como este: ```typescript class Container { @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -Hemos abierto un tema en el compilador de AssemblyScript para esto, pero por ahora si haces este tipo de operaciones en tus mapeos de subgrafos, deberías cambiarlos para hacer una comprobación de nulos antes de ello.
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Compilará pero se romperá en tiempo de ejecución, eso ocurre porque el valor no ha sido inicializado, así que asegúrate de que tu subgrafo ha inicializado sus valores, así: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized @@ -381,7 +381,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +Tendrás que asegurarte de inicializar el valor `total.amount`, porque si intentas acceder como en la última línea para la suma, se bloqueará.
Así que o bien la inicializas primero: ```typescript let total = Total.load('latest') @@ -394,7 +394,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +O simplemente puedes cambiar tu esquema GraphQL para no usar un tipo anulable para esta propiedad, entonces la inicializaremos como cero en el paso `codegen` 😉 ```graphql type Total @entity { @@ -425,7 +425,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +El compilador dará un error porque tienes que añadir un inicializador para las propiedades que son clases, o añadir el operador `!`: ```typescript export class Something { @@ -451,7 +451,7 @@ export class Something { ### Inicialización de Array -The `Array` class still accepts a number to initialize the length of the list, however you should take care because operations like `.push` will actually increase the size instead of adding to the beginning, for example: +La clase `Array` sigue aceptando un número para inicializar la longitud de la lista, sin embargo hay que tener cuidado porque operaciones como `.push` en realidad aumentarán el tamaño en lugar de añadirlo al principio, por ejemplo: ```typescript let arr = new Array(5) // ["", "", "", "", ""] @@ -465,7 +465,7 @@ Dependiendo de los tipos que estés utilizando, por ejemplo los anulables, y de ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` -To actually push at the 
beginning you should either, initialize the `Array` with size zero, like this: +Para realmente empujar al principio deberías o bien, inicializar el `Array` con tamaño cero, así: ```typescript let arr = new Array(0) // [] @@ -483,7 +483,7 @@ arr[0] = 'something' // ["something", "", "", "", ""] ### Esquema GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +Esto no es un cambio directo de AssemblyScript, pero es posible que tengas que actualizar tu archivo `schema.graphql`. Ahora ya no puedes definir campos en tus tipos que sean Listas No Anulables. Si tienes un esquema como este: @@ -498,7 +498,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Tendrás que añadir un `!` al miembro del tipo Lista, así: ```graphql type Something @entity { @@ -511,14 +511,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +Esto cambió debido a las diferencias de anulabilidad entre las versiones de AssemblyScript, y está relacionado con el archivo `src/generated/schema.ts` (ruta por defecto, puede que lo hayas cambiado). ### Otros -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Se alinearon `Map#set` y `Set#add` con la especificación, devolviendo `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Los arrays ya no heredan de ArrayBufferView, sino que ahora son distintos ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Las clases inicializadas a partir de objetos literales ya no pueden definir un constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- El resultado de una operación binaria `**` es ahora el entero denominador común si ambos operandos son enteros.
Anteriormente, el resultado era un flotante como si se llamara a `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Conversión de `NaN` a `false` al hacer casting a `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- Al desplazar un valor entero pequeño de tipo `i8`/`u8` o `i16`/`u16`, sólo los 3 o 4 bits menos significativos del valor RHS afectan al resultado, de forma análoga al resultado de un `i32.shl` que sólo se ve afectado por los 5 bits menos significativos del valor RHS. Ejemplo: `someI8 << 8` previamente producía el valor `0`, pero ahora produce `someI8` debido a enmascarar el RHS como `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Corrección de errores en las comparaciones de strings relacionales cuando los tamaños difieren ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx index 55801738ddca..e6d159b3f84c 100644 --- a/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/es/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guía de migración de Validaciones GraphQL +title: GraphQL Validations Migration Guide --- Pronto `graph-node` admitirá una cobertura del 100% de la [especificación de validaciones GraphQL](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Para ser compatible con esas validaciones, por favor sigue la guía de migración Puedes utilizar la herramienta de migración CLI para encontrar cualquier problema en tus operaciones GraphQL y solucionarlo.
Alternativamente, puedes actualizar el endpoint de tu cliente GraphQL para usar el endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Probar tus consultas contra este endpoint te ayudará a encontrar los problemas en tus consultas. -> No todos los subgrafos deberán migrarse, si estás utilizando [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) o [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ya se aseguran de que tus consultas sean válidas. +> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Herramienta de migración de la línea de comandos @@ -406,6 +406,7 @@ query { user { id image # 'image' requiere un conjunto de selección para subcampos! + } } ``` diff --git a/website/src/pages/es/resources/roles/curating.mdx b/website/src/pages/es/resources/roles/curating.mdx index da189f62bf69..a3ec7ae0ce5e 100644 --- a/website/src/pages/es/resources/roles/curating.mdx +++ b/website/src/pages/es/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curación --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network.
Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS).
Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate.
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Cómo señalar -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curador puede optar por señalar una versión especifica de un subgrafo, o puede optar por que su señal migre automáticamente a la versión de producción mas reciente de ese subgrafo. 
Ambas son estrategias válidas y tienen sus pros y sus contras. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Hacer que tu señal migre automáticamente a la más reciente compilación de producción puede ser valioso para asegurarse de seguir acumulando tarifas de consulta. Cada vez que curas, se incurre en un impuesto de curación del 1%. También pagarás un impuesto de curación del 0,5% en cada migración. Se desaconseja a los desarrolladores de Subgrafos que publiquen con frecuencia nuevas versiones - tienen que pagar un impuesto de curación del 0,5% en todas las acciones de curación auto-migradas. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. 
## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Riesgos 1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. 
Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. - - Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá un impuesto de curación del 0.5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Preguntas frecuentes sobre Curación ### 1. ¿Qué porcentaje de las tasas de consulta ganan los curadores? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. 
This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. ¿Cómo decido qué subgrafos son de alta calidad para señalar? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. 
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +### 3. What’s the cost of updating a Subgraph? +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +### 4. How often can I update my Subgraph? +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. ¿Puedo vender mis acciones de curación? diff --git a/website/src/pages/es/resources/roles/delegating/undelegating.mdx b/website/src/pages/es/resources/roles/delegating/undelegating.mdx index 792d140be411..4d06fe0e1b37 100644 --- a/website/src/pages/es/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/es/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer).
Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. 
See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Recursos Adicionales diff --git a/website/src/pages/es/resources/subgraph-studio-faq.mdx b/website/src/pages/es/resources/subgraph-studio-faq.mdx index 14174cc468bf..1d2ebbae57a6 100644 --- a/website/src/pages/es/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/es/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Preguntas Frecuentes sobre Subgraph Studio ## 1. ¿Qué es Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. ¿Cómo creo una clave API? @@ -12,20 +12,20 @@ To create an API, navigate to Subgraph Studio and connect your wallet. You will ## 3. ¿Puedo crear múltiples claves de API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +¡Sí! Puedes crear varias claves de API para usar en diferentes proyectos. Consulta el enlace [aquí](https://thegraph.com/studio/apikeys/). ## 4. ¿Cómo restrinjo un dominio para una clave API? Después de crear una clave de API, en la sección Seguridad, puedes definir los dominios que pueden consultar una clave de API específica. -## 5. ¿Puedo transferir mi subgrafo a otro propietario? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. 
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Ten en cuenta que ya no podrás ver o editar el subgrafo en Studio una vez que haya sido transferido. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. ¿Cómo encuentro URLs de consulta para subgrafos si no soy el desarrollador del subgrafo que quiero usar? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Recuerda que puedes crear una clave API y consultar cualquier subgrafo publicado en la red, incluso si tú mismo construyes un subgrafo. Estas consultas a través de la nueva clave API, son consultas pagadas como cualquier otra en la red. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network.
diff --git a/website/src/pages/es/resources/tokenomics.mdx b/website/src/pages/es/resources/tokenomics.mdx index cd30274637ea..a15d15155fd5 100644 --- a/website/src/pages/es/resources/tokenomics.mdx +++ b/website/src/pages/es/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Descripción -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curadores - Encuentran los mejores subgrafos para los Indexadores +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexadores: Son la columna vertebral de los datos de la blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. 
Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9% and 12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creación de un subgrafo +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Consulta de un subgrafo existente +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/es/sps/introduction.mdx b/website/src/pages/es/sps/introduction.mdx index 344648a4c8a4..4340733cfc84 100644 --- a/website/src/pages/es/sps/introduction.mdx +++ b/website/src/pages/es/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introducción a los Subgrafos Impulsados por Substreams sidebarTitle: Introducción --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. 
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Descripción -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics Existen dos métodos para habilitar esta tecnología: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Recursos Adicionales @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/es/sps/sps-faq.mdx b/website/src/pages/es/sps/sps-faq.mdx index 592bdff3db63..dd7685e1a4be 100644 --- a/website/src/pages/es/sps/sps-faq.mdx +++ b/website/src/pages/es/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## ¿Qué son los subgrafos impulsados por Substreams? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.

-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.

-## ¿Cómo se diferencian los subgrafos impulsados por Substreams de los subgrafos tradicionales?
+## How are Substreams-powered Subgraphs different from Subgraphs?

Los subgrafos están compuestos por fuentes de datos que especifican eventos en la cadena de bloques, y cómo esos eventos deben ser transformados mediante controladores escritos en AssemblyScript. Estos eventos se procesan de manera secuencial, según el orden en el que ocurren los eventos onchain.

-En cambio, los subgrafos potenciados por Substreams tienen una única fuente de datos que hace referencia a un paquete de Substreams, que es procesado por Graph Node. Los Substreams tienen acceso a datos más granulares onchain en comparación con los subgrafos convencionales, y también pueden beneficiarse de un procesamiento masivamente paralelizado, lo que puede significar tiempos de procesamiento mucho más rápidos.
+By contrast, Substreams-powered Subgraphs have a single data source which references a Substreams package, which is processed by the Graph Node. 
Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.

-## ¿Cuáles son los beneficios de usar subgrafos potenciados por Substreams?
+## What are the benefits of using Substreams-powered Subgraphs?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.

## ¿Cuáles son los beneficios de Substreams?

@@ -35,7 +35,7 @@ Hay muchos beneficios al usar Substreams, incluyendo:

- Indexación de alto rendimiento: Indexación mucho más rápida mediante grandes clústeres de operaciones en paralelo (piensa en BigQuery).

-- Almacenamiento en cualquier lugar: Envía tus datos a donde quieras: PostgreSQL, MongoDB, Kafka, subgrafos, archivos planos, Google Sheets.
+- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. 
- Programable: Usa código para personalizar la extracción, realizar agregaciones en tiempo de transformación y modelar tu salida para múltiples destinos. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## ¿Dónde pueden los desarrolladores acceder a más información sobre los subgrafos potenciados por Substreams y Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? La [documentación de Substreams] (/substreams/introduction/) te enseñará cómo construir módulos de Substreams. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. La [última herramienta de Substreams Codegen] (https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) te permitirá iniciar un proyecto de Substreams sin necesidad de escribir código. ## ¿Cuál es el papel de los módulos de Rust en Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. 
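To make the parallel-execution point concrete, here is a conceptual sketch in TypeScript rather than Rust (all names are invented for illustration; this is not the Substreams API): a module is a pure function over a block, and because the combine step is associative, disjoint block ranges can be processed in parallel and merged afterwards.

```typescript
// Conceptual sketch only: a Substreams-style "map" module modeled as a pure
// function over a block, plus an associative combine step. Purity and
// associativity are what make parallel processing of block ranges possible.

interface Transfer { from: string; to: string; value: number }
interface Block { number: number; transfers: Transfer[] }

// "map" module: extract the total transferred value per block
function mapVolume(block: Block): number {
  return block.transfers.reduce((sum, t) => sum + t.value, 0)
}

// combine step: associative, so partial results can be merged in any order
function combine(a: number, b: number): number {
  return a + b
}

const blocks: Block[] = [
  { number: 1, transfers: [{ from: 'a', to: 'b', value: 5 }] },
  { number: 2, transfers: [{ from: 'b', to: 'c', value: 7 }, { from: 'c', to: 'a', value: 3 }] },
]

// Sequential processing and a simulated two-worker split give the same result.
const sequential = blocks.map(mapVolume).reduce(combine, 0)
const parallel = combine(mapVolume(blocks[1]), mapVolume(blocks[0]))
```

Sequential handlers in a conventional Subgraph cannot be split this way, which is why the parallelized Substreams model can index so much faster.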
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst

Cuando se usa Substreams, la composición ocurre en la capa de transformación, lo que permite que los módulos en caché sean reutilizados.

-As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.

## How can you build and deploy a Substreams-powered Subgraph?

After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).

-## ¿Dónde puedo encontrar ejemplos de Substreams y Subgrafos potenciados por Substreams?
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?

-Puedes visitar [este repositorio de Github] (https://github.com/pinax-network/awesome-substreams) para encontrar ejemplos de Substreams y Subgrafos potenciados por Substreams.
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.

-## ¿Qué significan los Substreams y los subgrafos impulsados por Substreams para The Graph Network? 
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/es/sps/triggers.mdx b/website/src/pages/es/sps/triggers.mdx index a0b15ced3b13..16db4057a732 100644 --- a/website/src/pages/es/sps/triggers.mdx +++ b/website/src/pages/es/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Descripción -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -El siguiente código demuestra cómo definir una función 'handleTransactions' en un controlador de subgraph. Esta función recibe bytes sin procesar de Substreams como parámetro y los decodifica en un objeto 'Transactions'. Para cada transacción, se crea una nueva entidad en el subgrafo. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. 
```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. Los bytes que contienen los datos de Substreams se decodifican en el objeto 'Transactions' generado, y este objeto se utiliza como cualquier otro objeto de AssemblyScript. 2. Iterando sobre las transacciones -3. Crear una nueva entidad de subgrafo para cada transacción. +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Recursos Adicionales diff --git a/website/src/pages/es/sps/tutorial.mdx b/website/src/pages/es/sps/tutorial.mdx index 0c289f179d4b..d989932c87e1 100644 --- a/website/src/pages/es/sps/tutorial.mdx +++ b/website/src/pages/es/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Configurar un Subgrafo Potenciado por Substreams en Solana' +title: "Tutorial: Configurar un Subgrafo Potenciado por Substreams en Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
## Comenzar @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Paso 2: Generar el Manifiesto del Subgrafo -Una vez que el proyecto esté inicializado, genera un manifiesto de subgraph ejecutando el siguiente comando en el Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgrafo @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Paso 3: Definir Entidades en schema.graphql -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ Este esquema define una entidad 'MyTransfer' con campos como 'id', 'amount', 'so With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ Para generar objetos Protobuf en AssemblyScript, ejecuta el siguiente comando: npm run protogen ``` -Este comando convierte las definiciones de Protobuf en AssemblyScript, lo que te permite usarlas en el controlador del subgrafo. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! 
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/es/subgraphs/_meta-titles.json b/website/src/pages/es/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/es/subgraphs/_meta-titles.json +++ b/website/src/pages/es/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/es/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
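The cost described in the TLDR can be sketched with a toy TypeScript model (all names are invented for illustration; this is not graph-node code): a handler that needs an `eth_call` pays one RPC round trip per event, while an event-only handler pays none.

```typescript
// Toy model of handler cost: count the simulated RPC round trips each
// handler style requires while processing the same stream of events.

let rpcCalls = 0

// stand-in for an eth_call: every invocation is a network round trip
function simulatedEthCall(contract: string, method: string): string {
  rpcCalls++
  return 'pool-123'
}

interface TransferEvent { from: string; to: string; value: number; pool?: string }

function handlerWithEthCall(e: TransferEvent): string {
  // the data is not in the event, so the handler must ask the node
  return simulatedEthCall('0xToken', 'getPoolInfo')
}

function handlerEventOnly(e: TransferEvent): string {
  // the contract emits the data we need; no external call required
  return e.pool ?? 'unknown'
}

const events: TransferEvent[] = Array.from({ length: 1000 }, (_, i) => ({
  from: 'a', to: 'b', value: i, pool: 'pool-123',
}))

events.forEach(handlerEventOnly)
const callsWithoutEth = rpcCalls // event-only indexing: zero RPC round trips

events.forEach(handlerWithEthCall)
const callsWithEth = rpcCalls // one round trip per event: indexing is now bound by node latency
```

With a real node, each of those round trips adds tens to hundreds of milliseconds, which is why eliminating them improves indexing speed so dramatically.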
## Why Avoiding `eth_calls` Is a Best Practice

-Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed.

### What Does an eth_call Look Like?

-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:

```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```

-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.

## How to Eliminate `eth_calls`

@@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```

-With this update, the subgraph can directly index the required data without external calls:
+With this update, the Subgraph can directly index the required data without external calls:

```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c

The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call.

-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.

## Conclusion

-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/es/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
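The Post/Comment relationship above can be sketched conceptually in TypeScript (illustrative names only; this is not graph-node internals): the child entity stores the foreign key, the parent stores no array at all, and the derived field is resolved by a lookup at query time.

```typescript
// Conceptual model of `comments: [Comment!]! @derivedFrom(field: "post")`:
// only the Comment side stores the relationship; Post stays small no matter
// how many comments accumulate.

interface Post { id: string; title: string }
interface Comment { id: string; post: string; body: string }

const posts: Post[] = [{ id: 'p1', title: 'Hello' }]
const comments: Comment[] = [
  { id: 'c1', post: 'p1', body: 'First!' },
  { id: 'c2', post: 'p1', body: 'Nice post' },
]

// Derived field: computed from the child rows, never stored on Post
function derivedComments(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId)
}

// Reverse lookup from the child side is just the stored foreign key
function postOf(comment: Comment): Post | undefined {
  return posts.find((p) => p.id === comment.post)
}

const result = derivedComments('p1').map((c) => c.id)
const parent = postOf(comments[0])?.id
```

Because nothing is appended to a growing array on `Post`, writes stay cheap as the comment count grows, which is the indexing win the directive provides.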
diff --git a/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx index 0f85dfc8acf6..39750e51189d 100644 --- a/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/es/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Descripción -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Recursos Adicionales - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..c2bd2e50b23c 100644 --- a/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/es/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -22,7 +22,7 @@ type Transfer @entity(immutable: true) { By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. -Immutable Entities structures will not change in the future. 
An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging on-chain event data, such as a `Transfer` event being logged as a `Transfer` entity. ### Under the hood @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. 
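The size and shape difference between the two ID styles can be sketched in plain TypeScript (run with Node). `Buffer` stands in for the graph-ts `Bytes` type here, and the little-endian `i32` encoding is an assumption of this illustration, not a guarantee about the internals of `concatI32()`:

```typescript
// Hypothetical event data: a 32-byte transaction hash and a log index.
const txHash = Buffer.from("ab".repeat(32), "hex"); // 32 bytes
const logIndex = 5;

// String-style ID, as in:
//   event.transaction.hash.toHex() + "-" + event.logIndex.toString()
const stringId = "0x" + txHash.toString("hex") + "-" + logIndex.toString();

// Bytes-style ID, analogous to:
//   event.transaction.hash.concatI32(event.logIndex.toI32())
// Append the index as a 32-bit integer to the raw hash bytes.
const indexBytes = Buffer.alloc(4);
indexBytes.writeInt32LE(logIndex);
const bytesId = Buffer.concat([txHash, indexBytes]);

console.log(stringId.length); // 68 characters, variable-width string
console.log(bytesId.length); // 36 bytes, fixed width
```

The fixed-width bytes form is roughly half the size of the 68-character string and avoids per-row string handling, which is where the indexing and query gains come from.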
Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/src/pages/es/subgraphs/best-practices/pruning.mdx b/website/src/pages/es/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/es/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/es/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. 
`indexerHints` has three `prune` options:

-- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
- `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain.
- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.

-We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:

```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
  file: ./schema.graphql
indexerHints:
@@ -39,7 +39,7 @@ dataSources:

## Conclusion

-Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/best-practices/timeseries.mdx b/website/src/pages/es/subgraphs/best-practices/timeseries.mdx index 991ac69c38b7..bfda432f7555 100644 --- a/website/src/pages/es/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/es/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Descripción @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: @@ -51,7 +55,7 @@ Ejemplo: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Ejemplo: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
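Once defined, an aggregation like `Stats` is read back by passing an interval argument in the query. A minimal sketch, in which field names follow the example schema above and the exact argument shape may vary by graph-node version:

```graphql
{
  stats(interval: "hour") {
    id
    timestamp
    sum
  }
}
```

Each returned row corresponds to one hourly bucket, carrying the summed `amount` for that hour.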
### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/es/subgraphs/billing.mdx b/website/src/pages/es/subgraphs/billing.mdx index b2210285e434..d8535da9fcb7 100644 --- a/website/src/pages/es/subgraphs/billing.mdx +++ b/website/src/pages/es/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Facturación ## Planes de consultas -Existen dos planes para usar al consultar subgrafos en The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. 
- **Plan Gratuito**: El Plan Gratuito incluye 100.000 consultas mensuales gratuitas con acceso completo al entorno de pruebas de Subgraph Studio. Este plan está diseñado para aficionados, participantes de hackatones y aquellos con proyectos paralelos que deseen probar The Graph antes de escalar su dapp. - Plan de Expansión: El Plan de Expansión incluye todo lo que ofrece el Plan Gratuito, pero todas las consultas que excedan las 100.000 consultas mensuales requieren pagos con GRT o tarjeta de crédito. El Plan de Expansión es lo suficientemente flexible como para cubrir las necesidades de equipos con dapps consolidadas en una variedad de casos de uso. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Pagos de consultas con tarjeta de crédito @@ -59,7 +61,7 @@ Una vez que transfieras GRT, puedes agregarlo a tu saldo de facturación. 5. Selecciona "Cripto". Actualmente, GRT es la única criptomoneda aceptada en The Graph Network. 6. Selecciona la cantidad de meses que deseas pagar por adelantado. - Pagar por adelantado no te compromete a un uso futuro. Solo se te cobrará por lo que utilices, y puedes retirar tu saldo en cualquier momento. -7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. +7. Elige la red desde la cual vas a depositar tu GRT. GRT en Arbitrum o Ethereum ambas opciones son aceptables. 8. Haz clic en "Permitir acceso a GRT" y luego especifica la cantidad de GRT que se puede tomar de tu wallet. - Si estás pagando por adelantado varios meses, debes permitirle acceso a la cantidad que corresponde con ese monto. Esta interacción no tendrá costo de gas. 9. Por último, haz clic en "Agregar GRT al saldo de facturación". Esta transacción requerirá ETH en Arbitrum para cubrir los costos de gas. @@ -103,70 +105,70 @@ Esta será una guía paso a paso para comprar GRT en Coinbase. 2. 
Una vez que hayas creado una cuenta, necesitarás verificar tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los intercambios de criptomonedas centralizados o con custodia de activos. 3. Una vez que hayas verificado tu identidad, puedes comprar GRT. Para hacerlo, haz clic en el botón "Comprar/Vender" en la parte superior derecha de la página. 4. Selecciona la moneda que deseas comprar. Selecciona GRT. -5. Select the payment method. Select your preferred payment method. -6. Select the amount of GRT you want to purchase. -7. Review your purchase. Review your purchase and click "Buy GRT". -8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page. - - Click on the "Send" button next to the GRT account. - - Enter the amount of GRT you want to send and the wallet address you want to send it to. - - Click "Continue" and confirm your transaction. -Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet. - -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +5. Selecciona el método de pago. Elige tu método de pago preferido. +6. Selecciona la cantidad de GRT que deseas comprar. +7. Revisa tu compra. Revisa los detalles de tu compra y haz clic en "Comprar GRT". +8. Confirma tu compra. Confirma tu compra y habrás adquirido GRT con éxito. +9. Puedes transferir el GRT desde tu cuenta a tu billetera, como [MetaMask](https://metamask.io/). + - Para transferir el GRT a tu billetera, haz clic en el botón "Cuentas" en la parte superior derecha de la página. 
+ - Haz clic en el botón "Enviar" junto a la cuenta de GRT.
+ - Ingresa la cantidad de GRT que deseas enviar y la dirección de la wallet a la que quieres enviarlo.
+ - Haz clic en "Continuar" y confirma tu transacción. Ten en cuenta que, para montos de compra más grandes, Coinbase puede requerir que esperes de 7 a 10 días antes de transferir la cantidad completa a una wallet.
+
+Puedes obtener más información sobre cómo obtener GRT en Coinbase [aquí](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).

### Binance

-This will be a step by step guide for purchasing GRT on Binance.
+Esta será una guía paso a paso para comprar GRT en Binance.

-1. Go to [Binance](https://www.binance.com/en) and create an account.
+1. Ve a [Binance](https://www.binance.com/en) y crea una cuenta.
2. Una vez que hayas creado una cuenta, necesitarás verificar tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los intercambios de criptomonedas centralizados o con custodia de activos.
-3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner.
-4. You will be taken to a page where you can select the currency you want to purchase. Select GRT.
-5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more.
-6. Select the amount of GRT you want to purchase.
-7. Review your purchase and click "Buy GRT".
-8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet.
-9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist.
- - Click on the "wallet" button, click withdraw, and select GRT.
- - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to.
- - Click "Continue" and confirm your transaction.
-
-You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
+3. Una vez que hayas verificado tu identidad, puedes comprar GRT. Para hacerlo, haz clic en el botón "Comprar ahora" en el banner de la página de inicio.
+4. Serás redirigido a una página donde podrás seleccionar la moneda que deseas comprar. Selecciona GRT.
+5. Selecciona tu método de pago preferido. Podrás pagar con diferentes monedas fiduciarias, como euros, dólares estadounidenses y más.
+6. Selecciona la cantidad de GRT que deseas comprar.
+7. Revisa tu compra y haz clic en "Comprar GRT".
+8. Confirma tu compra y podrás ver tu GRT en tu wallet Spot de Binance.
+9. Puedes retirar el GRT de tu cuenta a tu wallet, como [MetaMask](https://metamask.io/).
+ - Para [retirar](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) el GRT a tu wallet, añade la dirección de tu wallet a la lista de retiros autorizados.
+ - Haz clic en el botón "wallet", haz clic en retirar y selecciona GRT.
+ - Ingresa la cantidad de GRT que deseas enviar y la dirección de wallet autorizada a la que quieres enviarlo.
+ - Haz clic en "Continuar" y confirma tu transacción.
+
+Puedes obtener más información sobre cómo obtener GRT en Binance [aquí](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).

### Uniswap

-This is how you can purchase GRT on Uniswap.
+Así es como puedes comprar GRT en Uniswap.

-1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet.
-2. Select the token you want to swap from. Select ETH.
-3.
Select the token you want to swap to. Select GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -4. Enter the amount of ETH you want to swap. -5. Click "Swap". -6. Confirm the transaction in your wallet and you wait for the transaction to process. +1. Ve a [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) y conecta tu wallet. +2. Selecciona el token del que deseas intercambiar. Selecciona ETH. +3. Selecciona el token al que deseas intercambiar. Selecciona GRT. + - Asegúrate de que estás intercambiando por el token correcto. La dirección del contrato inteligente de GRT en Arbitrum One es: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +4. Ingresa la cantidad de ETH que deseas intercambiar. +5. Haz clic en "Intercambiar". +6. Confirma la transacción en tu wallet y espera a que la transacción se procese. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Puedes obtener más información sobre cómo obtener GRT en Uniswap [aquí](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). -## Getting Ether +## Obtener Ether -This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts. +Esta sección te mostrará cómo obtener Ether (ETH) para pagar las tarifas de transacción o los costos de gas. ETH es necesario para ejecutar operaciones en la red de Ethereum, como transferir tokens o interactuar con contratos. ### Coinbase -This will be a step by step guide for purchasing ETH on Coinbase. +Esta será una guía paso a paso para comprar ETH en Coinbase. 1. 
Ve a [Coinbase](https://www.coinbase.com/) y crea una cuenta. 2. Una vez que hayas creado una cuenta, verifica tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los exchanges centralizados o que mantienen custodia de criptomonedas. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page. +3. Una vez que hayas verificado tu identidad, compra ETH haciendo clic en el botón "Comprar/Vender" en la esquina superior derecha de la página. 4. Selecciona la moneda que deseas comprar. Elige ETH. 5. Selecciona tu método de pago preferido. 6. Ingresa la cantidad de ETH que deseas comprar. 7. Revisa tu compra y haz clic en "Comprar ETH". -8. Confirm your purchase and you will have successfully purchased ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). - - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page. +8. Confirma tu compra y habrás adquirido ETH con éxito. +9. Puedes transferir el ETH desde tu cuenta de Coinbase a tu billetera, como [MetaMask](https://metamask.io/). + - Para transferir el ETH a tu billetera, haz clic en el botón "Cuentas" en la esquina superior derecha de la página. - Haz clic en el botón "Enviar" junto a la cuenta de ETH. - Ingresa la cantidad de ETH que deseas enviar y la dirección de la wallet a la que quieres enviarlo. - Asegúrate de que estás enviando a la dirección de tu wallet de Ethereum en Arbitrum One. @@ -178,18 +180,18 @@ Puedes obtener más información sobre cómo adquirir ETH en Coinbase [aquí](ht Esta será una guía paso a paso para comprar ETH en Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Ve a [Binance](https://www.binance.com/en) y crea una cuenta. 2. 
Una vez que hayas creado una cuenta, verifica tu identidad a través de un proceso conocido como KYC (o Conoce a tu Cliente). Este es un procedimiento estándar para todos los exchanges centralizados o que mantienen custodia de criptomonedas. 3. Una vez que hayas verificado tu identidad, compra ETH haciendo clic en el botón "Comprar ahora" en el banner de la página de inicio. 4. Selecciona la moneda que deseas comprar. Elige ETH. -5. Selecciona tu método de pago preferido. +5. Selecciona tu método de pago de preferencia. 6. Ingresa la cantidad de ETH que deseas comprar. 7. Revisa tu compra y haz clic en "Comprar ETH". -8. Confirm your purchase and you will see your ETH in your Binance Spot Wallet. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +8. Confirma tu compra y verás tu ETH en tu Wallet Spot de Binance. +9. Puedes retirar el ETH de tu cuenta a tu wallet, como [MetaMask](https://metamask.io/). - Para retirar el ETH a tu wallet, añade la dirección de tu wallet a la lista de direcciones autorizadas para retiros. - - Click on the "wallet" button, click withdraw, and select ETH. - - Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to. + - Haz clic en el botón "wallet", luego en "retirar" y selecciona ETH. + - Ingresa la cantidad de ETH que deseas enviar y la dirección de wallet autorizada a la que quieres enviarlo. - Asegúrate de que estás enviando a la dirección de tu wallet de Ethereum en Arbitrum One. - Haz clic en "Continuar" y confirma tu transacción. 
diff --git a/website/src/pages/es/subgraphs/cookbook/arweave.mdx b/website/src/pages/es/subgraphs/cookbook/arweave.mdx
index c0333e3dadf8..71c58f8afabd 100644
--- a/website/src/pages/es/subgraphs/cookbook/arweave.mdx
+++ b/website/src/pages/es/subgraphs/cookbook/arweave.mdx
@@ -2,7 +2,7 @@
title: Construyendo Subgrafos en Arweave
---

-> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!

En esta guía, aprenderás a construir y deployar subgrafos para indexar la blockchain de Arweave.

@@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are

Para poder construir y deployar subgrafos Arweave, necesita dos paquetes:

-1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
-2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.

## Componentes del subgrafo

-Hay tres componentes de un subgrafo:
+There are three components of a Subgraph:

### 1.
Manifest - `subgraph.yaml` @@ -40,37 +40,37 @@ Define las fuentes de datos de interés y cómo deben ser procesadas. Arweave es Aquí defines qué datos quieres poder consultar después de indexar tu Subgrafo usando GraphQL. Esto es en realidad similar a un modelo para una API, donde el modelo define la estructura de un cuerpo de solicitud. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` Esta es la lógica que determina cómo los datos deben ser recuperados y almacenados cuando alguien interactúa con las fuentes de datos que estás escuchando. Los datos se traducen y se almacenan basándose en el esquema que has listado. -Durante el desarrollo del subgrafo hay dos comandos clave: +During Subgraph development there are two key commands: ``` -$ graph codegen # genera tipos a partir del archivo de esquema identificado en el manifiesto -$ graph build # genera Web Assembly a partir de los archivos de AssemblyScript y prepara todos los archivos de subgrafo en una carpeta /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Definición de manifiesto del subgrafo -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 -descripción: Indexación de bloques Arweave -esquema: - file: ./schema.graphql # link to the schema file +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file dataSources: - kind: arweave name: arweave-blocks network: arweave-mainnet # The Graph only supports Arweave Mainnet - source: + source: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Las fuentes de datos de Arweave introducen un campo opcional "source.owner", que es la clave pública de una billetera Arweave @@ -99,7 +99,7 @@ Las fuentes de datos de Arweave admiten dos tipos de handlers: ## Definición de esquema -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
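As a concrete illustration of such a schema definition, a minimal `schema.graphql` to pair with the Arweave block-and-transaction manifest above might look like the following (the entity and field names here are illustrative sketches, not mandated by The Graph):

```graphql
# One entity per indexed Arweave block
type Block @entity(immutable: true) {
  id: ID! # the block's indep_hash
  height: BigInt!
  timestamp: BigInt!
}

# One entity per indexed Arweave transaction
type Transaction @entity(immutable: true) {
  id: ID! # the transaction id
  block: Block! # link back to the containing block
  owner: Bytes!
}
```

Each `@entity` type becomes queryable via GraphQL once the Subgraph is indexed, and the `block` field gives `Transaction` a traversable relationship to its `Block`.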
## Asignaciones de AssemblyScript @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Consultando un subgrafo de Arweave -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Subgrafos de ejemplo -A continuación se muestra un ejemplo de subgrafo como referencia: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### ¿Puede un subgrafo indexar Arweave y otras cadenas? +### Can a Subgraph index Arweave and other chains? -No, un subgrafo sólo puede admitir fuentes de datos de una cadena/red. +No, a Subgraph can only support data sources from one chain/network. ### ¿Puedo indexar los archivos almacenados en Arweave? Actualmente, The Graph sólo indexa Arweave como blockchain (sus bloques y transacciones). -### ¿Puedo identificar los paquetes de Bundlr en mi subgrafo? +### Can I identify Bundlr bundles in my Subgraph? Actualmente no se admite. 
@@ -188,7 +188,7 @@ El source.owner puede ser la clave pública del usuario o la dirección de la cu ### ¿Cuál es el formato actual de encriptación? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/es/subgraphs/cookbook/enums.mdx b/website/src/pages/es/subgraphs/cookbook/enums.mdx index 29b5b2d0bf38..8a3da763d6e2 100644 --- a/website/src/pages/es/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/es/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. 
By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. @@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/es/subgraphs/cookbook/grafting.mdx b/website/src/pages/es/subgraphs/cookbook/grafting.mdx index 4a98c7ab352b..3717e35b3d8a 100644 --- a/website/src/pages/es/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/es/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Reemplazar un contrato y mantener su historia con el grafting --- -En esta guía, aprenderás a construir y deployar nuevos subgrafos mediante grafting (injerto) de subgrafos existentes. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## ¿Qué es el Grafting? -El grafting reutiliza los datos de un subgrafo existente y comienza a indexarlo en un bloque posterior. 
Esto es útil durante el desarrollo para superar rápidamente errores simples en los mapeos o para hacer funcionar temporalmente un subgrafo existente después de que haya fallado. También se puede utilizar cuando se añade un feature a un subgrafo que tarda en indexarse desde cero. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede diferir del esquema del subgrafo base de las siguientes maneras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Agrega o elimina tipos de entidades - Elimina los atributos de los tipos de entidad @@ -22,38 +22,38 @@ Para más información, puedes consultar: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. 
## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Construcción de un subgrafo existente -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). 
To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Definición de manifiesto del subgrafo -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Definición del manifiesto de grafting -El grafting requiere añadir dos nuevos items al manifiesto original del subgrafo: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploy del subgrafo base -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Una vez que hayas terminado, verifica que el subgrafo se está indexando correctamente. Si ejecutas el siguiente comando en The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ Devuelve algo como esto: } ``` -Una vez que hayas verificado que el subgrafo se está indexando correctamente, puedes actualizar rápidamente el subgrafo con grafting. 
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploy del subgrafo grafting El subgraph.yaml de sustitución del graft tendrá una nueva dirección de contrato. Esto podría ocurrir cuando actualices tu dApp, vuelvas a deployar un contrato, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Una vez que hayas terminado, verifica que el subgrafo se está indexando correctamente. Si ejecutas el siguiente comando en The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. 
Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ Debería devolver lo siguiente: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. 
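To sanity-check the combined history, you can also query the `Withdrawal` entities from the manifest directly in the Playground. A query along these lines should return events from both the old and the new contract (the fields other than `id` are assumptions for illustration — use whatever fields your `Withdrawal` entity actually defines):

```graphql
{
  withdrawals(first: 5, orderBy: id) {
    id
    amount # assumed field; check your schema
    when   # assumed field; check your schema
  }
}
```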
## Recursos Adicionales diff --git a/website/src/pages/es/subgraphs/cookbook/near.mdx b/website/src/pages/es/subgraphs/cookbook/near.mdx index 67db2b1278cb..f22a497db7e1 100644 --- a/website/src/pages/es/subgraphs/cookbook/near.mdx +++ b/website/src/pages/es/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Construcción de subgrafos en NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## ¿Qué es NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## ¿Qué son los subgrafos NEAR? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR Subgraphs: - Handlers de bloques: se ejecutan en cada nuevo bloque - Handlers de recibos: se realizan cada vez que se ejecuta un mensaje en una cuenta específica @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Construcción de un subgrafo NEAR -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Construir un subgrafo NEAR es muy similar a construir un subgrafo que indexa Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -Hay tres aspectos de la definición de subgrafo: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). 
**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -Durante el desarrollo del subgrafo hay dos comandos clave: +During Subgraph development there are two key commands: ```bash -$ graph codegen # genera tipos a partir del archivo de esquema identificado en el manifiesto -$ graph build # genera Web Assembly a partir de los archivos de AssemblyScript y prepara todos los archivos de subgrafo en una carpeta /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Definición de manifiesto del subgrafo -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ Las fuentes de datos NEAR admiten dos tipos de handlers: ### Definición de esquema -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### Asignaciones de AssemblyScript @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deployando un subgrafo NEAR -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -La configuración del nodo dependerá de dónde se implemente el subgrafo. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Una vez que se haya implementado su subgrafo, Graph Node lo indexará. Puede comprobar su progreso consultando el propio subgrafo: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ Pronto proporcionaremos más información sobre cómo ejecutar los componentes a ## Consultando un subgrafo NEAR -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Subgrafos de ejemplo -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### ¿Cómo funciona la beta? -El soporte NEAR está en versión beta, lo que significa que puede haber cambios en la API a medida que continuamos trabajando para mejorar la integración. Envíe un correo electrónico a near@thegraph.com para que podamos ayudarlo a crear subgrafos NEAR y mantenerte actualizado sobre los últimos desarrollos! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### ¿Puede un subgrafo indexar las cadenas NEAR y EVM? +### Can a Subgraph index both NEAR and EVM chains? -No, un subgrafo sólo puede admitir fuentes de datos de una cadena/red. +No, a Subgraph can only support data sources from one chain/network. -### ¿Pueden los subgrafos reaccionar a activadores más específicos? +### Can Subgraphs react to more specific triggers? Actualmente, solo se admiten los activadores de Bloque y Recibo. 
Estamos investigando activadores para llamadas a funciones a una cuenta específica. También estamos interesados en admitir activadores de eventos, una vez que NEAR tenga soporte nativo para eventos. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### ¿Pueden los subgrafos NEAR realizar view calls a cuentas NEAR durante las asignaciones? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? Esto no es compatible. Estamos evaluando si esta funcionalidad es necesaria para la indexación. -### ¿Puedo usar plantillas de fuente de datos en mi subgrafo NEAR? +### Can I use data source templates in my NEAR Subgraph? Esto no es compatible actualmente. Estamos evaluando si esta funcionalidad es necesaria para la indexación. -### Los subgrafos de Ethereum admiten versiones "pendientes" y "actuales", ¿cómo puedo implementar una versión "pendiente" de un subgrafo NEAR? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -La funcionalidad pendiente aún no es compatible con los subgrafos NEAR. Mientras tanto, puedes implementar una nueva versión en un subgrafo "nombrado" diferente y luego, cuando se sincroniza con el encabezado de la cadena, puedes volver a implementarlo en su subgrafo principal "nombrado", que usará el mismo ID de implementación subyacente, por lo que el subgrafo principal se sincronizará instantáneamente. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### Mi pregunta no ha sido respondida, ¿dónde puedo obtener más ayuda para crear subgrafos NEAR? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? 
-If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## Referencias diff --git a/website/src/pages/es/subgraphs/cookbook/polymarket.mdx b/website/src/pages/es/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/es/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/es/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. 
![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). 
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/es/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/es/subgraphs/cookbook/secure-api-keys-nextjs.mdx index 07b297aff006..f6b5193787c9 100644 --- a/website/src/pages/es/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/es/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Descripción -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. 
While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. -### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/es/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/es/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..2a0e82074855 --- /dev/null +++ b/website/src/pages/es/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Descripción + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. 
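In manifest terms, a Subgraph consumes another by declaring it as a data source of `kind: subgraph` instead of pointing at a contract. A minimal sketch of such a manifest fragment, using a placeholder deployment ID, name, network, and start block (none of these values come from this guide):

```yaml
# Hypothetical fragment of a composed Subgraph's subgraph.yaml.
# The 'Qm...' value stands in for the source Subgraph's real deployment ID.
specVersion: 1.3.0 # Subgraph composition requires specVersion 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # consume another Subgraph rather than a smart contract
    name: SourceSubgraph
    network: mainnet # must match the source Subgraph's network
    source:
      address: 'QmSourceSubgraphDeploymentId' # placeholder deployment ID
      startBlock: 1000000 # placeholder block to begin indexing from
```

Entity changes in the source Subgraph then arrive in this Subgraph's handlers as triggers.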
+ +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Comenzar + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. 
+ +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: > +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, improving both development and maintenance efficiency. + +## Recursos Adicionales + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). 
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/es/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/es/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..20282f97594c --- /dev/null +++ b/website/src/pages/es/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introducción + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Improve your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. 
**Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Comenzar + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, improving both development and maintenance efficiency. + +## Recursos Adicionales + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/es/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/es/subgraphs/cookbook/subgraph-debug-forking.mdx index 163a16d59e00..145fe815ede1 100644 --- a/website/src/pages/es/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/es/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Debugging rápido y sencillo de subgrafos mediante Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! ## ¿Bien, qué es? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_. ## ¡¿Qué?! ¿Cómo? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
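To make the mechanics concrete: that remote store is reachable at a GraphQL endpoint formed by appending the Subgraph's deployment ID to a base URL, which is what the `fork-base` option covered later in this article configures. A quick sketch, using a placeholder deployment ID:

```typescript
// Sketch of how the fork endpoint is derived from a fork base URL.
// The deployment ID below is a placeholder, not a real Subgraph.
const forkBase: string = 'https://api.thegraph.com/subgraphs/id/'
const deploymentId: string = 'QmEXAMPLEDEPLOYMENTID'

// Appending the ID to the base yields the GraphQL endpoint of the
// remote store that the forked Subgraph lazily reads entities from.
const forkEndpoint: string = forkBase + deploymentId
console.log(forkEndpoint) // https://api.thegraph.com/subgraphs/id/QmEXAMPLEDEPLOYMENTID
```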
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## ¡Por favor, muéstrame algo de código! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. La forma habitual de intentar una solución es: 1. Realiza un cambio en la fuente de mapeos, que crees que resolverá el problema (aunque sé que no lo hará). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Espera a que se sincronice. 4. 
Si se vuelve a romper vuelve a 1, de lo contrario: ¡Hurra! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Realiza un cambio en la fuente de mapeos, que crees que resolverá el problema. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. Si se vuelve a romper, vuelve a 1, de lo contrario: ¡Hurra! Ahora, puedes tener 2 preguntas: @@ -69,18 +69,18 @@ Ahora, puedes tener 2 preguntas: Y yo respondo: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Bifurcar es fácil, no hay necesidad de sudar: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! Entonces, esto es lo que hago: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/es/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/es/subgraphs/cookbook/subgraph-uncrashable.mdx index 59b33568a1f2..2794e6ab66d8 100644 --- a/website/src/pages/es/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/es/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Generador de código de subgrafo seguro --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## ¿Por qué integrarse con Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. 
Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -**User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +**User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions to the user's specification. - El marco también incluye una forma (a través del archivo de configuración) para crear funciones de establecimiento personalizadas, pero seguras, para grupos de variables de entidad. De esta forma, es imposible que el usuario cargue/utilice una entidad gráfica obsoleta y también es imposible olvidarse de guardar o configurar una variable requerida por la función. -Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +Warning logs are recorded indicating where there is a breach of Subgraph logic, helping patch the issue and ensure data accuracy. 
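To illustrate the kind of helper this generates: instead of `load` possibly returning null and crashing the handler, a generated "safe load" always hands back a usable entity. This is an illustrative sketch only — the names are hypothetical, real generated helpers target AssemblyScript and the Graph Node store, and a plain in-memory map stands in here:

```typescript
// Illustrative sketch of a "load-or-create" helper in the spirit of
// Subgraph Uncrashable's generated code (names are hypothetical).
interface Gravatar {
  id: string
  displayName: string
  imageUrl: string
}

const store = new Map<string, Gravatar>()

// Never returns null: a missing entity is created with sane defaults
// instead of letting the handler crash with "entity not found".
function safeLoadGravatar(id: string): Gravatar {
  let entity = store.get(id)
  if (entity === undefined) {
    entity = { id, displayName: '', imageUrl: '' } // configured defaults
    store.set(id, entity)
    console.warn(`Gravatar ${id} missing; created with default values`)
  }
  return entity
}

// A handler can now update the entity unconditionally:
const gravatar = safeLoadGravatar('0x1')
gravatar.displayName = 'Alice'
store.set(gravatar.id, gravatar)
```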
Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el comando codegen Graph CLI. @@ -26,4 +26,4 @@ Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el co graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/es/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/es/subgraphs/cookbook/transfer-to-the-graph.mdx index 339032915f35..03e79bc34d36 100644 --- a/website/src/pages/es/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/es/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Ejemplo -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
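A query against a published Subgraph is a plain GraphQL-over-HTTP POST to its gateway query URL. In this sketch the endpoint, API key placeholder, and the `punks` entity and its fields are illustrative assumptions rather than the actual schema — substitute the query URL shown at the top of your Subgraph's Explorer page:

```typescript
// Hedged sketch of querying through the gateway. The endpoint, API key, and
// entity/field names are placeholders -- copy the real query URL from the
// Subgraph's Explorer page in Subgraph Studio.
const endpoint =
  'https://gateway-arbitrum.network.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>'

const query = /* GraphQL */ `{
  punks(first: 5) {
    id
  }
}`

// The gateway accepts a JSON body of the form { query, variables? }.
const body = JSON.stringify({ query })

async function queryGateway(): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  })
  return res.json() // resolves to { data: ... } or { errors: [...] }
}
```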
### Recursos Adicionales -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/es/subgraphs/developing/_meta-titles.json b/website/src/pages/es/subgraphs/developing/_meta-titles.json index 01a91b09ed77..ba2fe22a0c4d 100644 --- a/website/src/pages/es/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/es/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "deploying": "Desplegando", + "publishing": "Publicando", + "managing": "Administrando" } diff --git a/website/src/pages/es/subgraphs/developing/creating/advanced.mdx b/website/src/pages/es/subgraphs/developing/creating/advanced.mdx index 63cf8f312906..eec792c562e4 100644 --- a/website/src/pages/es/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Descripción -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Errores no fatales -Los errores de indexación en subgrafos ya sincronizados provocarán, por defecto, que el subgrafo falle y deje de sincronizarse. Los subgrafos pueden ser configurados de manera alternativa para continuar la sincronización en presencia de errores, ignorando los cambios realizados por el handler que provocó el error. Esto da a los autores de los subgrafos tiempo para corregir sus subgrafos mientras las consultas continúan siendo servidas contra el último bloque, aunque los resultados serán posiblemente inconsistentes debido al bug que provocó el error. Nótese que algunos errores siguen siendo siempre fatales, para que el error no sea fatal debe saberse que es deterministico. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Para activar los errores no fatales es necesario establecer el siguiente indicador en el manifiesto del subgrafo: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. 
File data sources support fetching files from IPFS and from Arweave. > Esto también establece las bases para la indexación determinista de datos off-chain, así como la posible introducción de datos arbitrarios procedentes de HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Ejemplo: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file ¡Felicitaciones, estás utilizando fuentes de datos de archivos! -#### Deploy de tus subgrafos +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitaciones -Los handlers y entidades de fuentes de datos de archivos están aislados de otras entidades del subgrafo, asegurando que son deterministas cuando se ejecutan, y asegurando que no se contaminan las fuentes de datos basadas en cadenas. 
En concreto: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Las entidades creadas por File Data Sources son inmutables y no pueden actualizarse - Los handlers de File Data Source no pueden acceder a entidades de otras fuentes de datos de archivos - Los handlers basados en cadenas no pueden acceder a las entidades asociadas a File Data Sources -> Aunque esta restricción no debería ser problemática para la mayoría de los casos de uso, puede introducir complejidad para algunos. Si tienes problemas para modelar tus datos basados en archivos en un subgrafo, ponte en contacto con nosotros a través de Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Además, no es posible crear fuentes de datos a partir de una File Data Source, ya sea una fuente de datos on-chain u otra File Data Source. Es posible que esta restricción se elimine en el futuro. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
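The matching rule itself needs no chain access to illustrate: for each configured topic position, the log's indexed value must appear in that position's list, and an omitted position matches anything. A toy TypeScript model of this behavior (illustrative only — not Graph Node's implementation, and real topics are 32-byte hashes of the indexed values, abbreviated here to readable strings):

```typescript
// Toy model of topic-filter matching semantics; not Graph Node code.
type Log = { topic1: string; topic2: string }
type TopicFilter = { topic1?: string[]; topic2?: string[] }

function matches(log: Log, filter: TopicFilter): boolean {
  // An absent or empty list for a position acts as a wildcard.
  const ok1 = !filter.topic1 || filter.topic1.length === 0 || filter.topic1.includes(log.topic1)
  const ok2 = !filter.topic2 || filter.topic2.length === 0 || filter.topic2.includes(log.topic2)
  return ok1 && ok2
}

// Example 1's filter: only direct transfers from 0xAddressA to 0xAddressB.
const direct: TopicFilter = { topic1: ['0xAddressA'], topic2: ['0xAddressB'] }

console.log(matches({ topic1: '0xAddressA', topic2: '0xAddressB' }, direct)) // true
console.log(matches({ topic1: '0xAddressB', topic2: '0xAddressA' }, direct)) // false: reversed direction
```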
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
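The latency benefit comes down to simple arithmetic: sequential calls cost the sum of their individual latencies, while declared calls executed in parallel cost roughly the maximum. A hedged simulation with timers standing in for real `eth_call` latencies (no chain access involved, and the durations are scaled-down stand-ins):

```typescript
// Simulated latencies only -- no real eth_calls are made. Sequential awaits
// pay the sum of the durations; Promise.all pays roughly the maximum.
const delay = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms))

async function sequential(durationsMs: number[]): Promise<number> {
  const start = Date.now()
  for (const d of durationsMs) await delay(d)
  return Date.now() - start
}

async function parallel(durationsMs: number[]): Promise<number> {
  const start = Date.now()
  await Promise.all(durationsMs.map(delay))
  return Date.now() - start
}

async function main(): Promise<void> {
  // Scaled-down stand-ins for three calls taking 3s, 2s, and 4s.
  const seq = await sequential([30, 20, 40]) // roughly 30 + 20 + 40 = 90ms
  const par = await parallel([30, 20, 40]) // roughly max(30, 20, 40) = 40ms
  console.log(par < seq)
}
main()
```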
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Debido a que el grafting copia en lugar de indexar los datos base, es mucho más rápido llevar el subgrafo al bloque deseado que indexar desde cero, aunque la copia inicial de los datos aún puede llevar varias horas para subgrafos muy grandes. Mientras se inicializa el subgrafo grafted, Graph Node registrará información sobre los tipos de entidad que ya han sido copiados. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al del subgrafo base, sino simplemente compatible con él. 
Tiene que ser un esquema de subgrafo válido por sí mismo, pero puede diferir del esquema del subgrafo base de las siguientes maneras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Agrega o elimina tipos de entidades - Elimina los atributos de los tipos de entidad @@ -560,4 +560,4 @@ El subgrafo grafteado puede utilizar un esquema GraphQL que no es idéntico al d - Agrega o elimina interfaces - Cambia para qué tipos de entidades se implementa una interfaz -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx index 792a6521f82d..520914f913f6 100644 --- a/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Generación de código -Para que trabajar con contratos inteligentes, eventos y entidades sea fácil y seguro desde el punto de vista de los tipos, Graph CLI puede generar tipos AssemblyScript a partir del esquema GraphQL del subgrafo y de las ABIs de los contratos incluidas en las fuentes de datos. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Esto se hace con @@ -80,7 +80,7 @@ Esto se hace con graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx index 67ec89027c6b..4479673b2af3 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versiones -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Notas del lanzamiento | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notas del lanzamiento | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Tipos Incorporados @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creacion de entidades @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ La API de Ethereum proporciona acceso a los contratos inteligentes, a las variab #### Compatibilidad con los tipos de Ethereum -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -El siguiente ejemplo lo ilustra. Dado un esquema de subgrafos como +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Acceso al Estado del Contrato Inteligente -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Un patrón común es acceder al contrato desde el que se origina un evento. Esto se consigue con el siguiente código: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Cualquier otro contrato que forme parte del subgrafo puede ser importado desde el código generado y puede ser vinculado a una dirección válida. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Tratamiento de las Llamadas Revertidas @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array<string>): void` - logs an informational message. - `log.warning(fmt: string, args: Array<string>): void` - logs a warning. - `log.error(fmt: string, args: Array<string>): void` - logs an error message.
-- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### API Cripto @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx index 9b540b6d07d4..6d2a39b9e67b 100644 --- a/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemas comunes de AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx index 5a0e73fd0bbd..d968a59b17ff 100644 --- a/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Instalar The Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Descripción -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Empezando @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Crear un Subgrafo ### Desde un Contrato Existente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
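For orientation, `graph init` scaffolds a project containing the same files these pages describe (`subgraph.yaml`, `schema.graphql`, the mappings, and ABI files). A rough sketch of the layout — exact contents vary by CLI version and template, and the project name is hypothetical:

```
my-subgraph/            # hypothetical project name
├── subgraph.yaml       # the Subgraph manifest
├── schema.graphql      # GraphQL schema for the entities to store
├── src/
│   └── mapping.ts      # AssemblyScript mappings for event handlers
├── abis/               # contract ABI files referenced by the manifest
└── package.json        # preconfigured codegen/build/deploy scripts
```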
### De un Subgrafo de Ejemplo -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Los archivos ABI deben coincidir con tu(s) contrato(s). Hay varias formas de obtener archivos ABI: - Si estás construyendo tu propio proyecto, es probable que tengas acceso a tus ABIs más actuales. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Notas del lanzamiento | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx index 09924401ce11..a8b800ca5635 100644 --- a/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Descripción -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Tipo | Descripción | -| --- | --- | -| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y direcciones de Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Tipo | Descripción | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y direcciones de Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. 
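Conceptually, a `@derivedFrom` field is resolved at query time by filtering the side of the relationship that is actually stored. A plain TypeScript sketch of that behavior (illustrative names, not graph-node internals):

```typescript
// Sketch: the 'many' side stores the relationship; the 'one' side derives it.
interface TokenBalance {
  id: string
  amount: number
  token: string // reference to the owning Token's id (the stored side)
}

const tokenBalances: TokenBalance[] = [
  { id: 'b1', amount: 10, token: 'tokenA' },
  { id: 'b2', amount: 5, token: 'tokenB' },
  { id: 'b3', amount: 7, token: 'tokenA' },
]

// A derived field like `balances: [TokenBalance!]! @derivedFrom(field: "token")`
// behaves like this query-time filter; nothing extra is written to the store.
function derivedBalances(tokenId: string): TokenBalance[] {
  return tokenBalances.filter((b) => b.token === tokenId)
}

console.log(derivedBalances('tokenA').map((b) => b.id))
```

Because only `TokenBalance.token` is stored and `Token.balances` is derived, the derived side costs nothing at write time, which is why this layout indexes and queries faster.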
-En el caso de las relaciones one-to-many, la relación debe almacenarse siempre en el lado "one", y el lado "many" debe derivarse siempre. Almacenar la relación de esta manera, en lugar de almacenar una array de entidades en el lado "many", resultará en un rendimiento dramáticamente mejor tanto para la indexación como para la consulta del subgrafo. En general, debe evitarse, en la medida de lo posible, el almacenamiento de arrays de entidades. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Ejemplo @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Esta forma más elaborada de almacenar las relaciones many-to-many se traducirá en menos datos almacenados para el subgrafo y, por tanto, en un subgrafo que suele ser mucho más rápido de indexar y consultar. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Agregar comentarios al esquema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. 
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Idiomas admitidos @@ -318,7 +318,7 @@ Diccionarios de idiomas admitidos: Algoritmos admitidos para ordenar los resultados: -| Algorithm | Description | -| --- | --- | -| rank | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | -| rango de proximidad | Similar to rank but also includes the proximity of the matches. | +| Algorithm | Description | +| ------------------- | -------------------------------------------------------------------------------------------------- | +| rank | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | +| rango de proximidad | Similar to rank but also includes the proximity of the matches. | diff --git a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx index 76ff7db16bba..669a29583ee8 100644 --- a/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Descripción -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. 
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Notas del lanzamiento | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
|
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx
index c825906fef29..896eccddb6ea 100644
--- a/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/es/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest

## Descripción

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
Las entradas importantes a actualizar para el manifiesto son:

-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases.

-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.

-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.

- `features`: a list of all used [feature](#experimental-features) names.

-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section.

-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. 
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. 
This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.

Los call handlers solo se activarán en uno de estos dos casos: cuando la función especificada sea llamada por una cuenta distinta del propio contrato o cuando esté marcada como externa en Solidity y sea llamada como parte de otra función en el mismo contrato.

-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.

### Definición de un Call Handler

@@ -169,7 +169,7 @@ dataSources:
      abi: Gravity
    mapping:
      kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han

### Función mapeo

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:

```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a

## Handlers de bloques

-Además de suscribirse a eventos del contracto o calls de funciones, un subgrafo puede querer actualizar sus datos a medida que se añaden nuevos bloques a la cadena. Para ello, un subgrafo puede ejecutar una función después de cada bloque o después de los bloques que coincidan con un filtro predefinido.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.

### Filtros admitidos

@@ -218,7 +218,7 @@ filter:

_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.

La ausencia de un filtro para un handler de bloque asegurará que el handler sea llamado en cada bloque. Una fuente de datos solo puede contener un handler de bloque para cada tipo de filtro.
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Función mapeo -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Bloques iniciales -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Notas del lanzamiento | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx index a9ab2a9ef384..7be3dfb08f89 100644 --- a/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/es/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Marco de Unit Testing --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Empezando @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Opciones CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Subgrafo de demostración +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Tutoriales en vídeo -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im Ahí vamos: ¡hemos creado nuestra primera prueba! 
👏

-Ahora, para ejecutar nuestras pruebas, simplemente necesitas ejecutar lo siguiente en la carpeta raíz de tu subgrafo:
+Now, in order to run our tests, you simply need to run the following in your Subgraph root folder:

`graph test Gravity`

@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.

-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

`.test.ts` file:

@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'

-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'

test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
    network: mainnet
    mapping:
      kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/token-lock-wallet.ts
      handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {

## Cobertura de prueba

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.

@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as

## Recursos Adicionales

-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).

## Comentario

diff --git a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
index c206beeb8fb3..a96efc430a61 100644
--- a/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/es/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---

-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph, you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Desplegando el subgráfo en múltiples redes +## Deploying the Subgraph to multiple networks -En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes sin duplicar todo su código. El principal reto que conlleva esto es que las direcciones de los contratos en estas redes son diferentes. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
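The Mustache/Handlebars approach described above boils down to plain placeholder substitution into a manifest template. As a rough, dependency-free sketch of the idea (this is a stand-in for a real Mustache renderer, and the template fragment and config values are hypothetical):

```python
import re

def render(template: str, config: dict) -> str:
    """Minimal Mustache-style substitution: replace each {{key}} with config[key]."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(config[m.group(1)]), template)

# Hypothetical subgraph.template.yaml fragment and per-network config contents.
template = """\
dataSources:
  - network: {{network}}
    source:
      address: '{{address}}'
"""

mainnet = {"network": "mainnet", "address": "0x123..."}
print(render(template, mainnet))
```

In practice you would read the template and the per-network `config/<network>.json` from disk and write the rendered result to `subgraph.yaml` before running `graph build`, which is exactly what the `mustache`-based npm scripts in the section above automate.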
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

-## Política de archivo de subgrafos en Subgraph Studio
+## Subgraph Studio Subgraph archive policy

-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:

- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days

-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.

-Cada subgrafo afectado por esta política tiene una opción para recuperar la versión en cuestión.
+Every Subgraph affected by this policy has an option to bring the version in question back.

-## Comprobando la salud del subgrafo
+## Checking Subgraph health

-Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre.
Sin embargo, los nuevos activadores en la red pueden hacer que tu subgrafo alcance una condición de error no probada o puede comenzar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx index 11e4e4c22495..29eed7358005 100644 --- a/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/es/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
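The indexing-status fields discussed in the preceding section (`chainHeadBlock`, `latestBlock`, `synced`, `health`, `fatalError`) are straightforward to act on programmatically. A minimal sketch of the comparison, using a hypothetical sample response rather than a live index-node endpoint (field shapes follow the status query shown above):

```python
# Hypothetical decoded response from the index-node status endpoint.
status = {
    "synced": True,
    "health": "healthy",
    "fatalError": None,
    "chains": [{"chainHeadBlock": {"number": "19000000"},
                "latestBlock": {"number": "18999900"}}],
}

def blocks_behind(status: dict) -> int:
    """How far the latest indexed block trails the chain head."""
    chain = status["chains"][0]
    return int(chain["chainHeadBlock"]["number"]) - int(chain["latestBlock"]["number"])

if status["health"] == "failed":
    print("indexing halted:", status["fatalError"])
else:
    print("blocks behind:", blocks_behind(status))  # blocks behind: 100
```

A monitoring job could alert when the lag keeps growing across polls, or immediately when `health` flips to `failed`.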
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Crear y gestionar sus claves API para subgrafos específicos +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Compatibilidad de los Subgrafos con The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- No debe utilizar ninguna de las siguientes funciones: - - ipfs.cat & ipfs.map - - Errores no fatales - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Archivado Automático de Versiones de Subgrafos -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/es/subgraphs/developing/developer-faq.mdx b/website/src/pages/es/subgraphs/developing/developer-faq.mdx index 0a3bad37fd09..6bf2d3eb2199 100644 --- a/website/src/pages/es/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/es/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. ¿Qué es un subgrafo? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? ¡Sí es posible! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. 
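The `first`/`skip` pagination pattern from question 22 above can be scripted. A minimal sketch (the collection name and totals are hypothetical; note that Graph Node commonly caps `first` and `skip`, so cursor-style pagination on `id` is often preferred for very large result sets):

```python
def paged_queries(collection: str, total: int, page_size: int = 1000):
    """Yield GraphQL query strings that page through a collection with first/skip."""
    for skip in range(0, total, page_size):
        yield f"{{ {collection}(first: {page_size}, skip: {skip}) {{ id }} }}"

queries = list(paged_queries("someCollection", total=2500))
print(len(queries))  # 3
print(queries[1])    # { someCollection(first: 1000, skip: 1000) { id } }
```

Each yielded string would be posted to the Subgraph's query URL until a page comes back with fewer than `page_size` items.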
## Miscellaneous diff --git a/website/src/pages/es/subgraphs/developing/introduction.mdx b/website/src/pages/es/subgraphs/developing/introduction.mdx index 7d4760cb4c35..facd793fde33 100644 --- a/website/src/pages/es/subgraphs/developing/introduction.mdx +++ b/website/src/pages/es/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. 
It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx index 972a4f552c25..b8c2330ca49d 100644 --- a/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. 
- - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Los Curadores ya no podrán señalar en el subgrafo. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/es/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@ title: Transferring a Subgraph
---

-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.

## Reminders

-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.

## View Your Subgraph as an NFT

-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:

```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address

## Step-by-Step

-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:

1.
Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx index d37d8bf2ed62..67c076d0a156 100644 --- a/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/es/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publicación de un subgrafo en la Red Descentralizada +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). 
-All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Actualización de los metadatos de un subgrafo publicado +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
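The two CLI steps above can be run as the following sketch — it assumes `graph-cli` version 0.73.0 or later is installed and that you are inside an initialized Subgraph project:

```
$ graph codegen   # generate AssemblyScript types from the schema
$ graph build     # compile the mappings to WebAssembly
$ graph publish   # opens a window to connect a wallet, add metadata, and publish
```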
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a Curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/es/subgraphs/developing/subgraphs.mdx b/website/src/pages/es/subgraphs/developing/subgraphs.mdx index f7046bd367c7..97429af0208d 100644 --- a/website/src/pages/es/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/es/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafos ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
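Once indexed, a Subgraph is queried with ordinary GraphQL. As a sketch, a query against a hypothetical token Subgraph might look like this — the `tokens` entity and its fields are illustrative assumptions, since every Subgraph defines its own schema:

```graphql
{
  tokens(first: 5, orderBy: tradeVolume, orderDirection: desc) {
    id
    symbol
    tradeVolume
  }
}
```

The response mirrors the shape of the query, returning only the requested fields for each matching entity.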
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Ciclo de vida de un Subgrafo -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1.
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/es/subgraphs/explorer.mdx b/website/src/pages/es/subgraphs/explorer.mdx index a64b3d4188ae..e7d1980ac05d 100644 --- a/website/src/pages/es/subgraphs/explorer.mdx +++ b/website/src/pages/es/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Descripción -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Señalar/dejar de señalar un subgrafo +- Signal/Un-signal on Subgraphs - Ver más detalles como gráficos, ID de implementación actual y otros metadatos -- Cambiar de versión para explorar iteraciones pasadas del subgrafo -- Consultar subgrafos a través de GraphQL -- Probar subgrafos en el playground -- Ver los Indexadores que están indexando en un subgrafo determinado +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Estadísticas de subgrafo (asignaciones, Curadores, etc.) -- Ver la entidad que publicó el subgrafo +- View the entity that published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de forma productiva. Un exceso de participación delegada no puede utilizarse para asignaciones o cálculos de recompensas. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curadores -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Pestaña de subgrafos -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Pestaña de indexación -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta.
Verás las siguientes métricas: @@ -223,13 +223,13 @@ Con los botones situados al lado derecho de la tabla, puedes administrar tu dele ### Pestaña de curación -En la pestaña Curación, encontrarás todos los subgrafos a los que estás señalando (lo que te permite recibir tarifas de consulta). La señalización permite a los Curadores destacar un subgrafo importante y fiable a los Indexadores, dándoles a entender que debe ser indexado. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they should be indexed. Dentro de esta pestaña, encontrarás una descripción general de: -- Todos los subgrafos que estás curando con detalles de la señalización actual -- Participaciones totales en cada subgrafo -- Recompensas de consulta por cada subgrafo +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Actualizaciones de los subgrafos ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/es/subgraphs/guides/arweave.mdx b/website/src/pages/es/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..71c58f8afabd --- /dev/null +++ b/website/src/pages/es/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: Construyendo Subgrafos en Arweave +--- + +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! + +En esta guía, aprenderás a construir y deployar subgrafos para indexar la blockchain de Arweave. + +## ¿Qué es Arweave?
+ +El protocolo Arweave permite a los developers almacenar datos de forma permanente y esa es la principal diferencia entre Arweave e IPFS, donde IPFS carece de la característica; permanencia, y los archivos almacenados en Arweave no pueden ser modificados o eliminados. + +Arweave ya ha construido numerosas bibliotecas para integrar el protocolo en varios lenguajes de programación. Para más información puede consultar: + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Arweave Resources](https://www.arweave.org/build) + +## ¿Qué son los subgrafos Arweave? + +The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). + +[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not indexing the stored files yet. + +## Construcción de un subgrafo Arweave + +Para poder construir y deployar subgrafos Arweave, necesita dos paquetes: + +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. + +## Componentes del subgrafo + +There are three components of a Subgraph: + +### 1. Manifest - `subgraph.yaml` + +Define las fuentes de datos de interés y cómo deben ser procesadas. Arweave es un nuevo tipo de fuente de datos. + +### 2. Schema - `schema.graphql` + +Aquí defines qué datos quieres poder consultar después de indexar tu Subgrafo usando GraphQL.
Esto es en realidad similar a un modelo para una API, donde el modelo define la estructura de un cuerpo de solicitud. + +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +### 3. AssemblyScript Mappings - `mapping.ts` + +Esta es la lógica que determina cómo los datos deben ser recuperados y almacenados cuando alguien interactúa con las fuentes de datos que estás escuchando. Los datos se traducen y se almacenan basándose en el esquema que has listado. + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## Definición de manifiesto del subgrafo + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional `source.owner` field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently, an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`
+
+> The source.owner can be the owner's address or their public key.
+>
+> Transactions are the building blocks of the Arweave permaweb, and they are objects created by end users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities.
This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token <your-access-token>
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
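For instance, assuming the example schema above defines a `Block` entity with `height` and `timestamp` fields (the entity and field names here are illustrative), the five most recently indexed blocks could be fetched like this:

```graphql
{
  blocks(first: 5, orderBy: height, orderDirection: desc) {
    id
    height
    timestamp
  }
}
```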
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph only indexes Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The source.owner can be the user's public key or account address.
+
+### What is the current encryption format?
+
+Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write (padding only when not URL-safe)
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..d0a60dc8ee83
--- /dev/null
+++ 
b/website/src/pages/es/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+For a list of added chains, run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2. Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+    ├── contract/ # Folder for individual contract files
+    ├── abi.json # Contract ABI
+    └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/es/subgraphs/guides/enums.mdx b/website/src/pages/es/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..8a3da763d6e2
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values.
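As a plain TypeScript analogy (not graph-ts; the `TokenStatus` names here are illustrative), a string enum restricts a field to a fixed set of named values, and a small guard can reject anything else at runtime:

```typescript
// A string enum: only these three named values are valid.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

// Runtime guard for values that arrive as free-form strings.
function isTokenStatus(value: string): value is TokenStatus {
  return (Object.values(TokenStatus) as string[]).includes(value)
}

console.log(isTokenStatus('OriginalOwner')) // true
console.log(isTokenStatus('Orgnalowner')) // false: the typo is caught
```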
+ +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. 
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
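A rough TypeScript sketch of that idea (the `Sale` shape and `recordSale` helper are illustrative, not part of the CryptoCoven Subgraph):

```typescript
// Illustrative only: a sale record tagged with an enum value, so the
// marketplace field can never hold an arbitrary string.
enum Marketplace {
  OpenSeaV1 = 'OpenSeaV1',
  OpenSeaV2 = 'OpenSeaV2',
  SeaPort = 'SeaPort',
  LooksRare = 'LooksRare',
}

interface Sale {
  tokenId: string
  marketplace: Marketplace // only the four values above are assignable
}

function recordSale(tokenId: string, marketplace: Marketplace): Sale {
  return { tokenId, marketplace }
}

const sale = recordSale('523', Marketplace.SeaPort)
console.log(sale.marketplace) // SeaPort
```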
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  } else {
+    return 'Unknown' // Fallback so the function always returns a string
+  }
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+
+```gql
+{
+  marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Result 2
+
+The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "Unknown",
+        "transactionCount": "222"
+      }
+    ]
+  }
+}
+```
+
+#### Query 3: Marketplace Interactions with High Transaction Counts
+
+This query does the following:
+
+- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
+- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.
+
+```gql
+{
+  marketplaceInteractions(
+    first: 4
+    orderBy: transactionCount
+    orderDirection: desc
+    where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
+  ) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Result 3
+
+Expected output includes the marketplaces that meet the criteria, each represented by an enum value:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "NFTX",
+        "transactionCount": "201"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "148"
+      },
+      {
+        "marketplace": "CryptoCoven",
+        "transactionCount": "117"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "111"
+      }
+    ]
+  }
+}
+```
+
+## Additional Resources
+
+For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/es/subgraphs/guides/grafting.mdx b/website/src/pages/es/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..3717e35b3d8a
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
+
+### Best Practices
+
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
+
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+
+By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
+
+## Building an Existing Subgraph
+
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
+
+- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- The `Lock` data source is the ABI and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+  - grafting # feature name
+graft:
+  base: Qm... # Subgraph ID of base Subgraph
+  block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dApp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
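The behavior above can be sketched in a few lines of plain TypeScript (a hypothetical illustration of graft semantics, not Graph Node internals): the grafted dataset is roughly the base Subgraph's entities up to and including the graft block, plus whatever the new deployment indexes after that block.

```typescript
// Illustrative model of a grafted dataset.
interface Withdrawal {
  id: string
  block: number
}

function graftedDataset(base: Withdrawal[], graftBlock: number, fresh: Withdrawal[]): Withdrawal[] {
  const copied = base.filter((w) => w.block <= graftBlock) // history copied from the base Subgraph
  const indexed = fresh.filter((w) => w.block > graftBlock) // newly indexed from the new contract
  return copied.concat(indexed).sort((a, b) => a.block - b.block)
}

const baseEvents = [
  { id: 'event-1', block: 5955700 },
  { id: 'event-2', block: 5955800 },
]
const freshEvents = [{ id: 'event-3', block: 5956100 }]

console.log(graftedDataset(baseEvents, 5956000, freshEvents).map((w) => w.id))
// [ 'event-1', 'event-2', 'event-3' ]
```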
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/es/subgraphs/guides/near.mdx b/website/src/pages/es/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..f22a497db7e1
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
+
+- Block handlers: these run on every new block
+- Receipt handlers: run every time a message is executed at a specified account
+
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+
+> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying Receipts" at some point.
+
+## Building a NEAR Subgraph
+
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
+
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
+
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
+
+There are three aspects of Subgraph definition:
+
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+NEAR data sources support two types of handlers:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Always zero when version < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
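To make the stringified-JSON point concrete, here is a plain JavaScript sketch of the parsing pattern (the log contents are invented; inside a real mapping you would use `json.fromString(...)` from graph-ts rather than `JSON.parse`):

```javascript
// NEAR receipt outcomes carry an array of string logs. Contracts often emit
// structured events as stringified JSON (e.g. the NEP-297 `EVENT_JSON:` style).
const logs = [
  'EVENT_JSON:{"standard":"nep171","version":"1.0.0","event":"nft_mint","data":{"owner_id":"alice.near"}}',
  'plain text log',
]

// Extract and parse only the structured logs, skipping anything that is not JSON.
function parseEventLogs(allLogs) {
  const events = []
  for (const log of allLogs) {
    if (!log.startsWith('EVENT_JSON:')) continue
    try {
      events.push(JSON.parse(log.slice('EVENT_JSON:'.length)))
    } catch (_) {
      // Malformed JSON: skip it rather than crash the handler.
    }
  }
  return events
}

const events = parseEventLogs(logs)
console.log(events[0].event) // "nft_mint"
```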
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy <subgraph-name> --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy <subgraph-name> --node http://localhost:8020/ --ipfs http://localhost:5001
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/es/subgraphs/guides/polymarket.mdx b/website/src/pages/es/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..2edab84a377b
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+ +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your subgraph. + +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). 
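For intuition, the `orderBy: payout, orderDirection: desc, first: 5` clause used in the example query above behaves like a sort-then-slice performed by the Indexer. A rough client-side JavaScript equivalent (with invented payout values; note that `payout` arrives as a decimal string, so it must be compared numerically):

```javascript
// Rough client-side equivalent of `orderBy: payout, orderDirection: desc, first: 5`
// applied to already-fetched redemptions (values invented for the sketch).
const redemptions = [
  { id: 'a', payout: '200' },
  { id: 'b', payout: '6274509531681' },
  { id: 'c', payout: '1917395333835' },
]

// `payout` is a decimal string (a GraphQL BigInt), so compare via BigInt,
// not lexicographically.
const top5 = [...redemptions]
  .sort((x, y) => (BigInt(y.payout) > BigInt(x.payout) ? 1 : -1))
  .slice(0, 5)

console.log(top5.map((r) => r.id)) // [ 'b', 'c', 'a' ]
```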
+
+### Polymarket Subgraph Endpoint
+
+https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
+
+The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+
+![Polymarket Endpoint](/img/Polymarket-endpoint.png)
+
+## How to Get your own API Key
+
+1. Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example queries the first five Polymarket positions from the endpoint above.
+
+### Sample Code from Node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your subgraph for better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..772a00ad317d
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api-key>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )

+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <div>
+      <ServerComponent />
+    </div>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/cookbook/upgrading-a-subgraph/#securing-your-api-key) to increase your API key security even further.
diff --git a/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..3f5fc5e44cca
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time.
+- Aggregated entities can be used in composition, but entities composed on top of them cannot also use aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e. you can’t do normal event handlers, call handlers, and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance.
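For orientation, a composed Subgraph points at its source Subgraphs from the manifest with a `kind: subgraph` data source whose handlers are triggered by the source's entities. This is a rough sketch only; every name, deployment ID, and version number below is illustrative, so check the graph-node release notes linked above for the exact format:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # the data source kind used for composition
    name: BlockTime
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # deployment ID of a source Subgraph
      startBlock: 1
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handlers:
        - handler: handleBlock # runs when the source Subgraph stores a Block entity
          entity: Block
```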
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..e979d752b4b7
--- /dev/null
+++ b/website/src/pages/es/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+
+In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+
+## What?! How?
+
+When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great!
This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no errors whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source that you believe will solve the issue (while I know it won't).
+2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to 1, otherwise: Hooray!
+
+It is indeed quite familiar as an ordinary debug process, but there is one step that horribly slows the process down: _3. Wait for it to sync-up._
+
+Using **subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source that you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).

+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. 
Después de una inspección cuidadosa, noté que hay una falta de coincidencia en las representaciones de `id` utilizadas al indexar `Gravatar` en mis dos handlers. Mientras que `handleNewGravatar` lo convierte en hexadecimal (`event.params.id.toHex()`), `handleUpdatedGravatar` usa un int32 (`event.params.id.toI32()`) que hace que `handleUpdatedGravatar` entre en pánico con "¡Gravatar no encontrado!". Hago que ambos conviertan el `id` en un hexadecimal. +3. After making the changes, I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, hooray, everything seems to be working. +5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..59b33568a1f2 --- /dev/null +++ b/website/src/pages/es/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Generador de código de subgrafo seguro +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. + +## ¿Por qué integrarse con Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**.
Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. + +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. + +**Key Features** + +- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users' specification. + +- El marco también incluye una forma (a través del archivo de configuración) para crear funciones de establecimiento personalizadas, pero seguras, para grupos de variables de entidad. De esta forma, es imposible que el usuario cargue/utilice una entidad gráfica obsoleta y también es imposible olvidarse de guardar o configurar una variable requerida por la función. + +- Warning logs are recorded, indicating where there is a breach of subgraph logic, to help patch the issue and ensure data accuracy. + +Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el comando codegen de Graph CLI. + +```sh +graph codegen -u [options] [] +``` + +Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs.
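The core pattern behind these generated helpers is load-or-create with safe defaults. A hedged Python analogy (the actual generated code is AssemblyScript, and the real helper names differ):

```python
# Hypothetical load-or-create helper: a load can never return an
# undefined entity, so a handler cannot crash on a missed lookup.
store: dict = {}

DEFAULTS = {"displayName": "", "imageUrl": ""}  # configurable sane defaults

def safe_load_gravatar(entity_id: str) -> dict:
    entity = store.get(entity_id)
    if entity is None:
        # Instead of crashing, create the entity with default values
        # (the real tool would also emit a warning log here).
        entity = {"id": entity_id, **DEFAULTS}
        store[entity_id] = entity
    return entity

g = safe_load_gravatar("0x2a")
assert g["displayName"] == ""  # usable entity even though it was never created
```

The same idea extends to grouped setters: a helper that sets all required fields at once makes it impossible to forget one.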
diff --git a/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx new file mode 100644 index 000000000000..ec6a7079ee75 --- /dev/null +++ b/website/src/pages/es/subgraphs/guides/transfer-to-the-graph.mdx @@ -0,0 +1,104 @@ +--- +title: Transferir a The Graph +--- + +Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). + +## Benefits of Switching to The Graph + +- Use the same subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
+ +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. + +In The Graph CLI, run the following command: + +```sh +graph deploy --ipfs-hash + +``` + +> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). + +## 3. Publish Your Subgraph to The Graph Network + +![publish button](/img/publish-sub-transfer.png) + +### Query Your Subgraph + +> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. + +You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. + +#### Ejemplo + +[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: + +![Query URL](/img/cryptopunks-screenshot-transfer.png) + +The query URL for this subgraph is: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK +``` + +Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
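The endpoint is simply a template with your API key and the subgraph id spliced in; a minimal sketch (the `_meta` query here just asks which block the subgraph is indexed to):

```python
import json

API_KEY = "your-own-api-key"  # created in Subgraph Studio
SUBGRAPH_ID = "HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK"

url = (
    "https://gateway-arbitrum.network.thegraph.com/api/"
    f"{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"
)
# POST this JSON body to the url with Content-Type: application/json.
body = json.dumps({"query": "{ _meta { block { number } } }"})

assert API_KEY in url and SUBGRAPH_ID in url
```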
+ +### Getting your own API Key + +You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: + +![API keys](/img/Api-keys-screenshot.png) + +### Monitor Subgraph Status + +Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). + +### Recursos Adicionales + +- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your subgraph for better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/es/subgraphs/querying/best-practices.mdx b/website/src/pages/es/subgraphs/querying/best-practices.mdx index c3340b65f4b2..eb9567990435 100644 --- a/website/src/pages/es/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/es/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Mejores Prácticas para Consultas The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Resultado completamente tipificado @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/es/subgraphs/querying/from-an-application.mdx b/website/src/pages/es/subgraphs/querying/from-an-application.mdx index b36ffabaa3e6..df6f5f381dda 100644 --- a/website/src/pages/es/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/es/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Consultar desde una Aplicación +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
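The plural-entities tip above amounts to building one batched query instead of N single-record queries; a sketch (the `tokens` entity name is illustrative):

```python
# Build one query for many records via an id_in filter, rather than
# issuing one request per id.
ids = ["0x1", "0x2", "0x3"]
id_list = ", ".join(f'"{i}"' for i in ids)
batched = f"{{ tokens(where: {{ id_in: [{id_list}] }}) {{ id }} }}"

assert batched == '{ tokens(where: { id_in: ["0x1", "0x2", "0x3"] }) { id } }'
```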
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Resultado completamente tipificado @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Paso 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Paso 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Paso 1 diff --git a/website/src/pages/es/subgraphs/querying/graph-client/README.md b/website/src/pages/es/subgraphs/querying/graph-client/README.md index 416cadc13c6f..b6e6726bbed6 100644 --- a/website/src/pages/es/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/es/subgraphs/querying/graph-client/README.md @@ -14,25 +14,25 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
-| Status | Feature | Notes | +| Estado | Feature | Notes | | :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | 
[Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Empezando You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Ejemplos You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/es/subgraphs/querying/graph-client/live.md b/website/src/pages/es/subgraphs/querying/graph-client/live.md index e6f726cb4352..4ccf6ee7eda1 100644 --- a/website/src/pages/es/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/es/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Empezando Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/es/subgraphs/querying/graphql-api.mdx b/website/src/pages/es/subgraphs/querying/graphql-api.mdx index 018abd046e72..726d7e84884d 100644 --- a/website/src/pages/es/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/es/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -Esto puede ser útil si buscas obtener solo las entidades que han cambiado, por ejemplo, desde la última vez que realizaste una encuesta. O, alternativamente, puede ser útil para investigar o depurar cómo cambian las entidades en tu subgrafo (si se combina con un filtro de bloque, puedes aislar solo las entidades que cambiaron en un bloque específico). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Consultas de Búsqueda de Texto Completo -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Operadores de búsqueda de texto completo: -| Símbolo | Operador | Descripción | -| --- | --- | --- | -| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | -| | | `Or` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | -| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | -| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | +| Símbolo | Operador | Descripción | +| ------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | +| `\|` | `Or` | Las consultas con varios términos de búsqueda separados por el operador or devolverán todas las entidades que coincidan con cualquiera de los términos proporcionados | +| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | +| `:*` | `Prefix` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres). | #### Ejemplos @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e.
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadatos del subgrafo -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Si se proporciona un bloque, los metadatos corresponden a ese bloque; de lo contrario, se utiliza el bloque indexado más reciente. Si es proporcionado, el bloque debe ser posterior al bloque de inicio del subgrafo y menor o igual que el bloque indexado más reciente. +If a block is provided, the metadata is as of that block; if not, the latest indexed block is used.
If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ Si se proporciona un bloque, los metadatos corresponden a ese bloque; de lo cont - hash: el hash del bloque - número: el número de bloque -- timestamp: la marca de tiempo del bloque, en caso de estar disponible (actualmente solo está disponible para subgrafos que indexan redes EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/es/subgraphs/querying/introduction.mdx b/website/src/pages/es/subgraphs/querying/introduction.mdx index ae3afee41ded..40935a799eed 100644 --- a/website/src/pages/es/subgraphs/querying/introduction.mdx +++ b/website/src/pages/es/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Descripción -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. 
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx index cdbad6cb7c81..50c2fbab7883 100644 --- a/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/es/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Administración de tus claves API +title: Managing API keys --- ## Descripción -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. 
### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Cantidad de GRT gastado 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Ver y administrar los nombres de dominio autorizados a utilizar tu clave API - - Asignar subgrafos que puedan ser consultados con tu clave API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/es/subgraphs/querying/python.mdx b/website/src/pages/es/subgraphs/querying/python.mdx index d51fd5deb007..4f2ad9280b58 100644 --- a/website/src/pages/es/subgraphs/querying/python.mdx +++ b/website/src/pages/es/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! 
Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/es/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. 
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though the Deployment ID is generally recommended because it pins a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried.
However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, you are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more robust setup, as you have full control over the Subgraph version being queried. However, it means the query code must be updated manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
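For illustration, the two query-URL forms can be sketched in Python. The API key is a placeholder, and the `/deployments/id/` path segment is an assumption about the gateway's URL convention (the Deployment ID example endpoint itself is elided by the hunk above); the IDs are the ones quoted in this page:

```python
# Sketch: building gateway query URLs from either identifier.
# The API key is a placeholder; the /deployments/id/ path is assumed.
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def url_for_subgraph_id(api_key: str, subgraph_id: str) -> str:
    # Subgraph ID: stable across versions; the gateway resolves it to
    # the latest published version of the Subgraph.
    return f"{GATEWAY}/{api_key}/subgraphs/id/{subgraph_id}"

def url_for_deployment_id(api_key: str, deployment_id: str) -> str:
    # Deployment ID: the IPFS hash of the compiled manifest,
    # pinning one immutable version of the Subgraph.
    return f"{GATEWAY}/{api_key}/deployments/id/{deployment_id}"

print(url_for_subgraph_id("my-api-key", "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"))
print(url_for_deployment_id("my-api-key", "QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED"))
```

Both forms accept the same GraphQL queries; only the version-resolution behavior differs.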
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/es/subgraphs/quick-start.mdx b/website/src/pages/es/subgraphs/quick-start.mdx index 4ccb601e3948..57d13e479ba2 100644 --- a/website/src/pages/es/subgraphs/quick-start.mdx +++ b/website/src/pages/es/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Comienzo Rapido --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Instala the graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). 
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. 
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -Ve la siguiente captura para un ejemplo de que debes de esperar cuando inicializes tu subgrafo: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Una vez escrito tu subgrafo, ejecuta los siguientes comandos: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. 
Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. 
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). 
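As a minimal sketch of such a request: a GraphQL query is just an HTTP POST with a JSON body. The query URL below is a placeholder for the one shown by the Query button, and `_meta` is chosen because Graph Node serves it for every Subgraph regardless of schema:

```python
import json
from urllib.request import Request, urlopen

# Placeholder: substitute the Query URL shown for your Subgraph in Subgraph Studio.
QUERY_URL = "https://api.studio.thegraph.com/query/<id>/<subgraph-name>/version/latest"

def build_request(url: str, query: str) -> Request:
    # A GraphQL query is POSTed as JSON under the "query" key.
    body = json.dumps({"query": query}).encode("utf-8")
    return Request(url, data=body, headers={"Content-Type": "application/json"})

# `_meta` reports the latest indexed block, so it works on any Subgraph schema.
req = build_request(QUERY_URL, "{ _meta { block { number } } }")

# Uncomment once QUERY_URL is a real endpoint:
# print(json.load(urlopen(req))["data"]["_meta"]["block"]["number"])
```

The same request shape works with any HTTP client or GraphQL library; only the URL and the query string change.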
diff --git a/website/src/pages/es/substreams/developing/dev-container.mdx b/website/src/pages/es/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/es/substreams/developing/dev-container.mdx +++ b/website/src/pages/es/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. 
For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/es/substreams/developing/sinks.mdx b/website/src/pages/es/substreams/developing/sinks.mdx index 3900895e2871..3f4dfb2ed995 100644 --- a/website/src/pages/es/substreams/developing/sinks.mdx +++ b/website/src/pages/es/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,14 +8,14 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks > Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. - [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. +- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. - [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. - [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. - [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks.
@@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Nombre | Soporte | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | 
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Nombre | Soporte | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/es/substreams/developing/solana/account-changes.mdx b/website/src/pages/es/substreams/developing/solana/account-changes.mdx index b7fd1cc260b2..87f3d384f9e2 100644 --- a/website/src/pages/es/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/es/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of 
setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`.
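A consumer of this stream can fold updates into a latest-state view per account, honoring the `deleted == True` tombstones described above. This is a sketch only; the `account` and `deleted` field names are assumptions drawn from the description, not the exact Protobuf schema:

```python
# Sketch: collapse a sequence of account-update payloads into the latest
# state per account, mirroring the per-block guarantee described above.
# Field names ("account", "deleted") are illustrative assumptions.
def latest_per_account(updates):
    state = {}
    for u in updates:
        if u.get("deleted"):
            state.pop(u["account"], None)  # tombstone: drop the account from the view
        else:
            state[u["account"]] = u        # later updates overwrite earlier ones
    return state
```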
diff --git a/website/src/pages/es/substreams/developing/solana/transactions.mdx b/website/src/pages/es/substreams/developing/solana/transactions.mdx index 17c285b7f53c..9ec56a15e187 100644 --- a/website/src/pages/es/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/es/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgrafo 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/es/substreams/introduction.mdx b/website/src/pages/es/substreams/introduction.mdx index 1b9de410b165..a952aeeb594b 100644 --- a/website/src/pages/es/substreams/introduction.mdx +++ b/website/src/pages/es/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/es/substreams/publishing.mdx b/website/src/pages/es/substreams/publishing.mdx index 169f12bff0ef..5787b254df98 100644 --- a/website/src/pages/es/substreams/publishing.mdx +++ b/website/src/pages/es/substreams/publishing.mdx @@ -1,6 +1,6 @@ --- title: Publishing a Substreams Package -sidebarTitle: Publishing +sidebarTitle: Publicando --- Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/es/substreams/quick-start.mdx b/website/src/pages/es/substreams/quick-start.mdx index 897daa5fe502..4e6ec88c0c0e 100644 --- a/website/src/pages/es/substreams/quick-start.mdx +++ b/website/src/pages/es/substreams/quick-start.mdx @@ -1,5 +1,5 @@ --- -title: Substreams Quick Start +title: Introducción rápida a Substreams sidebarTitle: Comienzo Rapido --- diff --git a/website/src/pages/es/supported-networks.json b/website/src/pages/es/supported-networks.json index 2a2714cf9f30..b7cb2a985577 100644 --- a/website/src/pages/es/supported-networks.json +++ b/website/src/pages/es/supported-networks.json @@ -1,5 +1,5 @@ { - "name": "Name", + "name": "Nombre", "id": "ID", "subgraphs": "Subgrafos", "substreams": "Corrientes secundarias", diff --git a/website/src/pages/es/supported-networks.mdx b/website/src/pages/es/supported-networks.mdx index bef1d1cb3aa1..93a003ce8005 100644 --- a/website/src/pages/es/supported-networks.mdx +++ b/website/src/pages/es/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Redes Admitidas hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. 
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/es/token-api/_meta-titles.json b/website/src/pages/es/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/es/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/es/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. diff --git a/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/es/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. 
diff --git a/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/es/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/es/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/es/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/es/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/es/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. 
diff --git a/website/src/pages/es/token-api/faq.mdx b/website/src/pages/es/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/es/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? 
+ +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). 
The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. 
+ +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. 
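The address-format rules above can be enforced client-side before a request is sent, which turns a 4xx round trip into an immediate local error. A minimal sketch follows; the helper name is ours, not part of the API.

```javascript
// Normalize an EVM address to a lowercase 0x-prefixed form.
// Accepts input with or without the 0x prefix, in any letter case,
// and rejects anything that is not exactly 40 hex digits (20 bytes).
function normalizeAddress(input) {
  const hex = input.startsWith('0x') ? input.slice(2) : input
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) {
    throw new Error('address must be exactly 40 hex digits, with or without 0x')
  }
  return '0x' + hex.toLowerCase()
}

console.log(normalizeAddress('2A0C0DBECC7E4D658F48E01E3FA353F44050C208'))
// → 0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208
```

Since the endpoints are case-insensitive, normalization is optional, but it keeps cache keys and logs consistent.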
diff --git a/website/src/pages/es/token-api/mcp/claude.mdx b/website/src/pages/es/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..ae0f71aa800b --- /dev/null +++ b/website/src/pages/es/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuración + +Create or edit your `claude_desktop_config.json` file. + +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. 
+ +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/es/token-api/mcp/cline.mdx b/website/src/pages/es/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..085427f14744 --- /dev/null +++ b/website/src/pages/es/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuración + +Create or edit your `cline_mcp_settings.json` file. 
+ +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/es/token-api/mcp/cursor.mdx b/website/src/pages/es/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..70e68aaf8d33 --- /dev/null +++ b/website/src/pages/es/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. 
+ +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuración + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. 
diff --git a/website/src/pages/es/token-api/monitoring/get-health.mdx b/website/src/pages/es/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/es/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/es/token-api/monitoring/get-networks.mdx b/website/src/pages/es/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/es/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/es/token-api/monitoring/get-version.mdx b/website/src/pages/es/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/es/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/es/token-api/quick-start.mdx b/website/src/pages/es/token-api/quick-start.mdx new file mode 100644 index 000000000000..8488268e1356 --- /dev/null +++ b/website/src/pages/es/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Comienzo Rapido +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/fr/about.mdx b/website/src/pages/fr/about.mdx index 0740a57e71c5..1cce1a4218ea 100644 --- a/website/src/pages/fr/about.mdx +++ b/website/src/pages/fr/about.mdx @@ -30,25 +30,25 @@ Les spécificités de la blockchain, comme la finalité des transactions, les r ## The Graph apporte une solution -The Graph répond à ce défi grâce à un protocole décentralisé qui indexe les données de la blockchain et permet de les interroger de manière efficace et performantes. Ces API (appelées "subgraphs" indexés) peuvent ensuite être interrogées via une API standard GraphQL. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Aujourd'hui, il existe un protocole décentralisé soutenu par l'implémentation open source de [Graph Node](https://github.com/graphprotocol/graph-node) qui permet ce processus. ### Comment fonctionne The Graph⁠ -Indexer les données de la blockchain est une tâche complexe, mais The Graph la simplifie. 
Il apprend à indexer les données d'Ethereum en utilisant des subgraphs. Les subgraphs sont des API personnalisées construites sur les données de la blockchain qui extraient, traitent et stockent ces données pour qu'elles puissent être interrogées facilement via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Spécificités⁠ -- The Graph utilise des descriptions de subgraph, qui sont connues sous le nom de "manifeste de subgraph" à l'intérieur du subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- Ce manifeste définit les contrats intelligents intéressants pour un subgraph, les événements spécifiques à surveiller au sein de ces contrats, et la manière de mapper les données de ces événements aux données que The Graph stockera dans sa base de données. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- Lors de la création d'un subgraph, vous devez rédiger ce manifeste. +- When creating a Subgraph, you need to write a Subgraph manifest. -- Une fois le `manifeste du subgraph` écrit, vous pouvez utiliser l'outil en ligne de commande Graph CLI pour stocker la définition en IPFS et demander à un Indexeur de commencer à indexer les données pour ce subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -Le schéma ci-dessous illustre plus en détail le flux de données après le déploiement d'un manifeste de subgraph avec des transactions Ethereum. 
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![Un graphique expliquant comment The Graph utilise Graph Node pour répondre aux requêtes des consommateurs de données](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ La description des étapes du flux : 1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. 2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. -3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. -4. The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. 
## Les Étapes suivantes -Les sections suivantes proposent une exploration plus approfondie des subgraphs, de leur déploiement et de la manière d'interroger les données. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Avant de créer votre propre subgraph, il est conseillé de visiter Graph Explorer et d'examiner certains des subgraphs déjà déployés. Chaque page de subgraph comprend un playground (un espace de test) GraphQL, vous permettant d'interroger ses données. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx index b2f6d7382c61..3aeb3de89d39 100644 --- a/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/fr/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ Grâce à la mise à l'échelle de The Graph sur la L2, les participants du rés - La sécurité héritée d'Ethereum -La mise à l'échelle des contrats intelligents du protocole sur la L2 permet aux participants du réseau d'interagir plus fréquemment pour un coût réduit en termes de frais de gaz. Par exemple, les Indexeurs peuvent ouvrir et fermer des allocations plus fréquemment pour indexer un plus grand nombre de subgraphs. Les développeurs peuvent déployer et mettre à jour des subgraphs plus facilement, et les Déléguateurs peuvent déléguer des GRT plus fréquemment. Les Curateurs peuvent ajouter ou supprimer des signaux dans un plus grand nombre de subgraphs - des actions auparavant considérées comme trop coûteuses pour être effectuées fréquemment en raison des frais de gaz. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. 
For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. La communauté Graph a décidé d'avancer avec Arbitrum l'année dernière après le résultat de la discussion [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Pour tirer parti de l'utilisation de The Graph sur L2, utilisez ce sélecteur d [Sélecteur déroulant pour activer Arbitrum](/img/arbitrum-screenshot-toggle.png) -## En tant que développeur de subgraphs, consommateur de données, indexeur, curateur ou délégateur, que dois-je faire maintenant ? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Tous les contrats intelligents ont été soigneusement [vérifiés](https://gith Tout a été testé minutieusement et un plan d’urgence est en place pour assurer une transition sûre et fluide. Les détails peuvent être trouvés [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and- considérations de sécurité-20). -## Les subgraphs existants sur Ethereum fonctionnent  t-ils? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. 
## GRT a-t-il un nouveau contrat intelligent déployé sur Arbitrum ? diff --git a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx index d4edd391bed6..b445b410ec55 100644 --- a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ Une exception concerne les portefeuilles de smart contracts comme les multisigs Les outils de transfert L2 utilisent le mécanisme natif d’Arbitrum pour envoyer des messages de L1 à L2. Ce mécanisme s’appelle un « billet modifiable » et est utilisé par tous les ponts de jetons natifs, y compris le pont GRT Arbitrum. Vous pouvez en savoir plus sur les billets retryables dans le [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Lorsque vous transférez vos actifs (subgraph, enjeu, délégation ou curation) vers L2, un message est envoyé par le pont GRT Arbitrum qui crée un ticket modifiable en L2. L’outil de transfert inclut une certaine valeur ETH dans la transaction, qui est utilisée pour 1) payer la création du ticket et 2) payer pour le gaz utile à l'exécution du ticket en L2. Cependant, comme le prix du gaz peut varier durant le temps nécessaire à l'exécution du ticket en L2, il est possible que cette tentative d’exécution automatique échoue. Lorsque cela se produit, le pont Arbitrum maintient le billet remboursable en vie pendant 7 jours, et tout le monde peut réessayer de « racheter » le billet (ce qui nécessite un portefeuille avec des ETH liés à Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. 
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -C'est ce que nous appelons l'étape « Confirmer » dans tous les outils de transfert : elle s'exécute automatiquement dans la plupart des cas et l'exécution automatique réussit le plus souvent. Il est tout de même important de vérifier que le transfert se soit bien déroulé. Si cela échoue et qu'aucune autre tentative n'est confirmé dans les 7 jours, le pont Arbitrum rejettera le ticket et vos actifs (subgraph, participation, délégation ou curation) ne pourront pas être récupérés. Les développeurs principaux de Graph ont mis en place un système de surveillance pour détecter ces situations et essayer d'échanger les billets avant qu'il ne soit trop tard, mais il en reste de votre responsabilité de vous assurer que votre transfert est terminé à temps. Si vous rencontrez des difficultés pour confirmer votre transaction, veuillez nous contacter en utilisant [ce formulaire](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) et les développeurs seront là pour vous aider. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. 
The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.

### J'ai commencé le transfert de ma délégation/enjeu/curation et je ne suis pas sûr qu'il soit parvenu jusqu'à L2, comment puis-je confirmer qu'il a été transféré correctement ?

@@ -36,43 +36,43 @@ Si vous disposez du hachage de transaction L1 (que vous pouvez trouver en consul

## Subgraph transfert

-### Comment transférer mon subgraph ?
+### How do I transfer my Subgraph?

-Pour transférer votre subgraph, suivez les étapes qui suivent :
+To transfer your Subgraph, you will need to complete the following steps:

1. Initier le transfert sur le mainnet Ethereum

2. Attendre 20 minutes pour une confirmation

-3. Vérifier le transfert de subgraph sur Arbitrum\*
+3. Confirm Subgraph transfer on Arbitrum\*

-4. Terminer la publication du sous-graphe sur Arbitrum
+4. Finish publishing Subgraph on Arbitrum

5. Mettre à jour l’URL de requête (recommandé)

-\*Notez que vous devez confirmer le transfert dans un délai de 7 jours, faute de quoi votre subgraph pourrait être perdu. Dans la plupart des cas, cette étape s'exécutera automatiquement, mais une confirmation manuelle peut être nécessaire en cas de hausse du prix du gaz sur Arbitrum. En cas de problème au cours de ce processus, des ressources seront disponibles pour vous aider : contactez le service d'assistance à l'adresse support@thegraph.com ou sur [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost.
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).

### D’où dois-je initier mon transfert ?

-Vous pouvez effectuer votre transfert à partir de la [Subgraph Studio] (https://thegraph.com/studio/), [Explorer,] (https://thegraph.com/explorer) ou de n’importe quelle page de détails de subgraph. Cliquez sur le bouton "Transférer le subgraph" dans la page de détails du subgraph pour démarrer le transfert.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer.

-### Combien de temps dois-je attendre avant que mon subgraph soit transféré ?
+### How long do I need to wait until my Subgraph is transferred?
For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please make sure you have Arbitrum One selected in the network switcher at the top of the page so that you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.

-### Mon subgraph doit-il être publié afin d'être transférer ?
+### Does my Subgraph need to be published to transfer it?

-Pour profiter de l’outil de transfert de subgraph, votre subgraph doit déjà être publié sur Ethereum mainnet et doit avoir un signal de curation appartenant au portefeuille qui possède le subgraph. Si votre subgraph n’est pas publié, il est recommandé de publier simplement directement sur Arbitrum One - les frais de gaz associés seront considérablement moins élevés. Si vous souhaitez transférer un subgraph publié mais que le compte propriétaire n’a pas sélectionné de signal, vous pouvez signaler un petit montant (par ex. 1 GRT) à partir de ce compte; assurez-vous de choisir le signal de “migration automatique”.
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.

-### Que se passe-t-il pour la version Ethereum mainnet de mon subgraph après que j'ai transféré sur Arbitrum ?
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?

-Après avoir transféré votre subgraph vers Arbitrum, la version du réseau principal Ethereum deviendra obsolète.
Nous vous recommandons de mettre à jour votre URL de requête dans les 48 heures. Cependant, il existe une période de grâce qui maintient le fonctionnement de votre URL mainnet afin que tout support dapp tiers puisse être mis à jour. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Après le transfert, dois-je également republier sur Arbitrum ? @@ -80,21 +80,21 @@ Après la fenêtre de transfert de 20 minutes, vous devrez confirmer le transfer ### Mon point de terminaison subira-t-il un temps d'arrêt lors de la republication ? -Il est peu probable, mais possible, de subir un bref temps d'arrêt selon les indexeurs qui prennent en charge le subgraph sur L1 et s'ils continuent à l'indexer jusqu'à ce que le subgraph soit entièrement pris en charge sur L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### La publication et la gestion des versions sont-elles les mêmes sur L2 que sur le mainnet Ethereum Ethereum ? -Oui. Sélectionnez Arbitrum One comme réseau publié lors de la publication dans le Subgraph Studio. Dans le Studio, le dernier point de terminaison sera disponible et vous dirigera vers la dernière version mise à jour du subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### La curation de mon subgraph sera-t-elle déplacée avec mon subgraph? +### Will my Subgraph's curation move with my Subgraph? 
-Si vous avez choisi le signal de migration automatique, 100% de votre propre curation se déplacera avec votre subgraph vers Arbitrum One. Tout le signal de curation du subgraph sera converti en GTR au moment du transfert, et le GRT correspondant à votre signal de curation sera utilisé pour frapper le signal sur le subgraph L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -D’autres conservateurs peuvent choisir de retirer leur fraction de GRT ou de la transférer à L2 pour créer un signal neuf sur le même subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Puis-je déplacer mon subgraph vers le mainnet Ethereum après le transfert? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Une fois transféré, votre version mainnet Ethereum de ce subgraph deviendra obsolète. Si vous souhaitez revenir au mainnet, vous devrez redéployer et publier à nouveau sur le mainnet. Cependant, le transfert vers le mainnet Ethereumt est fortement déconseillé car les récompenses d’indexation seront distribuées entièrement sur Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Pourquoi ai-je besoin d’un pont ETH pour finaliser mon transfert ? @@ -206,19 +206,19 @@ Pour transférer votre curation, vous devrez compléter les étapes suivantes : \*Si nécessaire, c'est-à-dire que vous utilisez une adresse contractuelle. 
-### Comment saurai-je si le subgraph que j'ai organisé a été déplacé vers L2 ? +### How will I know if the Subgraph I curated has moved to L2? -Lors de la visualisation de la page de détails du subgraph, une bannière vous informera que ce subgraph a été transféré. Vous pouvez suivre l'invite pour transférer votre curation. Vous pouvez également trouver ces informations sur la page de détails du subgraph de tout subgraph déplacé. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Que se passe-t-il si je ne souhaite pas déplacer ma curation en L2 ? -Lorsqu’un subgraph est déprécié, vous avez la possibilité de retirer votre signal. De même, si un subgraph est passé à L2, vous pouvez choisir de retirer votre signal dans Ethereum mainnet ou d’envoyer le signal à L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Comment puis-je savoir si ma curation a été transférée avec succès? Les détails du signal seront accessibles via Explorer environ 20 minutes après le lancement de l'outil de transfert L2. -### Puis-je transférer ma curation sur plus d’un subgraph à la fois? +### Can I transfer my curation on more than one Subgraph at a time? Il n’existe actuellement aucune option de transfert groupé. @@ -266,7 +266,7 @@ Il faudra environ 20 minutes à l'outil de transfert L2 pour achever le transfer ### Dois-je indexer sur Arbitrum avant de transférer ma mise ? 
-Vous pouvez effectivement transférer votre mise d’abord avant de mettre en place l’indexation, mais vous ne serez pas en mesure de réclamer des récompenses sur L2 jusqu’à ce que vous allouez à des sous-graphes sur L2, les indexer, et présenter des points d’intérêt. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Les délégués peuvent-ils déplacer leur délégation avant que je ne déplace ma participation à l'indexation ? @@ -339,11 +339,13 @@ Si vous n’avez transféré aucun solde de contrat de vesting à L2 et que votr ### J’utilise mon contrat de vesting pour investir dans mainnet. Puis-je transférer ma participation à Arbitrum? -Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat d’acquisition L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat d’acquisition dans Explorer. Si votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. +Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat d’acquisition L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat d’acquisition dans Explorer. Si +votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. ### J’utilise mon contrat de vesting pour déléguer sur mainnet. Puis-je transférer mes délégations à Arbitrum? 
-Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat de vesting L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat de vesting dans Explorer. Si votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. +Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat de vesting L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat de vesting dans Explorer. Si +votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. ### Puis-je spécifier un bénéficiaire différent pour mon contrat de vesting sur L2? diff --git a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx index 6d59607442b4..d6014f6d5dac 100644 --- a/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/fr/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph a facilité le passage à L2 sur Arbitrum One. Pour chaque participant Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## Comment transférer votre subgraph vers Arbitrum (L2)
+## How to transfer your Subgraph to Arbitrum (L2)

-## Avantages du transfert de vos subgraphs
+## Benefits of transferring your Subgraphs

La communauté et les développeurs du Graph [se sont préparés](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) à passer à Arbitrum au cours de l'année écoulée. Arbitrum, une blockchain de couche 2 ou "L2", hérite de la sécurité d'Ethereum mais offre des frais de gaz considérablement réduits.

-Lorsque vous publiez ou mettez à niveau votre subgraph sur The Graph Network, vous interagissez avec des contrats intelligents sur le protocole, ce qui nécessite de payer le gaz avec ETH. En déplaçant vos subgraphs vers Arbitrum, toute mise à jour future de votre subgraph nécessitera des frais de gaz bien inférieurs. Les frais inférieurs et le fait que les courbes de liaison de curation sur L2 soient plates facilitent également la curation pour les autres conservateurs sur votre subgraph, augmentant ainsi les récompenses des indexeurs sur votre subgraph. Cet environnement moins coûteux rend également moins cher pour les indexeurs l'indexation et la diffusion de votre subgraph. Les récompenses d'indexation augmenteront sur Arbitrum et diminueront sur le réseau principal Ethereum au cours des prochains mois, de sorte que de plus en plus d'indexeurs transféreront leur participation et établiront leurs opérations sur L2.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph.
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête +## Understanding what happens with signal, your L1 Subgraph and query URLs -Le transfert d'un subgraph vers Arbitrum utilise le pont GRT sur Arbitrum, qui à son tour utilise le pont natif d'Arbitrum, pour envoyer le subgraph vers L2. Le 'transfert' va déprécier le subgraph sur le mainnet et envoyer les informations pour recréer le subgraph sur L2 en utilisant le pont. Il inclura également les GRT signalés par le propriétaire du subgraph, qui doivent être supérieurs à zéro pour que le pont accepte le transfert. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Lorsque vous choisissez de transférer le subgraph, cela convertira tous les signaux de curation du subgraph en GRT. Cela équivaut à "déprécier" le subgraph sur le mainnet. Les GRT correspondant à votre curation seront envoyés à L2 avec le subgraph, où ils seront utilisés pour monnayer des signaux en votre nom. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Les autres curateurs peuvent choisir de retirer leur fraction de GRT ou de la transférer également à L2 pour le signal de monnayage sur le même subgraph. 
Si un propriétaire de subgraph ne transfère pas son subgraph à L2 et le déprécie manuellement via un appel de contrat, les curateurs en seront informés et pourront retirer leur curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Dès que le subgraph est transféré, puisque toute la curation est convertie en GRT, les indexeurs ne recevront plus de récompenses pour l'indexation du subgraph. Cependant, certains indexeurs 1) continueront à servir les subgraphs transférés pendant 24 heures et 2) commenceront immédiatement à indexer le subgraph sur L2. Comme ces indexeurs ont déjà indexé le subgraph, il ne devrait pas être nécessaire d'attendre la synchronisation du subgraph, et il sera possible d'interroger le subgraph L2 presque immédiatement. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Les requêtes vers le subgraph L2 devront être effectuées vers une URL différente (sur `arbitrum-gateway.thegraph.com`), mais l'URL L1 continuera à fonctionner pendant au moins 48 heures. Après cela, la passerelle L1 transmettra les requêtes à la passerelle L2 (pendant un certain temps), mais cela augmentera la latence. Il est donc recommandé de basculer toutes vos requêtes vers la nouvelle URL dès que possible. 
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choisir son portefeuille L2 -Lorsque vous avez publié votre subgraph sur le mainnet, vous avez utilisé un portefeuille connecté pour créer le subgraph, et ce portefeuille possède le NFT qui représente ce subgraph et vous permet de publier des mises à jour. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Lors du transfert du subgraph vers Arbitrum, vous pouvez choisir un autre portefeuille qui possédera ce subgraph NFT sur L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Si vous utilisez un portefeuille "normal" comme MetaMask (un Externally Owned Account ou EOA, c'est-à-dire un portefeuille qui n'est pas un smart contract), cette étape est facultative et il est recommandé de conserver la même adresse de propriétaire que dans L1.portefeuille. -Si vous utilisez un portefeuille de smart contrat, comme un multisig (par exemple un Safe), alors choisir une adresse de portefeuille L2 différente est obligatoire, car il est très probable que ce compte n'existe que sur le mainnet et vous ne pourrez pas faire de transactions sur Arbitrum en utilisant ce portefeuille. Si vous souhaitez continuer à utiliser un portefeuille de contrat intelligent ou un multisig, créez un nouveau portefeuille sur Arbitrum et utilisez son adresse comme propriétaire L2 de votre subgraph. +If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.

-**Il est très important d'utiliser une adresse de portefeuille que vous contrôlez, et qui peut effectuer des transactions sur Arbitrum. Dans le cas contraire, le subgraph sera perdu et ne pourra pas être récupéré**
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**

## Préparer le transfert : faire le pont avec quelques ETH

-Le transfert du subgraph implique l'envoi d'une transaction à travers le pont, puis l'exécution d'une autre transaction sur Arbitrum. La première transaction utilise de l'ETH sur le mainnet, et inclut de l'ETH pour payer le gaz lorsque le message est reçu sur L2. Cependant, si ce gaz est insuffisant, vous devrez réessayer la transaction et payer le gaz directement sur L2 (c'est "l'étape 3 : Confirmer le transfert" ci-dessous). Cette étape **doit être exécutée dans les 7 jours suivant le début du transfert**. De plus, la deuxième transaction ("Etape 4 : Terminer le transfert sur L2") se fera directement sur Arbitrum. Pour ces raisons, vous aurez besoin de quelques ETH sur un portefeuille Arbitrum. Si vous utilisez un compte multisig ou smart contract, l'ETH devra être dans le portefeuille régulier (EOA) que vous utilisez pour exécuter les transactions, et non sur le portefeuille multisig lui-même.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2.
However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.

Vous pouvez acheter de l'ETH sur certains échanges et le retirer directement sur Arbitrum, ou vous pouvez utiliser le pont Arbitrum pour envoyer de l'ETH d'un portefeuille du mainnet vers L2 : [bridge.arbitrum.io](http://bridge.arbitrum.io). Étant donné que les frais de gaz sur Arbitrum sont moins élevés, vous ne devriez avoir besoin que d'une petite quantité. Il est recommandé de commencer par un seuil bas (par exemple 0,01 ETH) pour que votre transaction soit approuvée.
-## Trouver l'outil de transfert de subgraph
+## Finding the Subgraph Transfer Tool

-Vous pouvez trouver l'outil de transfert L2 lorsque vous consultez la page de votre subgraph dans le Subgraph Studio :
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

![outil de transfert](/img/L2-transfer-tool1.png)

-Elle est également disponible sur Explorer si vous êtes connecté au portefeuille qui possède un subgraph et sur la page de ce subgraph sur Explorer :
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

![Transfert vers L2](/img/transferToL2.png)

@@ -60,19 +60,19 @@ En cliquant sur le bouton Transférer vers L2, vous ouvrirez l'outil de transfer

## Étape 1 : Démarrer le transfert

-Avant de commencer le transfert, vous devez décider quelle adresse sera propriétaire du subgraph sur L2 (voir "Choisir votre portefeuille L2" ci-dessus), et il est fortement recommandé d'avoir quelques ETH pour le gaz déjà bridgé sur Arbitrum (voir "Préparer le transfert : brider quelques ETH" ci-dessus).
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-Veuillez également noter que le transfert du subgraph nécessite d'avoir un montant de signal non nul sur le subgraph avec le même compte qui possède le subgraph ; si vous n'avez pas signalé sur le subgraph, vous devrez ajouter un peu de curation (ajouter un petit montant comme 1 GRT suffirait).
+Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Après avoir ouvert l'outil de transfert, vous pourrez saisir l'adresse du portefeuille L2 dans le champ "Adresse du portefeuille destinataire" - **assurez-vous que vous avez saisi la bonne adresse ici**. En cliquant sur Transférer le subgraph, vous serez invité à exécuter la transaction sur votre portefeuille (notez qu'une certaine valeur ETH est incluse pour payer le gaz L2) ; cela lancera le transfert et dépréciera votre subgraph L1 (voir "Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête" ci-dessus pour plus de détails sur ce qui se passe en coulisses). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Si vous exécutez cette étape, **assurez-vous de continuer jusqu'à terminer l'étape 3 en moins de 7 jours, sinon le subgraph et votre signal GRT seront perdus.** Cela est dû au fonctionnement de la messagerie L1-L2 sur Arbitrum : les messages qui sont envoyés via le pont sont des « tickets réessayables » qui doivent être exécutés dans les 7 jours, et l'exécution initiale peut nécessiter une nouvelle tentative s'il y a des pics dans le prix du gaz sur Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Démarrer le transfert vers la L2](/img/startTransferL2.png) -## Étape 2 : Attendre que le subgraphe atteigne L2 +## Step 2: Waiting for the Subgraph to get to L2 -Après avoir lancé le transfert, le message qui envoie votre subgraph de L1 à L2 doit se propager à travers le pont Arbitrum. Cela prend environ 20 minutes (le pont attend que le bloc du réseau principal contenant la transaction soit "sûr" face aux potentielles réorganisations de la chaîne). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Une fois ce temps d'attente terminé, le réseau Arbitrum tentera d'exécuter automatiquement le transfert sur les contrats L2. @@ -80,7 +80,7 @@ Une fois ce temps d'attente terminé, le réseau Arbitrum tentera d'exécuter au ## Étape 3 : Confirmer le transfert -Dans la plupart des cas, cette étape s'exécutera automatiquement car le gaz L2 inclus dans l'étape 1 devrait être suffisant pour exécuter la transaction qui reçoit le subgraph sur les contrats Arbitrum. Cependant, dans certains cas, il est possible qu'une hausse soudaine des prix du gaz sur Arbitrum entraîne l'échec de cette exécution automatique. Dans ce cas, le "ticket" qui envoie votre subgraphe vers L2 sera en attente et nécessitera une nouvelle tentative dans les 7 jours. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui contient de l'ETH sur Arbitrum, changer le réseau de votre portefeuille vers Arbitrum, et cliquer sur "Confirmer le transfert" pour retenter la transaction. @@ -88,33 +88,33 @@ Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui ## Étape 4 : Terminer le transfert sur L2 -À ce stade, votre subgraph et vos GRT ont été reçus sur Arbitrum, mais le subgraph n'est pas encore publié. Vous devrez vous connecter à l'aide du portefeuille L2 que vous avez choisi comme portefeuille de réception, basculer votre réseau de portefeuille sur Arbitrum et cliquer sur « Publier le subgraph.» +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publier le subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Attendez que le subgraph soit publié](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Cela permettra de publier le subgraph afin que les indexeurs opérant sur Arbitrum puissent commencer à le servir. Il va également modifier le signal de curation en utilisant les GRT qui ont été transférés de L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Étape 5 : Mise à jour de l'URL de la requête -Votre subgraph a été transféré avec succès vers Arbitrum ! Pour interroger le subgraph, la nouvelle URL sera : +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]

-Notez que l'ID du subgraph sur Arbitrum sera différent de celui que vous aviez sur le mainnet, mais vous pouvez toujours le trouver sur Explorer ou Studio. Comme mentionné ci-dessus (voir "Comprendre ce qui se passe avec le signal, votre subgraph L1 et les URL de requête"), l'ancienne URL L1 sera prise en charge pendant une courte période, mais vous devez basculer vos requêtes vers la nouvelle adresse dès que le subgraph aura été synchronisé. sur L2.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## Comment transférer votre curation vers Arbitrum (L2)

-## Comprendre ce qui arrive à la curation lors des transferts de subgraphs vers L2
+## Understanding what happens to curation on Subgraph transfers to L2

-Lorsque le propriétaire d'un subgraph transfère un subgraph vers Arbitrum, tout le signal du subgraph est converti en GRT en même temps. Cela s'applique au signal "auto-migré", c'est-à-dire au signal qui n'est pas spécifique à une version de subgraph ou à un déploiement, mais qui suit la dernière version d'un subgraph.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-Cette conversion du signal en GRT est identique à ce qui se produirait si le propriétaire du subgraph dépréciait le subgraph en L1. Lorsque le subgraph est déprécié ou transféré, tout le signal de curation est "brûlé" simultanément (en utilisant la courbe de liaison de curation) et le GRT résultant est détenu par le contrat intelligent GNS (c'est-à-dire le contrat qui gère les mises à niveau des subgraphs et le signal auto-migré). Chaque Curateur de ce subgraph a donc droit à ce GRT de manière proportionnelle à la quantité de parts qu'il détenait pour le subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (i.e., the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-Une fraction de ces GRT correspondant au propriétaire du subgraph est envoyée à L2 avec le subgraph.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-À ce stade, le GRT organisé n'accumulera plus de frais de requête, les conservateurs peuvent donc choisir de retirer leur GRT ou de le transférer vers le même subgraph sur L2, où il pourra être utilisé pour créer un nouveau signal de curation. Il n'y a pas d'urgence à le faire car le GRT peut être utile indéfiniment et chacun reçoit un montant proportionnel à ses actions, quel que soit le moment où il le fait.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
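The pro-rata claim described above can be sketched in a few lines of Python. This is illustrative only: the curator names and GRT amounts are made up, and the real accounting happens on-chain in the GNS contract.

```python
def curator_claims(total_grt: float, shares: dict[str, float]) -> dict[str, float]:
    """Split GRT held by the GNS contract pro rata to curation shares.

    Hypothetical sketch: each Curator's claim is proportional to the
    shares they held for the Subgraph when the signal was burned.
    """
    total_shares = sum(shares.values())
    return {curator: total_grt * owned / total_shares for curator, owned in shares.items()}
```

For example, a curator holding 3 of 4 outstanding shares would be entitled to 75% of the burned GRT, regardless of when they withdraw or transfer it.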
## Choisir son portefeuille L2

@@ -130,9 +130,9 @@ Si vous utilisez un portefeuille de contrat intelligent, comme un multisig (par

Avant de commencer le transfert, vous devez décider quelle adresse détiendra la curation sur L2 (voir "Choisir votre portefeuille L2" ci-dessus), et il est recommandé d'avoir des ETH pour le gaz déjà pontés sur Arbitrum au cas où vous auriez besoin de réessayer l'exécution du message sur L2. Vous pouvez acheter de l'ETH sur certaines bourses et le retirer directement sur Arbitrum, ou vous pouvez utiliser le pont Arbitrum pour envoyer de l'ETH depuis un portefeuille du mainnet vers L2 : [bridge.arbitrum.io](http://bridge.arbitrum.io) - étant donné que les frais de gaz sur Arbitrum sont si bas, vous ne devriez avoir besoin que d'un petit montant, par ex. 0,01 ETH sera probablement plus que suffisant.

-Si un subgraph que vous organisez a été transféré vers L2, vous verrez un message sur l'Explorateur vous indiquant que vous organisez un subgraph transféré.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.

-En consultant la page du subgraph, vous pouvez choisir de retirer ou de transférer la curation. En cliquant sur "Transférer le signal vers Arbitrum", vous ouvrirez l'outil de transfert.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

![Signal de transfert](/img/transferSignalL2TransferTools.png)

@@ -162,4 +162,4 @@ Si c'est le cas, vous devrez vous connecter en utilisant un portefeuille L2 qui

## Retrait de la curation sur L1

-Si vous préférez ne pas envoyer votre GRT vers L2, ou si vous préférez combler le GRT manuellement, vous pouvez retirer votre GRT organisé sur L1. Sur la bannière de la page du subgraph, choisissez « Retirer le signal » et confirmez la transaction ; le GRT sera envoyé à votre adresse de conservateur.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/fr/archived/sunrise.mdx b/website/src/pages/fr/archived/sunrise.mdx index 575d138c0f55..dc20e31aee77 100644 --- a/website/src/pages/fr/archived/sunrise.mdx +++ b/website/src/pages/fr/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.

-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).

## À propos de l'indexeur de mise à niveau

> The upgrade Indexer is currently active.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

### What does the upgrade Indexer do?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Pourquoi Edge & Node exécutent-ils l'indexeur de mise à niveau ? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### Que signifie la mise à niveau de l'indexeur pour les indexeurs existants ? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. 
-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ L'indexeur de mise à niveau active les chaînes sur le réseau qui n'étaient a The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/fr/global.json b/website/src/pages/fr/global.json index 71ccdec34af5..42719abe3b7b 100644 --- a/website/src/pages/fr/global.json +++ b/website/src/pages/fr/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Paramètres de requête", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Exemple" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/fr/index.json b/website/src/pages/fr/index.json index 48c1a676da74..ee19877c78e6 100644 --- a/website/src/pages/fr/index.json +++ b/website/src/pages/fr/index.json @@ -7,7 +7,7 @@ "cta2": "Construisez votre premier subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Réseaux pris en charge", + "details": "Network Details", + "services": "Services", + "type": "Type", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Docs", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Facturation", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "Qu'est-ce que la délégation ?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { diff --git a/website/src/pages/fr/indexing/_meta-titles.json b/website/src/pages/fr/indexing/_meta-titles.json index 42f4de188fd4..29c95ac126cd 100644 --- a/website/src/pages/fr/indexing/_meta-titles.json +++ b/website/src/pages/fr/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Outillage de l'indexeur" } diff --git a/website/src/pages/fr/indexing/chain-integration-overview.mdx b/website/src/pages/fr/indexing/chain-integration-overview.mdx index 4bbb83bdc4a9..48787263c1af 100644 --- a/website/src/pages/fr/indexing/chain-integration-overview.mdx +++ b/website/src/pages/fr/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Ce processus est lié au service de données Subgraph, applicable uniquement aux ### 2. Que se passe-t-il si la prise en charge de Firehose et Substreams intervient après que le réseau est pris en charge sur le mainnet ? -Cela n’aurait un impact que sur la prise en charge du protocole pour l’indexation des récompenses sur les subgraphs alimentés par Substreams. La nouvelle implémentation de Firehose nécessiterait des tests sur testnet, en suivant la méthodologie décrite pour l'étape 2 de ce GIP. De même, en supposant que l'implémentation soit performante et fiable, un PR sur la [Matrice de support des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) serait requis ( Fonctionnalité de sous-graphe « Sous-flux de sources de données »), ainsi qu'un nouveau GIP pour la prise en charge du protocole pour l'indexation des récompenses. N'importe qui peut créer le PR et le GIP ; la Fondation aiderait à obtenir l'approbation du Conseil. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. Combien de temps faudra-t-il pour parvenir à la prise en charge complète du protocole ? diff --git a/website/src/pages/fr/indexing/new-chain-integration.mdx b/website/src/pages/fr/indexing/new-chain-integration.mdx index b5b6fa8ccd73..20c9e5710b6a 100644 --- a/website/src/pages/fr/indexing/new-chain-integration.mdx +++ b/website/src/pages/fr/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Intégration d'une Nouvelle Chaîne --- -Les chaînes peuvent apporter le support des subgraphs à leur écosystème en démarrant une nouvelle intégration `graph-node`. Les subgraphs sont un outil d'indexation puissant qui ouvre un monde de possibilités pour les développeurs. Graph Node indexe déjà les données des chaînes listées ici. Si vous êtes intéressé par une nouvelle intégration, il existe 2 stratégies d'intégration : +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose** : toutes les solutions d'intégration Firehose incluent Substreams, un moteur de streaming à grande échelle basé sur Firehose avec prise en charge native de `graph-node`, permettant des transformations parallélisées. 
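As a rough illustration of the first strategy, an EVM JSON-RPC integration leans on standard Ethereum methods, some of them sent as batch requests. The sketch below builds a hypothetical JSON-RPC 2.0 batch body for `eth_getTransactionReceipt`; it is not graph-node code, and the transaction hashes and request ids are made up.

```python
import json

def batch_receipt_request(tx_hashes: list[str]) -> str:
    """Build a JSON-RPC 2.0 batch body with one eth_getTransactionReceipt
    call per transaction hash (illustrative only; ids are arbitrary)."""
    batch = [
        {"jsonrpc": "2.0", "id": i, "method": "eth_getTransactionReceipt", "params": [h]}
        for i, h in enumerate(tx_hashes)
    ]
    return json.dumps(batch)
```

Batching like this is what lets one HTTP round trip fetch the receipts for every transaction in a block.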
@@ -25,7 +25,7 @@ Afin que Graph Node puisse ingérer des données provenant d'une chaîne EVM, le - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(traçage limité et optionnellement requis pour Graph Node)* +- `trace_filter` _(traçage limité et optionnellement requis pour Graph Node)_ ### 2. Intégration Firehose @@ -47,15 +47,15 @@ Pour les chaînes EVM, il existe un niveau de données plus approfondi qui peut ## Considérations sur EVM - Différence entre JSON-RPC et Firehose -Bien que le JSON-RPC et le Firehose soient tous deux adaptés aux subgraphs, un Firehose est toujours nécessaire pour les développeurs qui souhaitent construire avec [Substreams](https://substreams.streamingfast.io). La prise en charge de Substreams permet aux développeurs de construire des [subgraphs alimentés par Substreams](/subgraphs/cookbook/substreams-powered-subgraphs/) pour la nouvelle chaîne, et a le potentiel d'améliorer les performances de vos subgraphs. De plus, Firehose - en tant que remplacement direct de la couche d'extraction JSON-RPC de `graph-node` - réduit de 90% le nombre d'appels RPC requis pour l'indexation générale. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- Tous ces appels et allers-retours `getLogs` sont remplacés par un seul flux arrivant au cœur de `graph-node` ; un modèle de bloc unique pour tous les subgraphs qu'il traite. 
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.

-> NOTEZ: une intégration basée sur Firehose pour les chaînes EVM nécessitera toujours que les indexeurs exécutent le nœud RPC d'archivage de la chaîne pour indexer correctement les subgraphs. Cela est dû à l'incapacité de Firehose à fournir un état de contrat intelligent généralement accessible par la méthode RPC `eth_calls`. (Il convient de rappeler que les `eth_call` ne sont pas une bonne pratique pour les développeurs)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_call` is not a good practice for developers.)

## Configuration Graph Node

-La configuration de Graph Node est aussi simple que la préparation de votre environnement local. Une fois votre environnement local défini, vous pouvez tester l'intégration en déployant localement un subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
Suivez le [Guide pratique](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) et exécutez `substreams codegen subgraph` pour expérimenter les outils codegen par vous-même.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/fr/indexing/overview.mdx b/website/src/pages/fr/indexing/overview.mdx
index aedc3415a442..be87dbd79d9a 100644
--- a/website/src/pages/fr/indexing/overview.mdx
+++ b/website/src/pages/fr/indexing/overview.mdx
@@ -7,41 +7,41 @@ Les indexeurs sont des opérateurs de nœuds dans The Graph Network qui mettent

Le GRT intégré au protocole est soumis à une période de décongélation et peut être réduit si les indexeurs sont malveillants et fournissent des données incorrectes aux applications ou s'ils indexent de manière incorrecte. Les indexeurs gagnent également des récompenses pour la participation déléguée des délégués, afin de contribuer au réseau.

-Les indexeurs sélectionnent les subgraphs à indexer en fonction du signal de curation du subgraph, où les curateurs misent du GRT afin d'indiquer quels subgraphs sont de haute qualité et doivent être priorisés. Les consommateurs (par exemple les applications) peuvent également définir les paramètres pour lesquels les indexeurs traitent les requêtes pour leurs subgraphs et définir les préférences pour la tarification des frais de requête.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.

## FAQ

-### What is the minimum stake required to be an Indexer on the network?
+### Quel est le staking minimal requis pour être Indexeur sur le réseau ?

-The minimum stake for an Indexer is currently set to 100K GRT.
+Le staking minimal pour un Indexeur est actuellement fixé à 100 000 GRT.

-### What are the revenue streams for an Indexer?
+### Quelles sont les sources de revenus d'un Indexeur ?

-**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
+**Rabais de frais de requête** - Paiements pour servir les requêtes sur le réseau. Ces paiements sont effectués par l'intermédiaire de canaux d'état entre un Indexeur et une passerelle. Chaque demande de requête provenant d'une passerelle contient un paiement et la réponse correspondante est une preuve de la validité du résultat de la requête.

-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.

-### How are indexing rewards distributed?
+### Comment les récompenses d'indexation sont-elles distribuées ?

-Indexing rewards come from protocol inflation which is set to 3% annual issuance.
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +De nombreux outils ont été créés par la communauté pour calculer les récompenses ; vous en trouverez une collection organisée dans la [Collection des guides de la communauté](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Vous pouvez également trouver une liste actualisée d'outils dans les canaux #Delegators et #Indexers sur le [serveur Discord](https://discord.gg/graphprotocol). Nous présentons ici un lien vers un [optimiseur d'allocation recommandé](https://github.com/graphprotocol/allocation-optimizer) intégré à la pile logicielle de l'Indexeur. -### What is a proof of indexing (POI)? +### Qu'est-ce qu'une preuve d'indexation (POI) ? 
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.

-### When are indexing rewards distributed?
+### Quand les récompenses d'indexation sont-elles distribuées ?

-Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h).
+Les allocations accumulent continuellement des récompenses tant qu'elles sont actives et allouées dans un délai de 28 époques. Les récompenses sont collectées par les Indexeurs et distribuées lorsque leurs allocations sont fermées. Cela se fait soit manuellement, lorsque l'Indexeur veut forcer la fermeture, soit après 28 époques, lorsqu'un Déléguateur peut fermer l'allocation pour l'Indexeur, mais cette fermeture n'entraîne pas de récompenses. 28 époques est la durée de vie maximale d'une allocation (actuellement, une époque dure environ 24 heures).

-### Can pending indexing rewards be monitored?
+### Les récompenses d'indexation en attente peuvent-elles être surveillées ? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Le contrat RewardsManager dispose d'une fonction en lecture seule [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) qui peut être utilisée pour vérifier les récompenses en attente pour une allocation spécifique. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +De nombreux tableaux de bord élaborés par la communauté comprennent des valeurs de récompenses en attente et il est facile de les vérifier manuellement en suivant les étapes suivantes : -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Utilisez Etherscan pour appeler `getRewards()` : -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. 
+- Naviguez vers [l'Interface Etherscan pour le contrat de récompenses](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)
+- Pour appeler `getRewards()` :
+ - Développez la liste déroulante **9. getRewards**.
+ - Saisissez l'**allocationID** dans le champ de saisie.
+ - Cliquez sur le bouton **Query**.

-### What are disputes and where can I view them?
+### Que sont les litiges et où puis-je les consulter ?

-Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes.
+Les requêtes et les allocations des Indexeurs peuvent toutes deux être contestées sur The Graph pendant la période de contestation. La période de contestation varie en fonction du type de contestation. Les requêtes/attestations ont une fenêtre de contestation de 7 époques, tandis que les attributions ont une fenêtre de 56 époques. Une fois ces périodes écoulées, il n'est plus possible d'ouvrir un litige contre une allocation ou une requête. Lorsqu'un litige est ouvert, un dépôt d'un minimum de 10 000 GRT est exigé par les Fisherman, qui sera bloqué jusqu'à ce que le litige soit finalisé et qu'une résolution ait été donnée. Les Fisherman sont tous les participants au réseau qui ouvrent des litiges.

-Disputes have **three** possible outcomes, so does the deposit of the Fishermen.
+Les litiges ont **trois** issues possibles, tout comme le dépôt des Fishermen.
-- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed.
-- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed.
-- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT.
+- Si le litige est rejeté, les GRT déposés par le Fisherman seront brûlés et l’Indexeur accusé n’est pas « slashed » (aucune pénalité).
+- Si le litige se solde par un match nul, la caution du Fisherman sera restituée et l’Indexeur mis en cause n’est pas pénalisé.
+- Si le litige est accepté, les GRT déposés par le Fisherman lui seront restitués, l’Indexeur mis en cause sera pénalisé (slashed) et le Fisherman recevra 50 % des GRT ainsi confisqués.

-Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab.
+Les litiges peuvent être consultés dans l'interface utilisateur sur la page de profil d'un Indexeur sous l'onglet `Disputes`.

-### What are query fee rebates and when are they distributed?
+### Que sont les query fee rebates et quand sont-ils distribués ?

-Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect.
+Les frais de requête sont collectés par la passerelle et distribués aux Indexeurs selon la fonction de ristourne exponentielle (exponential rebate function, voir GIP à ce sujet [ici](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Cette fonction est proposée comme un moyen de garantir que les Indexeurs obtiennent le meilleur résultat en répondant fidèlement aux requêtes. Elle incite les Indexeurs à allouer un montant élevé de staking (qui peut être réduit en cas d'erreur lors du service d'une requête) par rapport au montant des frais de requête qu'ils peuvent percevoir. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Une fois l’allocation clôturée, les ristournes peuvent être réclamées par l’Indexeur. Une fois réclamées, ces ristournes sur les frais de requête sont partagées entre l’Indexeur et ses Délégateurs, conformément au query fee cut et à la fonction de ristourne exponentielle. -### What is query fee cut and indexing reward cut? +### Que sont les query fee cut et l’indexing reward cut ? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +Les valeurs `queryFeeCut` et `indexingRewardCut` sont des paramètres de délégation que l'Indexeur peut définir avec les cooldownBlocks pour contrôler la distribution des GRT entre l'Indexeur et ses Déléguateurs. Voir les dernières étapes de [Staking dans le Protocol](/indexing/overview/#stake-in-the-protocol) pour les instructions sur la définition des paramètres de délégation. 
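To make the effect of these delegation parameters concrete, here is a minimal sketch of the split. It is illustrative only: percentages are used for readability, and the values and units actually stored on-chain may differ.

```python
def split_rewards(total_grt: float, indexer_cut_percent: float) -> tuple[float, float]:
    """Split a reward pool between an Indexer and their Delegators.

    `indexer_cut_percent` plays the role of queryFeeCut or
    indexingRewardCut: the share (in percent) kept by the Indexer.
    """
    indexer_share = total_grt * indexer_cut_percent / 100
    delegator_share = total_grt - indexer_share
    return indexer_share, delegator_share

# With queryFeeCut = 95%, a 1,000 GRT rebate pool splits 950 / 50.
print(split_rewards(1_000, 95))  # (950.0, 50.0)
```

The same function applies to indexing rewards: only the cut parameter changes.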
-- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - le pourcentage des remises sur les frais de requête qui sera distribué à l'Indexeur. Si cette valeur est fixée à 95 %, l'indexeur recevra 95 % des frais de requête perçus lors de la clôture d'une allocation, les 5 % restants revenant aux Déléguateurs. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** - le pourcentage des récompenses d'indexation qui sera distribué à l'Indexeur. Si cette valeur est fixée à 95 %, l'Indexeur recevra 95 % des récompenses d'indexation lorsqu'une allocation est clôturée et les Déléguateurs se partageront les 5 % restants. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. 
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. -### What are the hardware requirements? +### Quelles sont les exigences en matière de matériel ? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. 
-- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. +- **Standard** - Configuration par défaut, c'est ce qui est utilisé dans les manifestes de déploiement de l'exemple k8s/terraform. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Configuration | Postgres
(CPUs) | Postgres
(mémoire en Go) | Postgres
(disque en To) | VMs
(CPUs) | VMs
(mémoire en Go) |
+| ------------- | :------------------: | :---------------------------: | :--------------------------: | :-------------: | :----------------------: |
+| Petit | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Moyen | 16 | 64 | 2 | 32 | 64 |
+| Grand | 72 | 468 | 3.5 | 48 | 184 |

-### What are some basic security precautions an Indexer should take?
+### Quelles sont les précautions de sécurité de base qu'un Indexeur doit prendre ?

-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Portefeuille de l'opérateur** - La mise en place d'un portefeuille de l'opérateur est une précaution importante car elle permet à un Indexeur de maintenir une séparation entre les clés qui contrôlent le staking et celles qui contrôlent les opérations quotidiennes. Voir [Staking dans le Protocol](/indexing/overview/#stake-in-the-protocol) pour les instructions.

-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** - Seul le service de l'Indexeur doit être exposé publiquement et une attention particulière doit être portée au verrouillage des ports d'administration et de l'accès à la base de données : l'endpoint JSON-RPC de Graph Node (port par défaut : 8030), l'endpoint de l'API de gestion de l'Indexeur (port par défaut : 18000), et l'endpoint de la base de données Postgres (port par défaut : 5432) ne doivent pas être exposés.
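As a quick sanity check on the firewall rules above, the admin ports can be probed from a machine outside the firewall. This is a rough sketch; the target address `203.0.113.10` is a placeholder from the TEST-NET range and should be replaced with the Indexer's public address.

```python
import socket

# Admin endpoints that should NOT be reachable from outside:
# Graph Node JSON-RPC (8030), Indexer management API (18000), Postgres (5432).
ADMIN_PORTS = (8030, 18000, 5432)

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, network unreachable, or timed out
        return False

# Run from OUTSIDE the firewall; every admin port should report "closed".
for port in ADMIN_PORTS:
    state = "EXPOSED" if is_port_open("203.0.113.10", port) else "closed"
    print(f"port {port}: {state}")
```

A port reported as `EXPOSED` means the firewall rule for it is missing or misconfigured.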
## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. 
This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Service de l'Indexeur** - Gère toutes les communications externes requises avec le réseau. Il partage les modèles de coûts et les états d'indexation, transmet les requêtes des passerelles à un Graph Node et gère les paiements des requêtes via des canaux d'état avec la passerelle. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. 
-- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
+- **Serveur de métriques Prometheus** - Les composants Graph Node et Indexeur enregistrent leurs métriques sur le serveur de métriques.

-Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.
+Remarque : pour permettre une mise à l'échelle souple, il est recommandé de séparer les préoccupations de requête et d'indexation entre différents ensembles de nœuds : les nœuds de requête et les nœuds d'indexation.

-### Ports overview
+### Vue d'ensemble des ports

-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **Important** : Attention à ne pas exposer les ports publiquement - les **ports d'administration** doivent être verrouillés. Cela inclut les endpoints JSON-RPC de Graph Node et les endpoints de gestion de l'Indexeur détaillés ci-dessous.

#### Nœud de The Graph

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Objectif | Routes | Argument CLI | Variable d'environnement |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------------- |
+| 8000 | GraphQL HTTP server<br />
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | \--admin-port | - |
+| 8030 | API du statut de l'indexation des Subgraphs | /graphql | \--index-node-port | - |
+| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - |

-#### Indexer Service
+#### Service d'Indexeur

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Objectif | Routes | Argument CLI | Variable d'environnement |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Métriques Prometheus | /metrics | \--metrics-port | - |

#### Indexer Agent

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| Port | Objectif | Routes | Argument CLI | Variable d'environnement |
+| ---- | ---------------------------- | ------ | -------------------------- | ----------------------------------------- |
+| 8000 | API de gestion des Indexeurs | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |

-### Setup server infrastructure using Terraform on Google Cloud
+### Mise en place d'une infrastructure de serveurs à l'aide de Terraform sur Google Cloud

-> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba.
+> Remarque : les indexeurs peuvent également utiliser AWS, Microsoft Azure ou Alibaba.

-#### Install prerequisites
+#### Installer les prérequis

-- Google Cloud SDK
-- Kubectl command line tool
+- SDK Google Cloud
+- Outil en ligne de commande Kubectl
- Terraform

-#### Create a Google Cloud Project
+#### Créer un projet Google Cloud

-- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer).
+- Clonez ou naviguez vers la [repo de l'Indexeur](https://github.com/graphprotocol/indexer).

-- Navigate to the `./terraform` directory, this is where all commands should be executed.
+- Naviguez jusqu'au répertoire `./terraform`, c'est là que toutes les commandes doivent être exécutées.

```sh
cd terraform
```

-- Authenticate with Google Cloud and create a new project.
+- Authentifiez-vous auprès de Google Cloud et créez un nouveau projet.
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Utilisez la page de facturation de la Google Cloud Console pour activer la facturation du nouveau projet. -- Create a Google Cloud configuration. +- Créez une configuration Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Activer les API Google Cloud requises. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Créer un compte de service. ```sh svc_name= @@ -225,7 +225,7 @@ gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Obtenir l'email du compte de service à partir de la liste svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Activer le peering entre la base de données et le cluster Kubernetes qui sera créé à l'étape suivante. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). 
+- Créer un fichier de configuration minimal pour terraform (mettre à jour si nécessaire). ```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Utiliser Terraform pour créer une infrastructure -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Avant de lancer une commande, lisez [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) et créez un fichier `terraform.tfvars` dans ce répertoire (ou modifiez celui que nous avons créé à la dernière étape). Pour chaque variable pour laquelle vous voulez remplacer la valeur par défaut, ou pour laquelle vous avez besoin de définir une valeur, entrez un paramètre dans `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Exécutez les commandes suivantes pour créer l'infrastructure. ```sh -# Install required plugins +# Installer les plugins nécessaires terraform init -# View plan for resources to be created +# Visualiser le plan des ressources à créer terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Créer les ressources (cela peut prendre jusqu'à 30 minutes) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Téléchargez les informations d'identification du nouveau cluster dans `~/.kube/config` et définissez-le comme contexte par défaut. 
```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### Création des composants Kubernetes pour l'Indexeur -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Copiez le répertoire `k8s/overlays` dans un nouveau répertoire `$dir`, et ajustez l'entrée `bases` dans `$dir/kustomization.yaml` pour qu'elle pointe vers le répertoire `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Lisez tous les fichiers de `$dir` et ajustez toutes les valeurs comme indiqué dans les commentaires. -Deploy all resources with `kubectl apply -k $dir`. +Déployez toutes les ressources avec `kubectl apply -k $dir`. ### Nœud de The Graph -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain; the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
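Une fois un Subgraph déployé et synchronisé, ses données peuvent être interrogées via l'endpoint GraphQL de Graph Node. Esquisse indicative — le nom de Subgraph `user/example` et l'entité `items` sont des exemples hypothétiques, à remplacer par votre propre Subgraph et votre propre schéma :

```sh
# Corps d'une requête GraphQL pour une entité hypothétique ("items" est un exemple).
query='{"query": "{ items(first: 5) { id } }"}'
echo "$query"
# Envoi vers un Graph Node local — l'endpoint GraphQL HTTP est servi
# sur le port 8000 par défaut :
# curl -X POST -H 'Content-Type: application/json' -d "$query" \
#   http://localhost:8000/subgraphs/name/user/example
```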
-#### Getting started from source +#### Démarrer à partir des sources -#### Install prerequisites +#### Installer les prérequis - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Exigences supplémentaires pour les utilisateurs d'Ubuntu** - Pour faire fonctionner un Graph Node sur Ubuntu, quelques packages supplémentaires peuvent être nécessaires. ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### Configuration -1. Start a PostgreSQL database server +1. Démarrez un serveur de base de données PostgreSQL ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Clonez le dépôt [Graph Node](https://github.com/graphprotocol/graph-node) et compilez les sources en lançant `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Maintenant que toutes les dépendances sont installées, démarrez Graph Node : ```sh cargo run -p graph-node --release -- \ @@ -334,28 +334,28 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Commencer à utiliser Docker -#### Prerequisites +#### Prérequis -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Nœud Ethereum** - Par défaut, l'installation de docker compose utilisera mainnet : [http://host.docker.internal:8545](http://host.docker.internal:8545) pour se connecter au nœud Ethereum sur votre machine hôte.
Vous pouvez remplacer ce nom de réseau et cette URL en mettant à jour `docker-compose.yaml`. -#### Setup +#### Configuration -1. Clone Graph Node and navigate to the Docker directory: +1. Clonez Graph Node et accédez au répertoire Docker : ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. Pour les utilisateurs Linux uniquement - Utilisez l'adresse IP de l'hôte au lieu de `host.docker.internal` dans le `docker-compose.yaml` en utilisant le script inclus : ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Démarrez un Graph Node local qui se connectera à votre endpoint Ethereum : ```sh docker-compose up @@ -363,25 +363,25 @@ docker-compose up ### Indexer components -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +Pour participer avec succès au réseau, il faut une surveillance et une interaction presque constantes. C'est pourquoi nous avons créé une suite d'applications TypeScript pour faciliter la participation d'un Indexeur au réseau. Il y a trois Indexer components : -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. +- **Indexer CLI** - L'interface en ligne de commande pour la gestion de l'agent Indexer. Elle permet aux Indexeurs de gérer les modèles de coûts, les allocations manuelles, la file d'attente des actions et les règles d'indexation. -#### Getting started +#### Pour commencer -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +L'Indexer agent et l'Indexer service doivent être situés au même endroit que votre infrastructure Graph Node. 
Il existe de nombreuses façons de mettre en place des environnements d'exécution virtuels pour vos Indexer components; nous expliquerons ici comment les exécuter sur du bare metal en utilisant les packages NPM ou les sources, ou via Kubernetes et Docker sur Google Cloud Kubernetes Engine. Si ces exemples de configuration ne s'appliquent pas à votre infrastructure, il y aura probablement un guide communautaire à consulter ; venez nous dire bonjour sur [Discord](https://discord.gg/graphprotocol) ! N'oubliez pas de [staker sur le protocole](/indexing/overview/#stake-in-the-protocol) avant de démarrer vos Indexer components ! -#### From NPM packages +#### À partir des packages NPM ```sh npm install -g @graphprotocol/indexer-service npm install -g @graphprotocol/indexer-agent -# Indexer CLI is a plugin for Graph CLI, so both need to be installed: +# Indexer CLI est un plugin pour Graph CLI, les deux doivent donc être installés : npm install -g @graphprotocol/graph-cli npm install -g @graphprotocol/indexer-cli @@ -392,16 +392,16 @@ graph-indexer-service start ... graph-indexer-agent start ... # Indexer CLI -#Forward the port of your agent pod if using Kubernetes +#Transférer le port de votre pod agent si vous utilisez Kubernetes. kubectl port-forward pod/POD_ID 18000:8000 graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Depuis les sources ```sh -# From Repo root directory +# Depuis le répertoire racine du repo yarn # Indexer Service @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ...
``` -#### Using docker +#### Utilisation de Docker -- Pull images from the registry +- Extrayez les images depuis le registre ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Ou construisez les images localement à partir des sources ```sh # Indexer service @@ -442,22 +442,22 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- Exécutez les composants ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**NOTE** : Après le démarrage des conteneurs, le service Indexer doit être accessible à l'adresse [http://localhost:7600](http://localhost:7600) et l'agent Indexer doit exposer l'API de gestion de l'Indexeur à l'adresse [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### En utilisant K8s et Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Voir la section [Configuration de l'infrastructure du serveur à l'aide de Terraform sur Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) #### Usage -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`).
+> **NOTE** : Toutes les variables de configuration d'exécution peuvent être appliquées soit en tant que paramètres de la commande au démarrage, soit en utilisant des variables d'environnement au format `COMPONENT_NAME_VARIABLE_NAME` (ex. `INDEXER_AGENT_ETHEREUM`). #### Indexer agent @@ -516,56 +516,56 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +L'Indexer CLI est un plugin pour [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible dans le terminal via `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Gestion de l'Indexeur à l'aide de l'Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +L'**Indexer CLI** se connecte à l'Indexer agent, généralement par le biais d'une redirection de port, de sorte que le CLI n'a pas besoin d'être exécuté sur le même serveur ou cluster. Pour vous aider à démarrer, et pour vous donner un peu de contexte, nous allons décrire brièvement le CLI. -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Se connecter à l'API de gestion de l'Indexeur. Généralement, la connexion au serveur est ouverte via une redirection de port, afin que le CLI puisse être facilement utilisé à distance. (Exemple : `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.
+- `graph indexer rules get [options] [ ...]` - Obtenir une ou plusieurs règles d'indexation en utilisant `all` comme `` pour obtenir toutes les règles, ou `global` pour obtenir les valeurs par défaut globales. Un argument supplémentaire `--merged` peut être utilisé pour spécifier que les règles spécifiques au déploiement sont fusionnées avec la règle globale. C'est ainsi qu'elles sont appliquées dans l'Indexer agent. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - Définir une ou plusieurs règles d'indexation. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - Arrête l'indexation d'un déploiement et met sa `decisionBasis` à never, de sorte qu'il ignorera ce déploiement lorsqu'il décidera des déploiements à indexer. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — Définit la `decisionBasis` pour un déploiement à `rules`, afin que l'agent d'Indexeur utilise les règles d'indexation pour décider d'indexer ou non ce déploiement. 
-- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Récupère une ou plusieurs actions en utilisant `all` ou laisse `action-id` vide pour obtenir toutes les actions. Un argument supplémentaire `--status` peut être utilisé pour afficher toutes les actions d'un certain statut. -- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - Mettre en file d'attente une action d'allocation -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - Mettre en file d'attente une action de réallocation -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - Mettre en file d'attente une action de désallocation -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - Annule toutes les actions de la file d'attente si id n'est pas spécifié, sinon annule un tableau d'id avec un espace comme séparateur -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - Approuver l'exécution de plusieurs actions -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - Force le worker à exécuter immédiatement les actions approuvées -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Toutes les commandes qui affichent des règles dans la sortie peuvent choisir entre les formats de sortie supportés (`table`, `yaml`, et `json`) en utilisant l'argument `-output`. -#### Indexing rules +#### Règles d'indexation -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
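Pour illustrer le paragraphe ci-dessus, voici une esquisse indicative (en supposant un Indexer CLI déjà connecté à l'agent ; la syntaxe clé/valeur suit les commandes `rules` documentées plus haut) qui définit un seuil global `minStake` de 5 GRT et bascule la décision sur les règles à seuils :

```
graph indexer rules set global minStake 5 decisionBasis rules
graph indexer rules get global
```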
-Data model: +Modèle de données : ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Exemple d'utilisation d'une règle d'indexation : ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### CLI de la file d'attente des actions -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. +L'indexer-cli fournit un module `actions` pour travailler manuellement avec la file d'attente des actions. Il utilise l'**API GraphQL** hébergée par le serveur de gestion de l'Indexeur pour interagir avec la file d'attente des actions. -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +Le worker d'exécution des actions ne récupère les éléments de la file d'attente pour les exécuter que s'ils ont un `ActionStatus = approved`. Dans le chemin recommandé, les actions sont ajoutées à la file d'attente avec ActionStatus = queued ; elles doivent donc être approuvées pour être exécutées onchain. Le flux général se présente comme suit : -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions.
It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. +- Action ajoutée à la file d'attente par l'outil d'optimisation tiers ou l'utilisateur d'indexer-cli +- L'Indexeur peut utiliser `indexer-cli` pour visualiser toutes les actions en attente +- L'Indexeur (ou un autre logiciel) peut approuver ou annuler des actions dans la file d'attente en utilisant l'`indexer-cli`. Les commandes approve et cancel prennent en entrée un tableau d'identifiants d'actions. +- Le worker d'exécution interroge régulièrement la file d'attente à la recherche d'actions approuvées. Il récupère les actions `approved` de la file d'attente, tente de les exécuter, et met à jour les valeurs dans la base de données en fonction du statut de l'exécution, `success` ou `failed`. +- Si une action est réussie, le worker s'assurera qu'il y a une règle d'indexation présente qui indique à l'agent comment gérer l'allocation à l'avenir, ce qui est utile pour prendre des actions manuelles lorsque l'agent est en mode `auto` ou `oversight`. +- L'Indexeur peut surveiller la file d'attente des actions pour voir l'historique de l'exécution des actions et, si nécessaire, réapprouver et mettre à jour les éléments d'action dont l'exécution a échoué. La file d'attente des actions fournit un historique de toutes les actions mises en attente et exécutées.
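Le flux décrit ci-dessus peut se résumer, côté CLI, par la séquence suivante (esquisse indicative ; les identifiants d'actions `1` et `2` sont des exemples) :

```
# Visualiser les actions en attente d'approbation
graph indexer actions get --status queued
# Les approuver pour exécution
graph indexer actions approve 1 2
# Forcer le worker à exécuter immédiatement les actions approuvées
graph indexer actions execute approve
```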
-Data model: +Modèle de données : ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Exemple d'utilisation depuis les sources : ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Notez que les types d'action pris en charge pour la gestion de l'allocation ont des exigences différentes en matière de données d'entrée : -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - - required action params: + - paramètres d'action requis : - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` - ferme l'allocation, libérant le stake afin de le réallouer ailleurs - - required action params: + - paramètres d'action requis : - allocationID - deploymentID - - optional action params: + - paramètres d'action optionnels : - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (force l'utilisation du POI fourni même s'il ne correspond pas à ce que graph-node fournit) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - - required action params: + - paramètres d'action requis : - allocationID - deploymentID - amount - - optional action params: + - paramètres d'action optionnels : - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (force l'utilisation du POI fourni même s'il ne correspond pas à ce que graph-node fournit) -#### Cost models +#### Modèles de coûts -Cost models provide dynamic pricing for queries based on market and query attributes.
The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +Le langage Agora fournit un format flexible pour déclarer des modèles de coûts pour les requêtes. Un modèle de prix Agora est une séquence d'instructions qui s'exécutent dans l'ordre pour chaque requête de niveau supérieur dans une requête GraphQL. Pour chaque requête de niveau supérieur, la première instruction qui lui correspond détermine le prix de cette requête. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +Une déclaration est composée d'un prédicat, qui est utilisé pour faire correspondre les requêtes GraphQL, et d'une expression de coût qui, lorsqu'elle est évaluée, produit un coût en GRT décimal. Les valeurs de l'argument nommé d'une requête peuvent être capturées dans le prédicat et utilisées dans l'expression. 
Les globaux peuvent également être définis et substitués aux espaces réservés dans une expression. -Example cost model: +Exemple de modèle de coût : ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Cette instruction capture la valeur de `skip`, +# utilise une expression booléenne dans le prédicat pour faire correspondre les requêtes spécifiques qui utilisent `skip` +# et une expression de coût pour calculer le coût en fonction de la valeur `skip` et de la valeur globale SYSTEM_LOAD query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Cette valeur par défaut correspondra à n'importe quelle expression GraphQL. +# Elle utilise un Global substitué dans l'expression pour calculer le coût. default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Exemple de calcul des coûts d'une requête à l'aide du modèle ci-dessus : -| Query | Price | -| ---------------------------------------------------------------------------- | ------- | -| { pairs(skip: 5000) { id } } | 0.5 GRT | -| { tokens { symbol } } | 0.1 GRT | -| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | +| Requête | Prix | +| ------------------------------------------------------------------------------ | ------- | +| { pairs(skip: 5000) { id } } | 0.5 GRT | +| { tokens { symbol } } | 0.1 GRT | +| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Application du modèle de coût -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database.
The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Les modèles de coûts sont appliqués via l'Indexer CLI, qui les transmet à l'API de gestion de l'Indexer agent pour qu'ils soient stockés dans la base de données. L'Indexer Service les récupère ensuite et fournit les modèles de coûts aux passerelles chaque fois qu'elles les demandent. ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interaction avec le réseau -### Stake in the protocol +### Staker dans le protocole -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Les premières étapes de la participation au réseau en tant qu'Indexeur consistent à approuver le protocole, à staker des fonds et (éventuellement) à créer une adresse d'opérateur pour les interactions quotidiennes avec le protocole. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Note : Dans le cadre de ces instructions, Remix sera utilisé pour interagir avec les contrats, mais vous pouvez utiliser l'outil de votre choix ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), et [MyCrypto](https://www.mycrypto.com/account) sont d'autres outils connus). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network.
+Une fois qu'un Indexeur a staké des GRT dans le protocole, les [composants de l'Indexeur](/indexing/overview/#indexer-components) peuvent être démarrés et commencer leurs interactions avec le réseau. -#### Approve tokens +#### Approuver les jetons -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Ouvrez l'[application Remix](https://remix.ethereum.org/) dans un navigateur -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. Dans l'explorateur de fichiers, créez un fichier nommé **GraphToken.abi** avec le [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `GraphToken.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Deploy and run transactions` dans l'interface Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Sous environnement, sélectionnez `Injected Web3` et sous `Account` sélectionnez votre adresse d'Indexeur. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Définissez l'adresse du contrat GraphToken - Collez l'adresse du contrat GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) à côté de `At Address` et cliquez sur le bouton `At Address` pour l'appliquer. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Appelez la fonction `approve(spender, amount)` pour approuver le contrat de Staking.
Remplissez `spender` avec l'adresse du contrat de staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) et `amount` avec les jetons à staker (en wei). -#### Stake tokens +#### Staker les jetons -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Ouvrir l'[application Remix](https://remix.ethereum.org/) dans un navigateur -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. Dans l'explorateur de fichiers, créez un fichier nommé **Staking.abi** avec l'ABI de staking. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Avec `Staking.abi` sélectionné et ouvert dans l'éditeur, passez à la section `Deploy and run transactions` dans l'interface Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Sous environnement, sélectionnez `Injected Web3` et sous `Account` sélectionnez votre adresse d'Indexeur. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Définir l'adresse du contrat de staking - Collez l'adresse du contrat de staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) à côté de `At Address` et cliquez sur le bouton `At address` pour l'appliquer. -6. Call `stake()` to stake GRT in the protocol. +6. Appeler `stake()` pour staker les GRT dans le protocole. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks. ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Définition des paramètres de délégation -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. 
+La fonction `setDelegationParameters()` du [contrat de staking](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) est essentielle pour les Indexeurs, car elle leur permet de définir les paramètres qui définissent leurs interactions avec les Déléguateurs, en influençant le partage des récompenses et la capacité de délégation. -### How to set delegation parameters +### Comment définir les paramètres de délégation -To set the delegation parameters using Graph Explorer interface, follow these steps: +Pour définir les paramètres de délégation à l'aide de l'interface Graph Explorer, procédez comme suit : -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Naviguez jusqu'à [Graph Explorer](https://thegraph.com/explorer/). +2. Connectez votre portefeuille. Choisissez multisig (comme Gnosis Safe) et sélectionnez ensuite mainnet. Note : Vous devrez répéter ce processus pour Arbitrum One. +3. Connectez le portefeuille que vous avez en tant que signataire. +4. Accédez à la section 'Paramètres' et sélectionnez 'Paramètres de délégation'. Ces paramètres doivent être configurés de manière à obtenir une réduction effective dans la fourchette souhaitée. En saisissant des valeurs dans les champs de saisie prévus à cet effet, l'interface calculera automatiquement la réduction effective. 
Ajustez ces valeurs si nécessaire pour obtenir le pourcentage de réduction effective souhaité. +5. Soumettre la transaction au réseau. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Remarque : cette transaction devra être confirmée par les signataires du portefeuille multisig. -### The life of an allocation +### La durée de vie d'une allocation -After being created by an Indexer a healthy allocation goes through two states. +Après avoir été créée par un Indexeur, une allocation saine passe par deux états. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). 
When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Closed** - Un indexeur est libre de clôturer une allocation une fois qu'une époque s'est écoulée ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) ou son agent d'Indexeur clôturera automatiquement l'allocation après le **maxAllocationEpochs** (actuellement 28 jours). Lorsqu'une allocation est clôturée avec une preuve d'indexation (POI) valide, les récompenses d'indexation sont distribuées à l'Indexeur et à ses délégués ([en savoir plus](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/fr/indexing/supported-network-requirements.mdx b/website/src/pages/fr/indexing/supported-network-requirements.mdx index 799fd25b8136..cac54d88c086 100644 --- a/website/src/pages/fr/indexing/supported-network-requirements.mdx +++ b/website/src/pages/fr/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Exigences du réseau pris en charge --- -| Réseau | Guides | Configuration requise | Récompenses d'indexation | -| --- | --- | --- | :-: | -| Arbitrum | [Guide Baremetal ](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Guide Docker ](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | CPU 4+ coeurs
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_dernière mise à jour août 2023_ | ✅ | -| Avalanche | [Guide Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 5 Tio NVMe SSD
_dernière mise à jour août 2023_ | ✅ | -| Base | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guide GETH Baremetal ](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guide GETH Docker ](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | CPU 8+ cœurs
Debian 12/Ubuntu 22.04
16 Go RAM
>= 4.5To (NVME recommandé)
_Dernière mise à jour le 14 mai 2024_ | ✅ | -| Binance | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | CPU 8 cœurs / 16 threads
Ubuntu 22.04
>=32 Go RAM
>= 14 Tio NVMe SSD
_Dernière mise à jour le 22 juin 2024_ | ✅ | -| Celo | [Guide Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 2 Tio NVMe SSD
_Dernière mise à jour en août 2023_ | ✅ | -| Ethereum | [Guide Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Vitesse d'horloge supérieure par rapport au nombre de cœurs
Ubuntu 22.04
16 Go+ RAM
>=3 To (NVMe recommandé)
_dernière mise à jour août 2023_ | ✅ | -| Fantom | [Guide Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16 Go + RAM
>= 13 Tio SSD NVMe
_dernière mise à jour août 2023_ | ✅ | -| Gnosis | [Guide Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | CPU 6 cœurs / 12 threads
Ubuntu 22.04
16 Go+ RAM
>= 3 To SSD NVMe
_dernière mise à jour août 2023_ | ✅ | -| Linea | [Guide Baremetal ](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | CPU 4+ cœurs
Ubuntu 22.04
16 Go+ RAM
>= 1 To SSD NVMe
_dernière mise à jour le 2 avril 2024_ | ✅ | -| Optimism | [Guide Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Guide GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Guide GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16 Go + RAM
>= SSD NVMe 8 Tio
_dernière mise à jour août 2023_ | ✅ | -| Polygon | [Guide Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | CPU 16 cœurs
Ubuntu 22.04
32 Go+ RAM
>= 10 Tio NVMe SSD
_dernière mise à jour août 2023_ | ✅ | -| Scroll | [Guide Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Guide Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | CPU 4 cœurs / 8 threads
Debian 12
16 Go + RAM
>= 1 Tio NVMe SSD
_dernière mise à jour le 3 avril 2024_ | ✅ | +| Réseau | Guides | Configuration requise | Récompenses d'indexation | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------: | +| Arbitrum | [Guide Baremetal ](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Guide Docker ](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | CPU 4+ coeurs
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_dernière mise à jour août 2023_ | ✅ | +| Avalanche | [Guide Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 5 Tio NVMe SSD
_dernière mise à jour août 2023_ | ✅ | +| Base | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guide GETH Baremetal ](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guide GETH Docker ](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Guide Erigon Baremetal ](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | CPU 8 cœurs / 16 threads
Ubuntu 22.04
>=32 Go RAM
>= 14 Tio NVMe SSD
_Dernière mise à jour le 22 juin 2024_ | ✅ | +| Celo | [Guide Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16Go+ RAM
>= 2 Tio NVMe SSD
_Dernière mise à jour en août 2023_ | ✅ | +| Ethereum | [Guide Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Vitesse d'horloge supérieure par rapport au nombre de cœurs
Ubuntu 22.04
16 Go+ RAM
>=3 To (NVMe recommandé)
_dernière mise à jour août 2023_ | ✅ | +| Fantom | [Guide Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16 Go + RAM
>= 13 Tio SSD NVMe
_dernière mise à jour août 2023_ | ✅ | +| Gnosis | [Guide Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | CPU 6 cœurs / 12 threads
Ubuntu 22.04
16 Go+ RAM
>= 3 To SSD NVMe
_dernière mise à jour août 2023_ | ✅ | +| Linea | [Guide Baremetal ](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | CPU 4+ cœurs
Ubuntu 22.04
16 Go+ RAM
>= 1 To SSD NVMe
_dernière mise à jour le 2 avril 2024_ | ✅ | +| Optimism | [Guide Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Guide GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Guide GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | CPU 4 cœurs / 8 threads
Ubuntu 22.04
16 Go + RAM
>= SSD NVMe 8 Tio
_dernière mise à jour août 2023_ | ✅ | +| Polygon | [Guide Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | CPU 16 cœurs
Ubuntu 22.04
32 Go+ RAM
>= 10 Tio NVMe SSD
_dernière mise à jour août 2023_ | ✅ | +| Scroll | [Guide Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Guide Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | CPU 4 cœurs / 8 threads
Debian 12
16 Go + RAM
>= 1 Tio NVMe SSD
_dernière mise à jour le 3 avril 2024_ | ✅ | diff --git a/website/src/pages/fr/indexing/tap.mdx b/website/src/pages/fr/indexing/tap.mdx index b378f70212be..1986db8769ee 100644 --- a/website/src/pages/fr/indexing/tap.mdx +++ b/website/src/pages/fr/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Guide de migration TAP +title: GraphTally Guide --- -Découvrez le nouveau système de paiement de The Graph, le **Timeline Aggregation Protocol, TAP**. Ce système permet des microtransactions rapides et efficaces avec une confiance minimale. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Aperçu -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) est un remplacement direct du système de paiement Scalar actuellement en place. Il offre les fonctionnalités clés suivantes : +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Gère efficacement les micropaiements. - Ajoute une couche de consolidations aux transactions et aux coûts onchain. - Permet aux Indexeurs de contrôler les recettes et les paiements, garantissant ainsi le paiement des requêtes. - Il permet des passerelles décentralisées, sans confiance, et améliore les performances du service d'indexation pour les expéditeurs multiples. -## Spécificités⁠ +### Spécificités⁠ -Le TAP permet à un expéditeur d'effectuer plusieurs paiements à un destinataire, **TAP Receipts**, qui regroupe ces paiements en un seul paiement, un **Receipt Aggregate Voucher**, également connu sous le nom de **RAV**. Ce paiement regroupé peut ensuite être vérifié sur la blockchain, ce qui réduit le nombre de transactions et simplifie le processus de paiement. 
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Pour chaque requête, la passerelle vous enverra un `reçu signé` qui sera stocké dans votre base de données. Ensuite, ces requêtes seront agrégées par un `tap-agent` par le biais d'une demande. Vous recevrez ensuite un RAV. Vous pouvez mettre à jour un RAV en l'envoyant avec des reçus plus récents, ce qui générera un nouveau RAV avec une valeur plus élevée. @@ -45,28 +45,28 @@ Tant que vous exécutez `tap-agent` et `indexer-agent`, tout sera exécuté auto ### Contrats -| Contrat | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) | -| --- | --- | --- | -| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | -| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | -| Tiers de confiance (Escrow) | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | +| Contrat | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) | +| ---------------------------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Tiers de confiance (Escrow) | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Passerelle (Gateway) -| Composant | Mainnet Node et Edge (Arbitrum Mainnet) | Testnet Node et Edge (Arbitrum Mainnet) | -| ----------- | 
--------------------------------------------- | --------------------------------------------- | -| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Signataires | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Aggregateur | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| Composant | Mainnet Node et Edge (Arbitrum Mainnet) | Testnet Node et Edge (Arbitrum Mainnet) | +| -------------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signataires | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregateur | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Exigences +### Prérequis -En plus des conditions typiques pour faire fonctionner un Indexeur, vous aurez besoin d'un Endpoint `tap-escrow-subgraph` pour interroger les mises à jour de TAP. Vous pouvez utiliser The Graph Network pour interroger ou vous héberger vous-même sur votre `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. 
-- [Subgraph Graph TAP Arbitrum Sepolia (pour le testnet The Graph )](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Subgraph Graph TAP Arbitrum One (Pour le mainnet The Graph )](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note : `indexer-agent` ne gère pas actuellement l'indexation de ce subgraph comme il le fait pour le déploiement du subgraph réseau. Par conséquent, vous devez l'indexer manuellement. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Guide De Migration @@ -79,7 +79,7 @@ La version requise du logiciel peut être trouvée [ici](https://github.com/grap 1. **Agent d'indexeur** - Suivez le [même processus](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Donnez le nouvel argument `--tap-subgraph-endpoint` pour activer les nouveaux chemins de code TAP et permettre l'échange de RAVs TAP. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -99,72 +99,72 @@ La version requise du logiciel peut être trouvée [ici](https://github.com/grap Pour une configuration minimale, utilisez le modèle suivant : ```bash -# Vous devrez modifier *toutes* les valeurs ci-dessous pour qu'elles correspondent à votre configuration. +# You will have to change *all* the values below to match your setup. 
# -# Certaines des configurations ci-dessous sont des valeurs globales de graph network, que vous pouvez trouver ici : +# Some of the config below are global graph network values, which you can find here: # # -# Astuce de pro : si vous devez charger certaines valeurs de l'environnement dans cette configuration, vous -# pouvez les écraser avec des variables d'environnement. Par exemple, ce qui suit peut être remplacé -# par [PREFIX]_DATABASE_POSTGRESURL, où PREFIX peut être `INDEXER_SERVICE` ou `TAP_AGENT` : +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" [indexer] -indexer_address = "0x111111111111111111111111111111111111111111" +indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# L'URL de la base de données Postgres utilisée pour les composants de l'Indexeur. La même base de données -# qui est utilisée par `indexer-agent`. Il est prévu que `indexer-agent` crée -# les tables nécessaires. +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL vers l'endpoint de requête de votre graph-node +# URL to your graph-node's query endpoint query_url = "" -# URL vers l'endpoint d'état de votre graph-node +# URL to your graph-node's status endpoint status_url = "" [subgraphs.network] -# URL de requête pour le subgraph Graph Network. +# Query URL for the Graph Network Subgraph. 
query_url = "" -# Facultatif, déploiement à rechercher dans le `graph-node` local, s'il est indexé localement. -# L'indexation locale du subgraph est recommandée. -# REMARQUE : utilisez uniquement `query_url` ou `deployment_id` -deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# URL de requête pour le subgraph Escrow. +# Query URL for the Escrow Subgraph. query_url = "" -# Facultatif, déploiement à rechercher dans le `graph-node` local, s'il est indexé localement. -# Il est recommandé d'indexer localement le subgraph. -# REMARQUE : utilisez uniquement `query_url` ou `deployment_id` -deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] -# Le chain ID du réseau sur lequel The Graph Network s'exécute +# The chain ID of the network that the graph network is running on chain_id = 1337 -# Adresse du contrat du vérificateur de bon de réception agrégé (RAV) de TAP. -receives_verifier_address = "0x222222222222222222222222222222222222222222222" +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. 
+receipts_verifier_address = "0x2222222222222222222222222222222222222222" -############################################ -# Configurations spécifiques à tap-agent # -########################################## +######################################## +# Specific configurations to tap-agent # +######################################## [tap] -# Il s'agit du montant des frais que vous êtes prêt à risquer à un moment donné. Par exemple, -# si l'expéditeur cesse de fournir des RAV pendant suffisamment longtemps et que les frais dépassent ce -# montant, le service d'indexation cessera d'accepter les requêtes de l'expéditeur -# jusqu'à ce que les frais soient agrégés. -# REMARQUE : utilisez des chaînes de caractère pour les valeurs décimales afin d'éviter les erreurs d'arrondi -# p. ex. : +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: # max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] -# Clé-valeur de tous les expéditeurs et de leurs endpoint d'agrégation -# Celle-ci ci-dessous concerne par exemple la passerelle de testnet E&N. +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. 
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` diff --git a/website/src/pages/fr/indexing/tooling/graph-node.mdx b/website/src/pages/fr/indexing/tooling/graph-node.mdx index 6476aad5aa73..a9f18a18abb2 100644 --- a/website/src/pages/fr/indexing/tooling/graph-node.mdx +++ b/website/src/pages/fr/indexing/tooling/graph-node.mdx @@ -2,39 +2,39 @@ title: Nœud de The Graph --- -Graph Node est le composant qui indexe les subgraphs et rend les données résultantes disponibles pour interrogation via une API GraphQL. En tant que tel, il est au cœur de la pile de l’indexeur, et le bon fonctionnement de Graph Node est crucial pour exécuter un indexeur réussi. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. Ceci fournit un aperçu contextuel de Graph Node et de certaines des options les plus avancées disponibles pour les Indexeurs. Une documentation et des instructions détaillées peuvent être trouvées dans le dépôt [Graph Node ](https://github.com/graphprotocol/graph-node). ## Nœud de The Graph -[Graph Node](https://github.com/graphprotocol/graph-node) est l'implémentation de référence pour l'indexation des subgraphs sur The Graph Network, la connexion aux clients de la blockchain, l'indexation des subgraphs et la mise à disposition des données indexées pour les requêtes. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (et l'ensemble de la pile de l’indexeur) peut être exécuté sur serveur dédié (bare metal) ou dans un environnement cloud. Cette souplesse du composant central d'indexation est essentielle à la solidité du protocole The Graph. 
De même, Graph Node peut être [compilé à partir du code source](https://github.com/graphprotocol/graph-node), ou les Indexeurs peuvent utiliser l'une des [images Docker fournies](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -Le magasin principal du nœud de graph, c'est là que les données des sous-graphes sont stockées, ainsi que les métadonnées sur les subgraphs et les données réseau indépendantes des subgraphs telles que le cache de blocs et le cache eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clients réseau Pour indexer un réseau, Graph Node doit avoir accès à un client réseau via une API JSON-RPC compatible avec EVM. Cette RPC peut se connecter à un seul client ou à une configuration plus complexe qui équilibre la charge entre plusieurs clients. -Alors que certains subgraphs peuvent ne nécessiter qu'un nœud complet, d'autres peuvent avoir des caractéristiques d'indexation qui nécessitent des fonctionnalités RPC supplémentaires. En particulier, les subgraphs qui font des `eth_calls` dans le cadre de l'indexation nécessiteront un noeud d'archive qui supporte [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), et les subgraphs avec des `callHandlers`, ou des `blockHandlers` avec un filtre `call`, nécessitent le support de `trace_filter` ([voir la documentation du module trace ici](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. 
Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** : un Firehose est un service gRPC fournissant un flux de blocs ordonné, mais compatible avec les forks, développé par les principaux développeurs de The Graph pour mieux prendre en charge une indexation performante à l'échelle. Il ne s'agit pas actuellement d'une exigence de l'Indexeur, mais les Indexeurs sont encouragés à se familiariser avec la technologie, en avance sur la prise en charge complète du réseau. Pour en savoir plus sur le Firehose [ici](https://firehose.streamingfast.io/). ### Nœuds IPFS -Les métadonnées de déploiement de subgraphs sont stockées sur le réseau IPFS. The Graph Node accède principalement au noed IPFS pendant le déploiement du subgraph pour récupérer le manifeste du subgraph et tous les fichiers liés. Les indexeurs de réseau n'ont pas besoin d'héberger leur propre noed IPFS. Un noed IPFS pour le réseau est hébergé sur https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Serveur de métriques Prometheus Pour activer la surveillance et la création de rapports, Graph Node peut éventuellement enregistrer les métriques sur un serveur de métriques Prometheus. 
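As a quick sanity check that metrics export is enabled, you can scrape the `/metrics` endpoint (port 8040 by default, per the port table below) and filter the Prometheus exposition-format output for a metric of interest. The sketch below substitutes a sample payload for a live endpoint, and the metric name used is illustrative, not taken from Graph Node's actual metric set:

```sh
# Parse a Prometheus exposition-format payload and extract one gauge value.
# With a running Graph Node you would replace the sample with:
#   curl -s http://localhost:8040/metrics
# (the metric name below is a stand-in for illustration only)
sample='# HELP deployment_count Number of deployments (illustrative metric name)
# TYPE deployment_count gauge
deployment_count 3'
printf '%s\n' "$sample" | awk '$1 == "deployment_count" { print $2 }'
```

Comment lines start with `#` in the exposition format, so matching on the first field skips `HELP`/`TYPE` metadata and returns only the value.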
-### Getting started from source +### Démarrer à partir des sources -#### Install prerequisites +#### Installer les prérequis - **Rust** @@ -48,9 +48,9 @@ Pour activer la surveillance et la création de rapports, Graph Node peut évent sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Configuration -1. Start a PostgreSQL database server +1. Démarrer un serveur de base de données PostgreSQL ```sh initdb -D .postgres @@ -60,7 +60,7 @@ createdb graph-node 2. Clonez le repo [Graph Node](https://github.com/graphprotocol/graph-node) et compilez les sources en lançant `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Maintenant que toutes les dépendances sont installées, démarrez Graph Node : ```sh cargo run -p graph-node --release -- \ @@ -77,19 +77,19 @@ Un exemple complet de configuration Kubernetes se trouve dans le [dépôt d'Inde Lorsqu'il est en cours d'exécution, Graph Node expose les ports suivants : -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Objectif | Routes | Argument CLI | Variable d'Environment | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | \--admin-port | - | +| 8030 | API du statut de l'indexation des subgraphs | /graphql | \--index-node-port | - | +| 8040 | Métriques Prometheus | /metrics | \--metrics-port | - | > **Important** : Soyez prudent lorsque vous exposez des ports publiquement - les **ports d'administration** doivent être verrouillés. Ceci inclut l'endpoint JSON-RPC de Graph Node. ## Configuration avancée du nœud graph -Dans sa forme la plus simple, Graph Node peut être utilisé avec une seule instance de Graph Node, une seule base de données PostgreSQL, un nœud IPFS et les clients réseau selon les besoins des subgraphs à indexer. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. Cette configuration peut être mise à l'échelle horizontalement, en ajoutant plusieurs Graph Nodes, et plusieurs bases de données pour supporter ces Graph Nodes. Les utilisateurs avancés voudront peut-être profiter de certaines des capacités de mise à l'échelle horizontale de Graph Node, ainsi que de certaines des options de configuration les plus avancées, via le fichier `config.toml` et les variables d'environnement de Graph Node. @@ -114,13 +114,13 @@ La documentation complète de `config.toml` peut être trouvée dans la [documen #### Multiple Graph Nodes -L'indexation Graph Node peut être mise à l'échelle horizontalement, en exécutant plusieurs instances de Graph Node pour répartir l'indexation et l'interrogation sur différents nœuds. 
Cela peut être fait simplement en exécutant des Graph Nodes configurés avec un `node_id` différent au démarrage (par exemple dans le fichier Docker Compose), qui peut ensuite être utilisé dans le fichier `config.toml` pour spécifier les [nœuds de requête dédiés](#dedicated-query-nodes), les [ingesteurs de blocs](#dedicated-block-ingestion) et en répartissant les subgraphs sur les nœuds avec des [règles de déploiement](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Notez que plusieurs nœuds de graph peuvent tous être configurés pour utiliser la même base de données, qui elle-même peut être mise à l'échelle horizontalement via le partitionnement. #### Règles de déploiement -Étant donné plusieurs Graph Node, il est nécessaire de gérer le déploiement de nouveaux subgraphs afin que le même subgraph ne soit pas indexé par deux nœuds différents, ce qui entraînerait des collisions. Cela peut être fait en utilisant des règles de déploiement, qui peuvent également spécifier dans quel `shard` les données d'un subgraph doivent être stockées, si le partitionnement de base de données est utilisé. Les règles de déploiement peuvent correspondre au nom du subgraph et au réseau que le déploiement indexe afin de prendre une décision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. 
This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Exemple de configuration de règle de déploiement : @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Tout nœud dont --node-id correspond à l'expression régulière sera configuré Pour la plupart des cas d'utilisation, une seule base de données Postgres suffit pour prendre en charge une instance de nœud graph. Lorsqu'une instance de nœud graph dépasse une seule base de données Postgres, il est possible de diviser le stockage des données de nœud graph sur plusieurs bases de données Postgres. Toutes les bases de données forment ensemble le magasin de l’instance de nœud graph. Chaque base de données individuelle est appelée une partition. -Les fragments peuvent être utilisés pour diviser les déploiements de subgraph sur plusieurs bases de données et peuvent également être utilisés pour faire intervenir des réplicas afin de répartir la charge de requête sur plusieurs bases de données. Cela inclut la configuration du nombre de connexions de base de données disponibles que chaque `graph-node` doit conserver dans son pool de connexions pour chaque base de données, ce qui devient de plus en plus important à mesure que davantage de subgraph sont indexés. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases.
This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Le partage devient utile lorsque votre base de données existante ne peut pas suivre la charge que Graph Node lui impose et lorsqu'il n'est plus possible d'augmenter la taille de la base de données. -> Il est généralement préférable de créer une base de données unique aussi grande que possible avant de commencer avec des fragments. Une exception est lorsque le trafic des requêtes est réparti de manière très inégale entre les subgraphs ; dans ces situations, cela peut être considérablement utile si les subgraphs à volume élevé sont conservés dans une partition et tout le reste dans une autre, car cette configuration rend plus probable que les données des subgraphs à volume élevé restent dans le cache interne de la base de données et ne le font pas. sont remplacés par des données qui ne sont pas autant nécessaires à partir de subgraphs à faible volume. +> It is generally better to make a single database as big as possible before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. En termes de configuration des connexions, commencez par max_connections dans postgresql.conf défini sur 400 (ou peut-être même 200) et regardez les métriques store_connection_wait_time_ms et store_connection_checkout_count Prometheus.
Des temps d'attente notables (tout ce qui dépasse 5 ms) indiquent qu'il y a trop peu de connexions disponibles ; des temps d'attente élevés seront également dus au fait que la base de données est très occupée (comme une charge CPU élevée). Cependant, si la base de données semble par ailleurs stable, des temps d'attente élevés indiquent la nécessité d'augmenter le nombre de connexions. Dans la configuration, le nombre de connexions que chaque instance de nœud graph peut utiliser constitue une limite supérieure, et Graph Node ne maintiendra pas les connexions ouvertes s'il n'en a pas besoin. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Prise en charge de plusieurs réseaux -The Graph Protocol augmente le nombre de réseaux pris en charge pour l'indexation des récompenses, et il existe de nombreux subgraphs indexant des réseaux non pris en charge. Un indexeur peut choisir de les indexer malgré tout. Le fichier `config.toml` permet une configuration riche et flexible : +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Plusieurs réseaux - Plusieurs fournisseurs par réseau (cela peut permettre de répartir la charge entre les fournisseurs, et peut également permettre la configuration de nœuds complets ainsi que de nœuds d'archives, Graph Node préférant les fournisseurs moins chers si une charge de travail donnée le permet). @@ -225,11 +225,11 @@ Les utilisateurs qui utilisent une configuration d'indexation à grande échelle ### Gestion du nœud de graph -Étant donné un nœud de graph en cours d'exécution (ou des nœuds de graph !), le défi consiste alors à gérer les subgraphs déployés sur ces nœuds. Graph Node propose une gamme d'outils pour vous aider à gérer les subgraphs. 
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Journal de bord -Les logs de Graph Node peuvent fournir des informations utiles pour le débogage et l'optimisation de Graph Node et de subgraphs spécifiques. Graph Node supporte différents niveaux de logs via la variable d'environnement `GRAPH_LOG`, avec les niveaux suivants : error, warn, info, debug ou trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. De plus, fixer `GRAPH_LOG_QUERY_TIMING` à `gql` fournit plus de détails sur la façon dont les requêtes GraphQL s'exécutent (bien que cela génère un grand volume de logs). @@ -247,11 +247,11 @@ La commande graphman est incluse dans les conteneurs officiels, et vous pouvez d La documentation complète des commandes `graphman` est disponible dans le dépôt Graph Node. Voir [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) dans le dépôt Graph Node `/docs` -### Travailler avec des subgraphs +### Working with Subgraphs #### API d'état d'indexation -Disponible sur le port 8030/graphql par défaut, l'API d'état d'indexation expose une gamme de méthodes pour vérifier l'état d'indexation de différents subgraphs, vérifier les preuves d'indexation, inspecter les fonctionnalités des subgraphs et bien plus encore. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. Le schéma complet est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
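As an illustration of the indexing status API described above, the sketch below packages a status query as the JSON POST body the `/graphql` endpoint expects. The host is an assumption based on the default port listed earlier; this is not an official client, just a minimal example using the schema's `indexingStatuses` field.

```python
import json
from urllib import request

# Default indexing status endpoint (port 8030, path /graphql, per the docs).
# The hostname is an assumption; adjust for your deployment.
STATUS_ENDPOINT = "http://localhost:8030/graphql"

# A small status query against the index-node schema.
STATUS_QUERY = """
{
  indexingStatuses {
    subgraph
    health
    synced
  }
}
"""

def build_status_request(endpoint: str = STATUS_ENDPOINT) -> request.Request:
    """Package the GraphQL query as the JSON POST body the API expects."""
    body = json.dumps({"query": STATUS_QUERY}).encode("utf-8")
    return request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires a running Graph Node):
# with request.urlopen(build_status_request()) as resp:
#     print(json.load(resp))
```

The same endpoint also exposes proofs of indexing and per-Subgraph feature checks; see the linked schema for the full surface.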
@@ -263,7 +263,7 @@ Le processus d'indexation comporte trois parties distinctes : - Traiter les événements dans l'ordre avec les gestionnaires appropriés (cela peut impliquer d'appeler la chaîne pour connaître l'état et de récupérer les données du magasin) - Écriture des données résultantes dans le magasin -Ces étapes sont pipeline (c’est-à-dire qu’elles peuvent être exécutées en parallèle), mais elles dépendent les unes des autres. Lorsque les subgraphs sont lents à indexer, la cause sous-jacente dépendra du subgraph spécifique. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Causes courantes de lenteur d’indexation : @@ -276,24 +276,24 @@ Causes courantes de lenteur d’indexation : - Le prestataire lui-même prend du retard sur la tête de la chaîne - Lenteur dans la récupération des nouvelles recettes en tête de chaîne auprès du prestataire -Les métriques d’indexation de subgraphs peuvent aider à diagnostiquer la cause première de la lenteur de l’indexation. Dans certains cas, le problème réside dans le subgraph lui-même, mais dans d'autres, des fournisseurs de réseau améliorés, une réduction des conflits de base de données et d'autres améliorations de configuration peuvent améliorer considérablement les performances d'indexation. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Subgraphs ayant échoué +#### Failed Subgraphs -Lors de l'indexation, les subgraphs peuvent échouer s'ils rencontrent des données inattendues, si certains composants ne fonctionnent pas comme prévu ou s'il y a un bogue dans les gestionnaires d'événements ou la configuration. 
Il existe deux types généraux de pannes : +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Échecs déterministes : ce sont des échecs qui ne seront pas résolus par de nouvelles tentatives - Échecs non déterministes : ils peuvent être dus à des problèmes avec le fournisseur ou à une erreur inattendue de Graph Node. Lorsqu'un échec non déterministe se produit, Graph Node réessaiera les gestionnaires défaillants, en reculant au fil du temps. -Dans certains cas, un échec peut être résolu par l'indexeur (par exemple, si l'erreur est due au fait de ne pas disposer du bon type de fournisseur, l'ajout du fournisseur requis permettra de poursuivre l'indexation). Cependant, dans d'autres cas, une modification du code du subgraph est requise. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Les défaillances déterministes sont considérés comme "final" (définitifs), avec une preuve d'indexation générée pour le bloc défaillant, alors que les défaillances non déterministes ne le sont pas, car le subgraph pourait "se rétablir " et poursuivre l'indexation. Dans certains cas, l'étiquette non déterministe est incorrecte et le subgraph ne surmontera jamais l'erreur ; de tels défaillances doivent être signalés en tant que problèmes sur le dépôt de Graph Node. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Bloquer et appeler le cache -Graph Node met en cache certaines données dans le store afin d'éviter de les récupérer auprès du fournisseur. Les blocs sont mis en cache, ainsi que les résultats des `eth_calls` (ces derniers étant mis en cache à partir d'un bloc spécifique). Cette mise en cache peut augmenter considérablement la vitesse d'indexation lors de la « resynchronisation » d'un subgraph légèrement modifié. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -Cependant, dans certains cas, si un nœud Ethereum a fourni des données incorrectes pendant une certaine période, cela peut se retrouver dans le cache, conduisant à des données incorrectes ou à des subgraphs défaillants. Dans ce cas, les Indexeurs peuvent utiliser `graphman` pour effacer le cache empoisonné, puis rembobiner les subgraph affectés, ce qui permettra de récupérer des données fraîches auprès du fournisseur (que l'on espère sain). +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
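The block-scoped caching just described can be illustrated with a minimal model. This is purely didactic — Graph Node's actual cache lives in Postgres and the names here are invented — but it shows why an `eth_call` result is keyed to a specific block, and why a poisoned entry persists until it is explicitly cleared.

```python
from typing import Callable, Dict, Tuple

class CallCache:
    """Toy model of a block-scoped eth_call cache (illustrative only)."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], bytes] = {}
        self.fetches = 0  # how many times we had to go to the provider

    def eth_call(self, call: str, block_hash: str, fetch: Callable[[], bytes]) -> bytes:
        # Results are cached per (call, block): the same call at a different
        # block must be re-fetched, since contract state may differ there.
        key = (call, block_hash)
        if key not in self._store:
            self.fetches += 1
            self._store[key] = fetch()
        return self._store[key]

    def clear(self) -> None:
        # Analogous to wiping a poisoned cache before rewinding a Subgraph:
        # every subsequent call goes back to the (hopefully healthy) provider.
        self._store.clear()
```

Because a bad provider response is stored under the same key as a good one, resyncing alone never evicts it — only clearing the cache does, which is why the `graphman` workflow described above pairs a cache wipe with a rewind.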
Si une incohérence du cache de blocs est suspectée, telle qu'un événement de réception de transmission manquant : @@ -304,7 +304,7 @@ Si une incohérence du cache de blocs est suspectée, telle qu'un événement de #### Interroger les problèmes et les erreurs -Une fois qu'un subgraph a été indexé, les indexeurs peuvent s'attendre à traiter les requêtes via le point de terminaison de requête dédié du subgraph. Si l'indexeur espère traiter un volume de requêtes important, un nœud de requête dédié est recommandé, et en cas de volumes de requêtes très élevés, les indexeurs peuvent souhaiter configurer des fragments de réplique afin que les requêtes n'aient pas d'impact sur le processus d'indexation. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Cependant, même avec un nœud de requête et des répliques dédiés, certaines requêtes peuvent prendre beaucoup de temps à exécuter et, dans certains cas, augmenter l'utilisation de la mémoire et avoir un impact négatif sur le temps de requête des autres utilisateurs. @@ -316,7 +316,7 @@ Graph Node met en cache les requêtes GraphQL par défaut, ce qui peut réduire ##### Analyser les requêtes -Les requêtes problématiques apparaissent le plus souvent de deux manières. Dans certains cas, les utilisateurs eux-mêmes signalent qu'une requête donnée est lente. Dans ce cas, le défi consiste à diagnostiquer la raison de la lenteur, qu'il s'agisse d'un problème général ou spécifique à ce subgraph ou à cette requête. Et puis bien sûr de le résoudre, si possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. 
In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. Dans d'autres cas, le déclencheur peut être une utilisation élevée de la mémoire sur un nœud de requête, auquel cas le défi consiste d'abord à identifier la requête à l'origine du problème. @@ -336,10 +336,10 @@ En général, les tables où le nombre d'entités distinctes est inférieur à 1 Une fois qu'une table a été déterminée comme étant de type compte, l'exécution de `graphman stats account-like .
` activera l'optimisation de type compte pour les requêtes sur cette table. L'optimisation peut être désactivée à nouveau avec `graphman stats account-like --clear .
` Il faut compter jusqu'à 5 minutes pour que les noeuds de requêtes remarquent que l'optimisation a été activée ou désactivée. Après avoir activé l'optimisation, il est nécessaire de vérifier que le changement ne ralentit pas les requêtes pour cette table. Si vous avez configuré Grafana pour surveiller Postgres, les requêtes lentes apparaîtront dans `pg_stat_activity` en grand nombre, prenant plusieurs secondes. Dans ce cas, l'optimisation doit être désactivée à nouveau. -Pour les subgraphs de type Uniswap, les tables `pair` et `token` sont les meilleurs candidats pour cette optimisation, et peuvent avoir un effet considérable sur la charge de la base de données. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Supprimer des subgraphs +#### Removing Subgraphs > Il s'agit d'une nouvelle fonctionnalité qui sera disponible dans Graph Node 0.29.x -A un moment donné, un Indexeur peut vouloir supprimer un subgraph donné. Cela peut être facilement fait via `graphman drop`, qui supprime un déploiement et toutes ses données indexées. Le déploiement peut être spécifié soit comme un nom de subgraph, soit comme un hash IPFS `Qm..`, ou alors comme le namespace `sgdNN` de la base de données . Une documentation plus détaillée est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/fr/indexing/tooling/graphcast.mdx b/website/src/pages/fr/indexing/tooling/graphcast.mdx index 5edccfb10588..e24e9904bdd8 100644 --- a/website/src/pages/fr/indexing/tooling/graphcast.mdx +++ b/website/src/pages/fr/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Actuellement, le coût de diffusion d’informations vers d’autres participant Le SDK Graphcast (Software Development Kit) permet aux développeurs de créer des radios, qui sont des applications basées sur les potins que les indexeurs peuvent exécuter dans un but donné. Nous avons également l'intention de créer quelques radios (ou de fournir une assistance à d'autres développeurs/équipes qui souhaitent créer des radios) pour les cas d'utilisation suivants : -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Réalisation d'enchères et coordination pour les subgraphs, les substreams, et les données Firehose de synchronisation de distorsion provenant d'autres indexeurs. -- Auto-rapport sur l'analyse des requêtes actives, y compris les volumes de requêtes de subgraphs, les volumes de frais, etc. -- Auto-rapport sur l'analyse de l'indexation, y compris le temps d'indexation des subgraphs, les coûts des gaz de traitement, les erreurs d'indexation rencontrées, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Auto-déclaration sur les informations de la pile, y compris la version du graph-node, la version Postgres, la version du client Ethereum, etc. 
### En savoir plus diff --git a/website/src/pages/fr/resources/benefits.mdx b/website/src/pages/fr/resources/benefits.mdx index b39ea9fa5ca4..77407f040adb 100644 --- a/website/src/pages/fr/resources/benefits.mdx +++ b/website/src/pages/fr/resources/benefits.mdx @@ -27,58 +27,57 @@ Les coûts d'interrogation peuvent varier ; le coût indiqué est la moyenne au ## Utilisateur à faible volume (moins de 100 000 requêtes par mois) -| Cost Comparison | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de requête | - 0 $ | 0$ par mois | -| Temps d'ingénierie | 400 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | 100 000 (Plan Gratuit) | -| Tarif par requête | 0 $ | 0$ | -| Infrastructure | Centralisée | Décentralisée | -| La redondance géographique | 750$+ par nœud complémentaire | Compris | -| Temps de disponibilité | Variable | - 99.9% | -| Total des coûts mensuels | 750 $+ | 0 $ | +| Cost Comparison | Auto-hébergé | The Graph Network | +| :----------------------------: | :--------------------------------------: | :-------------------------------------------------------------------------: | +| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | +| Frais de requête | - 0 $ | 0$ par mois | +| Temps d'ingénierie | 400 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | 100 000 (Plan Gratuit) | +| Tarif par requête | 0 $ | 0$ | +| Infrastructure | Centralisée | Décentralisée | +| La redondance géographique | 750$+ par nœud complémentaire | Compris | +| Temps de disponibilité | Variable | - 99.9% | +| Total des coûts mensuels | 750 $+ | 0 $ | ## Utilisateur à volume moyen (~3M requêtes par mois) -| Cost Comparison | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de 
requête | 500 $ au mois | 120$ par mois | -| Temps d'ingénierie | 800 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | ~3,000,000 | -| Tarif par requête | 0 $ | $0.00004 | -| Infrastructure | Centralisée | Décentralisée | -| Frais d'ingénierie | 200 $ au mois | Compris | -| La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | -| Temps de disponibilité | Variable | - 99.9% | -| Total des coûts mensuels | 1 650 $+ | 120$ | +| Cost Comparison | Auto-hébergé | The Graph Network | +| :----------------------------: | :-----------------------------------------: | :-------------------------------------------------------------------------: | +| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | +| Frais de requête | 500 $ au mois | 120$ par mois | +| Temps d'ingénierie | 800 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | ~3,000,000 | +| Tarif par requête | 0 $ | $0.00004 | +| Infrastructure | Centralisée | Décentralisée | +| Frais d'ingénierie | 200 $ au mois | Compris | +| La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | +| Temps de disponibilité | Variable | - 99.9% | +| Total des coûts mensuels | 1 650 $+ | 120$ | ## Utilisateur à volume élevé (~30M requêtes par mois) -| Cost Comparison | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 1100 $ au mois, par nœud | 0 $ | -| Frais de requête | 4000 $ | 1 200 $ par mois | -| Nombre de nœuds obligatoires | 10 | Sans objet | -| Temps d'ingénierie | 6000 $ ou plus au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | ~30,000,000 | -| Tarif par requête | 0 $ | $0.00004 | -| Infrastructure | Centralisée | Décentralisée | -| La redondance géographique | 1 200 $ 
coût total par nœud supplémentaire | Compris | -| Temps de disponibilité | Variable | - 99.9% | -| Total des coûts mensuels | 11 000 $+ | 1,200$ | +| Cost Comparison | Auto-hébergé | The Graph Network | +| :----------------------------: | :------------------------------------------: | :-------------------------------------------------------------------------: | +| Coût mensuel du serveur\* | 1100 $ au mois, par nœud | 0 $ | +| Frais de requête | 4000 $ | 1 200 $ par mois | +| Nombre de nœuds obligatoires | 10 | Sans objet | +| Temps d'ingénierie | 6000 $ ou plus au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | ~30,000,000 | +| Tarif par requête | 0 $ | $0.00004 | +| Infrastructure | Centralisée | Décentralisée | +| La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | +| Temps de disponibilité | Variable | - 99.9% | +| Total des coûts mensuels | 11 000 $+ | 1,200$ | \*y compris les coûts de sauvegarde : $50-$ à 100 dollars au mois Temps d'ingénierie basé sur une hypothèse de 200 $ de l'heure -Reflète le coût pour le consommateur de données. Les frais de requête sont toujours payés aux Indexeurs pour -les requêtes du Plan Gratuit. +Reflète le coût pour le consommateur de données. Les frais de requête sont toujours payés aux Indexeurs pour les requêtes du Plan Gratuit. -Les coûts estimés concernent uniquement les subgraphs sur le Mainnet d'Ethereum — les coûts sont encore plus élevés lorsqu’un `graph-node` est auto-hébergé sur d’autres réseaux. Certains utilisateurs peuvent avoir besoin de mettre à jour leur subgraph vers une nouvelle version. En raison des frais de gas sur Ethereum, une mise à jour coûte environ 50 $ au moment de la rédaction. Notez que les frais de gas sur [Arbitrum](/archived/arbitrum/arbitrum-faq/) sont nettement inférieurs à ceux du Mainnet d'Ethereum. 
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Émettre un signal sur un subgraph est un cout net, nul optionnel et unique (par exemple, 1 000 $ de signal peuvent être conservés sur un subgraph, puis retirés - avec la possibilité de gagner des revenus au cours du processus). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## Pas de Coûts d’Installation & Plus grande Efficacité Opérationnelle @@ -90,4 +89,4 @@ Le réseau décentralisé de The Graph offre aux utilisateurs une redondance gé En résumé : The Graph Network est moins cher, plus facile à utiliser et produit des résultats supérieurs à ceux obtenus par l'exécution locale d'un `graph-node`. -Commencez à utiliser The Graph Network dès aujourd’hui et découvrez comment [publier votre subgraph sur le réseau décentralisé de The Graph](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/fr/resources/glossary.mdx b/website/src/pages/fr/resources/glossary.mdx index cfaa0beb4c78..f874e54e73cd 100644 --- a/website/src/pages/fr/resources/glossary.mdx +++ b/website/src/pages/fr/resources/glossary.mdx @@ -4,80 +4,80 @@ title: Glossaire - **The Graph** : Un protocole décentralisé pour l'indexation et l'interrogation des données. -- **Query** : Une requête de données. Dans le cas de The Graph, une requête est une demande de données provenant d'un subgraph à laquelle répondra un Indexeur. +- **Query**: A request for data. 
In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL** : Un langage de requête pour les API et un moteur d'exécution pour répondre à ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint** : Une URL qui peut être utilisée pour interroger un subgraph. L'endpoint de test pour Subgraph Studio est `https://api.studio.thegraph.com/query///` et l'endpoint pour Graph Explorer est `https://gateway.thegraph.com/api//subgraphs/id/`. L'endpoint Graph Explorer est utilisé pour interroger les subgraphs sur le réseau décentralisé de The Graph. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph** : Une API ouverte qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. Les développeurs peuvent créer, déployer et publier des subgraphs sur The Graph Network. Une fois indexé, le subgraph peut être interrogé par n'importe qui. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexeur** : Participants au réseau qui gèrent des nœuds d'indexation pour indexer les données des blockchains et répondre aux requêtes GraphQL. 
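The glossary entries above describe queries, GraphQL, and endpoints. As an illustrative sketch only (the helper names and placeholder values are hypothetical, not part of any official client), the Graph Explorer endpoint shape and a GraphQL request body can be assembled like this:

```typescript
// Hypothetical helpers for illustration; supply your own API key and Subgraph ID.
function gatewayEndpoint(apiKey: string, subgraphId: string): string {
  // Matches the Graph Explorer endpoint shape described above.
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

// A GraphQL query travels as a JSON body of the form { "query": "..." }.
function graphqlBody(query: string): string {
  return JSON.stringify({ query });
}

const url = gatewayEndpoint("<api-key>", "<subgraph-id>");
const body = graphqlBody("{ _meta { block { number } } }");
// An Indexer answers the request, e.g. sent with:
// fetch(url, { method: "POST", headers: { "Content-Type": "application/json" }, body })
```

The `_meta` query here is just an example field; the fields you can request depend on the schema of the Subgraph being queried.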
- **Flux de revenus pour les Indexeurs** : Les Indexeurs sont récompensés en GRT par deux éléments : les remises sur les frais de requête et les récompenses pour l'indexation. - 1. **Remboursements de frais de requête** : Paiements effectués par les consommateurs de subgraphs pour avoir servi des requêtes sur le réseau. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Récompenses d'indexation** : Les récompenses que les Indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont générées par une nouvelle émission de 3 % de GRT par an. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake** : Le montant de GRT que les Indexeurs stakent pour participer au réseau décentralisé. Le minimum est de 100 000 GRT, et il n'y a pas de limite supérieure. - **Delegation Capacity** : C'est le montant maximum de GRT qu'un Indexeur peut accepter de la part des Déléguateurs. Les Indexeurs ne peuvent accepter que jusqu'à 16 fois leur propre Indexer Self-Stake, et toute délégation supplémentaire entraîne une dilution des récompenses. Par exemple, si un Indexeur a une Indexer Self-Stake de 1M GRT, sa capacité de délégation est de 16M. Cependant, les indexeurs peuvent augmenter leur capacité de délégation en augmentant leur Indexer Self-Stake. -- **Upgrade Indexer** : Un Indexeur conçu pour servir de solution de repli pour les requêtes de subgraphs qui ne sont pas traitées par d'autres Indexeurs sur le réseau. L'upgrade Indexer n'est pas compétitif par rapport aux autres Indexeurs. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**(Déléguateurs) : Participants au réseau qui possèdent des GRT et les délèguent à des Indexeurs. 
Cela permet aux Indexeurs d'augmenter leur participation dans les subgraphs du réseau. En retour, les Déléguateurs reçoivent une partie des récompenses d'indexation que les Indexeurs reçoivent pour le traitement des subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Taxe de délégation** : Une taxe de 0,5 % payée par les Déléguateurs lorsqu'ils délèguent des GRT aux Indexeurs. Les GRT utilisés pour payer la taxe sont brûlés. -- **Curator**(Curateur) : Participants au réseau qui identifient les subgraphs de haute qualité et signalent les GRT sur ces derniers en échange de parts de curation. Lorsque les Indexeurs réclament des frais de requête pour un subgraph, 10 % sont distribués aux Curateurs de ce subgraph. Il existe une corrélation positive entre la quantité de GRT signalée et le nombre d'Indexeurs indexant un subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Taxe de curation** : Une taxe de 1% payée par les Curateurs lorsqu'ils signalent des GRT sur des subgraphs. Les GRT utiliséa pour payer la taxe sont brûlés. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Consommateur de données** : Toute application ou utilisateur qui interroge un subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Développeur de subgraphs** : Un développeur qui construit et déploie un subgraph sur le réseau décentralisé de The Graph. 
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Manifeste du subgraph** : Un fichier YAML qui décrit le schéma GraphQL du subgraph, les sources de données et d'autres métadonnées. Vous trouverez [Ici](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) un exemple. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoque** : Unité de temps au sein du réseau. Actuellement, une époque correspond à 6 646 blocs, soit environ 1 jour. -- **Allocation** : Un Indexeur peut allouer l'ensemble de son staking de GRT (y compris le staking des Déléguateurs) à des subgraphs qui ont été publiés sur le réseau décentralisé de The Graph. Les allocations peuvent avoir différents statuts : +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Actif** : Une allocation est considérée comme active lorsqu'elle est créée onchain. C'est ce qu'on appelle ouvrir une allocation, et cela indique au réseau que l'Indexeur est en train d'indexer et de servir des requêtes pour un subgraph particulier. Les allocations actives accumulent des récompenses d'indexation proportionnelles au signal sur le subgraph et à la quantité de GRT allouée. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Fermé** : Un Indexeur peut réclamer les récompenses d'indexation accumulées sur un subgraph donné en soumettant une preuve d'indexation (POI) récente et valide. C'est ce qu'on appelle la fermeture d'une allocation. Une allocation doit avoir été ouverte pendant au moins une époque avant de pouvoir être fermée. La période d'allocation maximale est de 28 époques. Si un Indexeur laisse une allocation ouverte au-delà de 28 époques, il s'agit d'une allocation périmée. Lorsqu'une allocation est dans l'état **fermé**, un Fisherman peut encore ouvrir un litige pour contester un Indexeur pour avoir servi de fausses données. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio** : Une dapp puissante pour construire, déployer et publier des subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). 
This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen** : Un rôle au sein de The Graph Network tenu par les participants qui surveillent l'exactitude et l'intégrité des données servies par les Indexeurs. Lorsqu'un Fisherman identifie une réponse à une requête ou un POI qu'il estime incorrect, il peut lancer un litige contre l'indexeur. Si le litige est tranché en faveur du Fisherman, l'indexeur perd 2,5 % de son staking. Sur ce montant, 50 % sont attribués au Fisherman à titre de récompense pour sa vigilance, et les 50 % restants sont retirés de la circulation (brûlés). Ce mécanisme est conçu pour encourager les pêcheurs à contribuer au maintien de la fiabilité du réseau en veillant à ce que les Indexeurs soient tenus responsables des données qu'ils fournissent. - **Arbitres** : Les arbitres sont des participants au réseau nommés dans le cadre d'un processus de gouvernance. Le rôle de l'arbitre est de décider de l'issue des litiges relatifs à l'indexation et aux requêtes. Leur objectif est de maximiser l'utilité et la fiabilité de The Graph. - **Slashing**(Taillade) : Les Indexeurs peuvent se voir retirer leur GRT pour avoir fourni un POI incorrect ou pour avoir diffusé des données inexactes. Le pourcentage de réduction est un paramètre protocolaire actuellement fixé à 2,5 % du staking personnel de l'Indexeur. 50 % des GRT réduit est versé au pêcheur qui a contesté les données inexactes ou le point d'intérêt incorrect. Les 50 % restants sont brûlés. -- **Récompenses d'indexation** : Les récompenses que les Indexeurs reçoivent pour l'indexation des subgraphs. Les récompenses d'indexation sont distribuées en GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. 
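To make the reward and penalty parameters in this glossary concrete, here is an illustrative arithmetic sketch (not protocol code): slashing is 2.5% of an Indexer's self-stake, half of the slashed GRT goes to the Fisherman and half is burned, and delegation capacity is 16x self-stake.

```typescript
// Illustrative arithmetic only, using the parameters stated in this glossary.
// Integer-friendly math (25/1000) avoids floating-point drift for round stakes.
function slashBreakdown(selfStakeGrt: number) {
  const slashed = (selfStakeGrt * 25) / 1000; // 2.5% of self-stake
  return {
    slashed,
    toFisherman: slashed / 2, // 50% awarded to the disputing Fisherman
    burned: slashed / 2,      // 50% removed from circulation
  };
}

function delegationCapacity(selfStakeGrt: number): number {
  return selfStakeGrt * 16; // Indexers can accept up to 16x their self-stake
}

// Example: a 1,000,000 GRT self-stake risks 25,000 GRT per slash
// (12,500 to the Fisherman, 12,500 burned) and caps delegation at 16,000,000 GRT.
```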
- **Récompenses de délégation** : Les récompenses que les Déléguateurs reçoivent pour avoir délégué des GRT aux Indexeurs. Les récompenses de délégation sont distribuées en GRT. -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT** : Le jeton d'utilité du travail de The Graph. Le GRT fournit des incitations économiques aux participants du réseau pour leur contribution au réseau. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client** : Une bibliothèque pour construire des dapps basées sur GraphQL de manière décentralisée. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI** : Un outil d'interface de ligne de commande pour construire et déployer sur The Graph. -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Cooldown Period** : Le temps restant avant qu'un indexeur qui a modifié ses paramètres de délégation puisse le faire à nouveau. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. 
when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx index afd49ffd3fa8..efb857163671 100644 --- a/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/fr/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guide de migration de l'AssemblyScript --- -Jusqu'à présent, les subgraphs utilisaient l'une des [premières versions d'AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Nous avons enfin ajouté la prise en charge de la [dernière version disponible](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) ! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Cela permettra aux développeurs de subgraph d'utiliser les nouvelles fonctionnalités du langage AS et de la bibliothèque standard. +That will enable Subgraph developers to use newer features of the AS language and standard library. Ce guide s'applique à tous ceux qui utilisent `graph-cli`/`graph-ts` en dessous de la version `0.22.0`. Si vous êtes déjà à une version supérieure (ou égale) à celle-ci, vous avez déjà utilisé la version `0.19.10` d'AssemblyScript 🙂 -> Note : A partir de `0.24.0`, `graph-node` peut supporter les deux versions, en fonction de la `apiVersion` spécifiée dans le manifeste du subgraph. 
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Fonctionnalités @@ -44,7 +44,7 @@ Ce guide s'applique à tous ceux qui utilisent `graph-cli`/`graph-ts` en dessous ## Comment mettre à niveau ? -1. Changez vos mappages `apiVersion` dans `subgraph.yaml` en `0.0.6` : +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Si vous ne savez pas lequel choisir, nous vous recommandons de toujours utiliser la version sécurisée. Si la valeur n'existe pas, vous souhaiterez peut-être simplement effectuer une instruction if précoce avec un retour dans votre gestionnaire de subgraph. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Ombrage variable @@ -132,7 +132,7 @@ Vous devrez renommer vos variables en double si vous conservez une observation d ### Comparaisons nulles -En effectuant la mise à niveau sur votre subgraph, vous pouvez parfois obtenir des erreurs comme celles-ci : +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -329,7 +329,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // ne donne pas d'erreurs de compilation comme il se doit ``` -Nous avons ouvert un problème sur le compilateur AssemblyScript pour cela, mais pour l'instant, si vous effectuez ce type d'opérations dans vos mappages de subgraph, vous devez les modifier pour effectuer une vérification nulle avant.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -351,7 +351,7 @@ value.x = 10 value.y = 'content' ``` -Il sera compilé mais s'arrêtera au moment de l'exécution, cela se produit parce que la valeur n'a pas été initialisée, alors assurez-vous que votre subgraph a initialisé ses valeurs, comme ceci : +It will compile but break at runtime; this happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized @@ -406,7 +406,7 @@ type Total @entity { let total = Total.load('latest') if (total === null) { - total = new Total('latest') // initialise déjà les propriétés non-nullables + total = new Total('latest') // initialise déjà les propriétés non-nullables } total.amount = total.amount + BigInt.fromI32(1) @@ -488,12 +488,12 @@ Vous ne pouvez désormais plus définir de champs dans vos types qui sont des li ```graphql type Something @entity { - id: Bytes! + id: Bytes! } type MyEntity @entity { - id: Bytes! - invalidField: [Something]!
# n'est plus valide } ``` diff --git a/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx index 62e5435c0fc3..526052f7e358 100644 --- a/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/fr/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guide de migration des validations GraphQL +title: GraphQL Validations Migration Guide --- Bientôt, `graph-node` supportera 100% de la couverture de la [Spécification des validations GraphQL] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Pour être conforme à ces validations, veuillez suivre le guide de migration. Vous pouvez utiliser l'outil de migration CLI pour rechercher tous les problèmes dans vos opérations GraphQL et les résoudre. Vous pouvez également mettre à jour le point de terminaison de votre client GraphQL pour utiliser le point de terminaison « https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME ». Tester vos requêtes sur ce point de terminaison vous aidera à trouver les problèmes dans vos requêtes. -> Tous les subgraphs n'auront pas besoin d'être migrés si vous utilisez [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) ou [GraphQL Code Generator](https://the-guild.dev /graphql/codegen), ils garantissent déjà que vos requêtes sont valides. +> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Outil CLI de migration @@ -103,7 +103,7 @@ query myData { } query myData2 { - # renommer la deuxième requête + # renommer la deuxième requête name } ``` @@ -158,7 +158,7 @@ _Solution:_ ```graphql query myData($id: String) { - # conserver la variable pertinente (ici : `$id: String`) + # conserver la variable pertinente (ici : `$id: String`) id ...MyFields } @@ -259,7 +259,7 @@ query { ```graphql # Différents arguments peuvent conduire à des données différentes, -# donc nous ne pouvons pas supposer que les champs seront les mêmes. +# donc nous ne pouvons pas supposer que les champs seront les mêmes. query { dogs { doesKnowCommand(dogCommand: SIT) diff --git a/website/src/pages/fr/resources/roles/curating.mdx b/website/src/pages/fr/resources/roles/curating.mdx index 909aa9f0e848..931afdc98101 100644 --- a/website/src/pages/fr/resources/roles/curating.mdx +++ b/website/src/pages/fr/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curation --- -Les Curateurs jouent un rôle essentiel dans l'économie décentralisée de The Graph. Ils utilisent leur connaissance de l'écosystème web3 pour évaluer et signaler les subgraphs qui devraient être indexés par The Graph Network. à travers Graph Explorer, les Curateurs consultent les données du réseau pour prendre des décisions de signalisation. En retour, The Graph Network récompense les Curateurs qui signalent des subgraphs de bonne qualité en leur reversant une partie des frais de recherche générés par ces subgraphs. La quantité de GRT signalée est l'une des principales considérations des Indexeurs lorsqu'ils déterminent les subgraphs à indexer. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. 
In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## Que signifie "le signalement" pour The Graph Network? -Avant que les consommateurs ne puissent interroger un subgraphs, celui-ci doit être indexé. C'est ici que la curation entre en jeu. Afin que les Indexeurs puissent gagner des frais de requête substantiels sur des subgraphs de qualité, ils doivent savoir quels subgraphs indexer. Lorsque les Curateurs signalent un subgraphs , ils indiquent aux Indexeurs qu'un subgraphs est demandé et de qualité suffisante pour être indexé. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Les Curateurs rendent le réseau The Graph efficace et le [signalement](#how-to-signal) est le processus que les Curateurs utilisent pour informer les Indexeurs qu'un subgraph est bon à indexer. Les Indexeurs peuvent se fier au signal d’un Curateur car, en signalant, les Curateurs mintent une part de curation (curation share) pour le subgraph, leur donnant droit à une partie des futurs frais de requête générés par ce subgraph. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Les signaux des Curateurs sont représentés par des jetons ERC20 appelés Graph Curation Shares (GCS). 
Ceux qui veulent gagner plus de frais de requête doivent signaler leurs GRT aux subgraphs qui, selon eux, généreront un flux important de frais pour le réseau. Les Curateurs ne peuvent pas être réduits pour mauvais comportement, mais il y a une taxe de dépôt sur les Curateurs pour dissuader les mauvaises décisions pouvant nuire à l'intégrité du réseau. Les Curateurs gagneront également moins de frais de requête s'ils sélectionnent un subgraph de mauvaise qualité car il y aura moins de requêtes à traiter ou moins d'Indexeurs pour les traiter. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -L’[Indexer Sunrise Upgrade](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les subgraphs, toutefois, signaler des GRT sur un subgraph spécifique attirera davantage d’Indexeurs vers ce dernier. Cette incitation supplémentaire a pour but d’améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; however, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
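The deposit tax mentioned above is a 1% curation tax, and this guide also describes a 0.5% tax each time auto-migrated shares move to a new Subgraph version. A rough sketch of the compounding effect on a signal position (illustrative only; real signal value also depends on bonding-curve pricing, which is ignored here):

```typescript
// Illustrative only: tracks how curation taxes reduce a signal position.
function afterCurationTax(signalGrt: number): number {
  return (signalGrt * 99) / 100; // 1% curation tax, burned on signaling
}

function afterAutoMigrations(signalGrt: number, migrations: number): number {
  let s = afterCurationTax(signalGrt);
  for (let i = 0; i < migrations; i++) {
    s = (s * 995) / 1000; // 0.5% tax each time shares auto-migrate
  }
  return s;
}

// Signaling 10,000 GRT leaves 9,900 GRT of signal; one auto-migration
// reduces it to 9,850.5 GRT.
```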
-Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Si vous avez besoin d’aide pour la curation afin d’améliorer la qualité de service, envoyez une demande à l’équipe Edge & Node à l’adresse support@thegraph.zendesk.com en précisant les subgraphs pour lesquels vous avez besoin d’assistance. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Les Indexeurs peuvent trouver des subgraphs à indexer en fonction des signaux de curation qu'ils voient dans Graph Explorer (capture d'écran ci-dessous). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Subgraphs de l'Explorer](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Comment signaler -Dans l'onglet Curateur de Graph Explorer, les curateurs pourront signaler et retirer leur signal sur certains subgraphs en fonction des statistiques du réseau. 
Pour un guide pas à pas expliquant comment procéder dans Graph Explorer, [cliquez ici.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curateur peut choisir de signaler une version spécifique d'un sugraph ou de faire migrer automatiquement son signal vers la version de production la plus récente de ce subgraph. Ces deux stratégies sont valables et comportent leurs propres avantages et inconvénients. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Le signalement sur une version spécifique est particulièrement utile lorsqu'un subgraph est utilisé par plusieurs dapps. Une dapp pourrait avoir besoin de mettre à jour régulièrement le subgraph avec de nouvelles fonctionnalités, tandis qu’une autre dapp pourrait préférer utiliser une version plus ancienne et bien testée du subgraph. Lors de la curation initiale, une taxe standard de 1 % est prélevée. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. La migration automatique de votre signal vers la version de production la plus récente peut s'avérer utile pour vous assurer que vous continuez à accumuler des frais de requête. Chaque fois que vous effectuez une curation, une taxe de curation de 1 % est appliquée. Vous paierez également une taxe de curation de 0,5 % à chaque migration. 
Les développeurs de subgraphs sont découragés de publier fréquemment de nouvelles versions - ils doivent payer une taxe de curation de 0,5 % sur toutes les parts de curation migrées automatiquement. -> **Remarque**: La première adresse à signaler un subgraph donné est considérée comme le premier curateur et devra effectuer un travail bien plus coûteux en gas que les curateurs suivants, car le premier curateur doit initialiser les tokens de part de curation et transférer les tokens dans le proxy de The Graph. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Retrait de vos GRT @@ -40,39 +40,39 @@ Les Curateurs ont la possibilité de retirer leur GRT signalé à tout moment. Contrairement au processus de délégation, si vous décidez de retirer vos GRT signalés, vous n'aurez pas un délai d'attente et vous recevrez le montant total (moins la taxe de curation de 1%). -Une fois qu'un Curateur retire ses signaux, les Indexeurs peuvent choisir de continuer à indexer le subgraph, même s'il n'y a actuellement aucun GRT signalé actif. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -Cependant, il est recommandé que les Curateurs laissent leur GRT signalé en place non seulement pour recevoir une partie des frais de requête, mais aussi pour assurer la fiabilité et la disponibilité du subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risques 1. 
Le marché des requêtes est intrinsèquement jeune chez The Graph et il y a un risque que votre %APY soit inférieur à vos attentes en raison de la dynamique naissante du marché. -2. Frais de curation - lorsqu'un Curateur signale des GRT sur un subgraph, il doit s'acquitter d'une taxe de curation de 1%. Cette taxe est brûlée. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgraph peut échouer à cause d'un bug. Un subgraph qui échoue n'accumule pas de frais de requête. Par conséquent, vous devrez attendre que le développeur corrige le bogue et déploie une nouvelle version. - - Si vous êtes abonné à la version la plus récente d'un subgraph, vos parts migreront automatiquement vers cette nouvelle version. Cela entraînera une taxe de curation de 0,5 %. - - Si vous avez signalé sur une version spécifique d'un subgraph et qu'elle échoue, vous devrez brûler manuellement vos parts de curation. Vous pouvez alors signaler sur la nouvelle version du subgraph, encourant ainsi une taxe de curation de 1%. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. 
As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## FAQs sur la Curation ### 1. Quel pourcentage des frais de requête les Curateurs perçoivent-ils? -En signalant sur un subgraph, vous gagnerez une part de tous les frais de requête générés par le subgraph. 10% de tous les frais de requête vont aux Curateurs au prorata de leurs parts de curation. Ces 10% sont soumis à la gouvernance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Comment décider quels sont les subgraphs de haute qualité sur lesquels on peut émettre un signal ? +### 2. How do I decide which Subgraphs are high quality to signal on? -Identifier des subgraphs de haute qualité est une tâche complexe, mais il existe de multiples approches.. En tant que Curateur, vous souhaitez trouver des subgraphs fiables qui génèrent un volume de requêtes élevé. Un subgraph fiable peut être précieux s’il est complet, précis et s’il répond aux besoins en données d’une dapp. Un subgraph mal conçu pourrait avoir besoin d'être révisé ou republié, et peut aussi finir par échouer. 
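The 10% curator share of query fees mentioned in the FAQ above is distributed pro-rata by curation shares. A small numeric sketch (hypothetical figures, not protocol code):

```python
# Sketch of the curator query-fee share described above: 10% of all query
# fees, split pro-rata by curation shares. All figures are hypothetical.
CURATOR_FEE_SHARE = 0.10  # governance-adjustable share cited in the text

def curator_query_fees(total_fees_grt: float, my_shares: float, total_shares: float) -> float:
    """One curator's pro-rata slice of the curators' 10% of query fees."""
    return total_fees_grt * CURATOR_FEE_SHARE * my_shares / total_shares

# Holding 50 of 500 curation shares on a Subgraph that earned 10,000 GRT in fees:
print(curator_query_fees(10_000, my_shares=50, total_shares=500))
```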
Il est crucial pour les Curateurs d'examiner l'architecture ou le code d'un subgraph afin d'évaluer sa valeur. Ainsi : +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Les Curateurs peuvent utiliser leur compréhension d'un réseau pour essayer de prédire comment un subgraph individuel peut générer un volume de requêtes plus élevé ou plus faible à l'avenir -- Les Curateurs doivent également comprendre les métriques disponibles via Graph Explorer. Des métriques telles que le volume de requêtes passées et l'identité du développeur du subgraph peuvent aider à déterminer si un subgraph mérite ou non d'être signalé. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Quel est le coût de la mise à jour d'un subgraph ? +### 3. What’s the cost of updating a Subgraph? -La migration de vos parts de curation (curation shares) vers une nouvelle version de subgraph entraîne une taxe de curation de 1 %. Les Curateurs peuvent choisir de s'abonner à la dernière version d'un subgraph. 
Lorsque les parts de Curateurs sont automatiquement migrées vers une nouvelle version, les Curateurs paieront également une demi-taxe de curation, soit 0,5 %, car la mise à niveau (upgrade) des subgraphs est une action onchain qui coûte du gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. À quelle fréquence puis-je mettre à jour mon subgraph ? +### 4. How often can I update my Subgraph? -Il est conseillé de ne pas mettre à jour vos subgraphs trop fréquemment. Voir la question ci-dessus pour plus de détails. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Puis-je vendre mes parts de curateurs ? diff --git a/website/src/pages/fr/resources/roles/delegating/delegating.mdx b/website/src/pages/fr/resources/roles/delegating/delegating.mdx index 83dcd5dfc17c..5425b865ba2e 100644 --- a/website/src/pages/fr/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/fr/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Délégation --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Pour commencer à déléguer tout de suite, consultez [déléguer sur The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). ## Aperçu -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Les Déléguateurs gagnent des GRT en déléguant des GRT aux indexeurs, ce qui contribue à la sécurité et à la fonctionnalité du réseau. ## Avantages de la délégation -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. 
+- Renforcer la sécurité et l'évolutivité du réseau en soutenant les Indexeurs.
+- Gagner une partie des récompenses générées par les Indexeurs.

## Comment fonctionne la délégation ?

-Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to.
+Les Déléguateurs reçoivent des récompenses GRT de la part de l'Indexeur ou des Indexeurs auxquels ils choisissent de déléguer leurs GRT.

-An Indexer's ability to process queries and earn rewards depends on three key factors:
+La capacité d'un Indexeur à traiter les requêtes et à obtenir des récompenses dépend de trois facteurs clés :

-1. The Indexer's Self-Stake (GRT staked by the Indexer).
-2. The total GRT delegated to them by Delegators.
-3. The price the Indexer sets for queries.
+1. Le Self-Stake de l'Indexeur (GRT stakés par l'Indexeur).
+2. Le total des GRT qui lui ont été délégués par les Déléguateurs.
+3. Le prix que l'Indexeur fixe pour les requêtes.

-The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer.
+Plus le nombre de GRT stakés et délégués à un Indexeur est important, plus le nombre de requêtes qu'il peut traiter est élevé, ce qui se traduit par des récompenses potentielles plus importantes tant pour le Déléguateur que pour l'Indexeur.

-### What is Delegation Capacity?
+### Qu'est-ce que la capacité de délégation ?

-Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake.
+La capacité de délégation fait référence au montant maximum de GRT qu'un Indexeur peut accepter de la part des Déléguateurs, en fonction de la mise personnelle de l’Indexeur (Self-Stake).

-The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT.
+The Graph Network comprend un ratio de délégation de 16, ce qui signifie qu'un Indexeur peut accepter jusqu'à 16 fois son Self-Stake en GRT délégués. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Par exemple, si un indexeur a un Self-Stake de 1 million de GRT, sa capacité de délégation est de 16 millions de GRT. -### Why Does Delegation Capacity Matter? +### Pourquoi la capacité de délégation est-elle importante ? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Si un Indexeur dépasse sa capacité de délégation, les récompenses pour tous les Déléguateurs sont diluées parce que l'excédent de GRT délégué ne peut pas être utilisé efficacement dans le protocole. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +Il est donc essentiel que les Déléguateurs évaluent la capacité de délégation actuelle d'un Indexeur avant de le sélectionner. -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Les Indexeurs peuvent augmenter leur capacité de délégation en augmentant leur Self-Stake, ce qui a pour effet d'augmenter la limite des jetons délégués. -## Delegation on The Graph +## Délégation sur The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Veuillez noter que ce guide ne couvre pas les étapes telles que la configuration de MetaMask. La communauté Ethereum propose une [ressource complète sur les portefeuilles](https://ethereum.org/en/wallets/). 
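The delegation-capacity rule above is a fixed multiple of Self-Stake. A minimal sketch of the 16× ratio and the 1M-GRT example from the text (illustrative only):

```python
# Sketch of the delegation-capacity rule described above (illustrative only).
DELEGATION_RATIO = 16  # protocol-wide ratio cited in the text

def delegation_capacity(self_stake_grt: float) -> float:
    """Maximum delegated GRT an Indexer can productively accept."""
    return self_stake_grt * DELEGATION_RATIO

# The example from the text: 1M GRT of Self-Stake -> 16M GRT of capacity.
print(delegation_capacity(1_000_000))
```

Delegating beyond this cap dilutes rewards, since the excess GRT cannot be used effectively within the protocol.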
-There are two sections in this guide:
+Ce guide comporte deux sections :

- Les risques de la délégation de jetons dans The Graph Network
- Comment calculer les rendements escomptés en tant que Délégateur

@@ -70,17 +70,17 @@ En tant que Délégateur, il est important de comprendre ce qui suit :

### La période de retrait de délégation

-When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period.
+Lorsqu'un Délégateur choisit de retirer sa délégation, ses jetons sont soumis à une période de retrait de 28 jours.

-This means they cannot transfer their tokens or earn any rewards for 28 days.
+Cela signifie qu'ils ne peuvent pas transférer leurs jetons ou gagner des récompenses pendant 28 jours.

-After the undelegation period, GRT will return to your crypto wallet.
+Après la période de retrait de délégation, les GRT retourneront dans votre portefeuille crypto.

### Pourquoi ceci est-il important ?

-If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards.
+Si vous choisissez un Indexeur qui n'est pas digne de confiance ou qui ne fait pas du bon travail, vous voudrez retirer la délégation. Cela signifie que vous perdrez des occasions de gagner des récompenses.

-As a result, it’s recommended that you choose an Indexer wisely.
+Il est donc recommandé de bien choisir son Indexeur.

![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png)

@@ -96,25 +96,25 @@ Pour comprendre comment choisir un Indexeur fiable, vous devez comprendre les pa

- **Query Fee Cut** - C’est la même chose que l’Indexing Reward Cut, mais cela s’applique aux revenus des frais de requête que l’Indexeur perçoit.

-- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations.
+- Il est fortement recommandé d'explorer [Le Discord de The Graph](https://discord.gg/graphprotocol) pour déterminer quels Indexeurs ont les meilleures réputations sociales et techniques.

-Many Indexers are active in Discord and will be happy to answer your questions.
+De nombreux Indexeurs sont actifs sur Discord et seront heureux de répondre à vos questions.

## Calcul du rendement attendu par les Délégateurs

-> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one).
+> Calculez le retour sur investissement de votre délégation [ici](https://thegraph.com/explorer/delegate?chain=arbitrum-one).

-A Delegator must consider a variety of factors to determine a return:
+Le Délégateur doit tenir compte de plusieurs facteurs pour déterminer un retour :

-An Indexer's ability to use the delegated GRT available to them impacts their rewards.
+La capacité d'un Indexeur à utiliser les GRT délégués dont il dispose a un impact sur ses récompenses.

-If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators.
+Si un Indexeur n'alloue pas tous les GRT à sa disposition, il risque de ne pas maximiser ses gains potentiels et ceux de ses Délégateurs.

-Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed.
+Les Indexeurs peuvent clôturer une allocation et collecter les récompenses à tout moment dans la fenêtre de 1 à 28 jours. Toutefois, si les récompenses ne sont pas perçues rapidement, le montant total des récompenses peut sembler inférieur, même si un pourcentage des récompenses n'est pas réclamé.
### Considérant la réduction des frais d'interrogation et la réduction des frais d'indexation

-You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts.
+Vous devriez choisir un Indexeur qui est transparent quant à la fixation de sa réduction de frais de requête (Query Fee Cut) et de sa réduction de frais d'indexation (Indexing Fee Cut).

La formule est :

diff --git a/website/src/pages/fr/resources/roles/delegating/undelegating.mdx b/website/src/pages/fr/resources/roles/delegating/undelegating.mdx
index e4b61c71142a..2fadb116424e 100644
--- a/website/src/pages/fr/resources/roles/delegating/undelegating.mdx
+++ b/website/src/pages/fr/resources/roles/delegating/undelegating.mdx
@@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the

1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio.

2. Click on your profile. You can find it on the top right corner of the page.
-
-   - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead.

3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to.

4. Click on the Indexer from which you wish to withdraw your tokens.
-
-   - Make sure to note the specific Indexer, as you will need to find them again to withdraw.

5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below:

@@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the

### Étape par Étape

1. Find your delegation transaction on Arbiscan.
-
-   - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a)

2.
Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Ressources supplémentaires diff --git a/website/src/pages/fr/resources/subgraph-studio-faq.mdx b/website/src/pages/fr/resources/subgraph-studio-faq.mdx index 10300b3d9ada..c7c261788a00 100644 --- a/website/src/pages/fr/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/fr/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQ ## 1. Qu'est-ce que Subgraph Studio ? -[Subgraph Studio](https://thegraph.com/studio/) est une dapp permettant de créer, gérer et publier des subgraphs et des clés API. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Comment créer une clé API ? @@ -18,14 +18,14 @@ Oui ! Vous pouvez créer plusieurs clés API à utiliser dans différents projet Après avoir créé une clé API, dans la section Sécurité, vous pouvez définir les domaines qui peuvent interroger une clé API spécifique. -## Puis-je transférer mon subgraph à un autre propriétaire ? +## 5. Can I transfer my Subgraph to another owner? 
-Oui, les subgraphs qui ont été publiés sur Arbitrum One peuvent être transférés vers un nouveau portefeuille ou un Multisig. Vous pouvez le faire en cliquant sur les trois points à côté du bouton 'Publish' sur la page des détails du subgraph et en sélectionnant 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Notez que vous ne pourrez plus voir ou modifier le subgraph dans Studio une fois qu'il aura été transféré. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## Comment trouver les URL de requête pour les sugraphs si je ne suis pas le développeur du subgraph que je veux utiliser ? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -Vous pouvez trouver l'URL de requête de chaque subgraph dans la section Détails du subgraph de Graph Explorer. Lorsque vous cliquez sur le bouton “Requête”, vous serez redirigé vers un volet dans lequel vous pourrez afficher l'URL de requête du subgraph qui vous intéresse. Vous pouvez ensuite remplacer le placeholder `` par la clé API que vous souhaitez exploiter dans Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -N'oubliez pas que vous pouvez créer une clé API et interroger n'importe quel subgraph publié sur le réseau, même si vous créez vous-même un subgraph. Ces requêtes via la nouvelle clé API, sont des requêtes payantes comme n'importe quelle autre sur le réseau. 
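Once you have the Subgraph's query URL and an API key, querying is a plain GraphQL POST. The sketch below is hypothetical: the endpoint, Subgraph ID, and entity names are placeholders, so substitute the query URL shown in Graph Explorer and your own key from Subgraph Studio:

```python
# Hypothetical sketch of querying a published Subgraph with a GraphQL POST.
# The endpoint, Subgraph ID, and entity fields below are placeholders; use
# the query URL from Graph Explorer and an API key from Subgraph Studio.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, created in Subgraph Studio
QUERY_URL = f"https://gateway.example.com/api/{API_KEY}/subgraphs/id/SUBGRAPH_ID"

body = json.dumps({"query": "{ tokens(first: 5) { id symbol } }"}).encode("utf-8")
request = urllib.request.Request(
    QUERY_URL,
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once QUERY_URL points at a real endpoint:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```

Requests made this way are paid queries like any other on the network, metered against the API key's billing balance.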
+Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via the new API key are paid queries, just like any other on the network.

diff --git a/website/src/pages/fr/resources/tokenomics.mdx b/website/src/pages/fr/resources/tokenomics.mdx
index 27bbbee1af4d..7568b69ebd35 100644
--- a/website/src/pages/fr/resources/tokenomics.mdx
+++ b/website/src/pages/fr/resources/tokenomics.mdx
@@ -1,103 +1,103 @@
---
title: Les tokenomiques du réseau The Graph
sidebarTitle: Tokenomics
-description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works.
+description: The Graph Network est encouragé par de puissantes tokenomics. Voici comment fonctionne GRT, le jeton d'utilité de travail natif de The Graph.
---

## Aperçu

-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.

## Spécificités⁠

-The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network.
+Le modèle de The Graph s'apparente à un modèle B2B2C, mais il est piloté par un réseau décentralisé où les participants collaborent pour fournir des données aux utilisateurs finaux en échange de récompenses GRT. GRT est le jeton d'utilité de The Graph. Il coordonne et encourage l'interaction entre les fournisseurs de données et les consommateurs au sein du réseau. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph joue un rôle essentiel en rendant les données de la blockchain plus accessibles et en soutenant une marketplace pour leur échange. Pour en savoir plus sur le modèle de facturation de The Graph, consultez ses [plans gratuits et de croissance](/subgraphs/billing/). -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- Adresse du jeton GRT sur le réseau principal : [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- Adresse du jeton GRT sur Arbitrum One : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) ## Les rôles des participants au réseau -There are four primary network participants: +Les participants au réseau sont au nombre de quatre : -1. Delegators - Delegate GRT to Indexers & secure the network +1. Délégateurs - Délèguent des GRT aux Indexeurs & sécurisent le réseau -2. Curateurs - Trouver les meilleurs subgraphs pour les indexeurs +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. 
Indexeurs - épine dorsale des données de la blockchain -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Les Fishermen et les arbitres font également partie intégrante du succès du réseau grâce à d'autres contributions, soutenant le travail des autres participants principaux. Pour plus d'informations sur les rôles du réseau, [lire cet article](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Diagramme de la tokenomic](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Délégateurs (gagnent passivement des GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Par exemple, si un Délégateur délègue 15 000 GRT à un Indexeur offrant 10 %, le Délégateur recevra environ 1 500 GRT de récompenses par an. -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. 
If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days.
+Une taxe de délégation de 0,5 %, qui est brûlée, est prélevée chaque fois qu'un Délégateur délègue des GRT sur le réseau. Si un Délégateur choisit de retirer les GRT qu'il a délégués, il doit attendre la période de déverrouillage de 28 époques. Chaque époque compte 6 646 blocs, ce qui signifie que 28 époques représentent environ 26 jours.

-If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice.
+Si vous lisez ceci, vous pouvez devenir Délégateur dès maintenant en vous rendant sur la [page des participants au réseau](https://thegraph.com/explorer/participants/indexers), et en déléguant des GRT à un Indexeur de votre choix.

## Curateurs (Gagnez des GRT)

-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.

-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
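The delegation figures quoted above (0.5% delegation tax, the ~1,500 GRT example, and the 28-epoch unbonding period) reduce to short arithmetic. The sketch below assumes a ~12-second Ethereum block time, which is not stated in the text:

```python
# Illustrative arithmetic for the delegation figures quoted above.
# SECONDS_PER_BLOCK is an assumption (post-merge Ethereum block time).
DELEGATION_TAX = 0.005   # 0.5% burned on each delegation
BLOCKS_PER_EPOCH = 6_646
UNBONDING_EPOCHS = 28
SECONDS_PER_BLOCK = 12   # assumption, not stated in the text

delegated = 15_000 * (1 - DELEGATION_TAX)  # GRT left after the delegation tax
annual_reward = delegated * 0.10           # an Indexer passing through 10%

unbonding_days = UNBONDING_EPOCHS * BLOCKS_PER_EPOCH * SECONDS_PER_BLOCK / 86_400

print(f"delegated after tax: {delegated:.1f} GRT")
print(f"annual reward at 10%: {annual_reward:.1f} GRT")  # in line with the ~1,500 GRT example
print(f"unbonding period: {unbonding_days:.1f} days")    # roughly the 26 days cited
```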
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. -## Developers +## Développeurs -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Création d'un subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
-### Interroger un subgraph existant
+### Querying an existing Subgraph

-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.

-Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
+Les Subgraphs sont [interrogés à l'aide de GraphQL](/subgraphs/querying/introduction/), et les frais d'interrogation sont payés avec des GRT dans [Subgraph Studio](https://thegraph.com/studio/). Les frais d'interrogation sont distribués aux participants au réseau en fonction de leur contribution au protocole.

-1% of the query fees paid to the network are burned.
+1% des frais de requête payés au réseau sont brûlés.

-## Indexers (Earn GRT)
+## Indexeurs (Gagnez des GRT)

-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.

Les Indexeurs peuvent gagner des récompenses en GRT de deux façons :

-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1.
**Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Pour faire fonctionner un nœud d'indexation, les Indexeurs doivent staker 100 000 GRT ou plus avec le réseau. Les Indexeurs sont incités à staker leurs propres GRT (self-stake) proportionnellement au nombre de requêtes qu'ils traitent. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake.
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Le montant des récompenses reçues par un Indexeur peut varier en fonction du self-stake de l'indexeur, de la délégation acceptée, de la qualité du service et de nombreux autres facteurs. ## Token Supply : Incinération & Emission -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. -![Total burned GRT](/img/total-burned-grt.jpeg) +![Total de GRT brûlés](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. +En plus de ces activités d'incinération régulières, le jeton GRT dispose également d'un mécanisme de réduction (slashing) pour pénaliser les comportements malveillants ou irresponsables des Indexeurs. Lorsqu'un Indexeur est sanctionné, 50 % de ses récompenses d'indexation pour l'époque sont brûlées (l'autre moitié est versée au fisherman), et sa participation personnelle est réduite de 2,5 %, la moitié de ce montant étant brûlée. Les Indexeurs sont ainsi fortement incités à agir dans l'intérêt du réseau et à contribuer à sa sécurité et à sa stabilité.
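The issuance, burn, and slashing rates quoted above can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming the approximate figures from the text (3% annual issuance, ~1% annual burn, 2.5% self-stake slash with half of it burned); these helpers are illustrative, not protocol code:

```typescript
// Back-of-the-envelope arithmetic for the rates described above (illustrative only).
const ANNUAL_ISSUANCE = 0.03 // ~3% of supply issued to Indexers each year
const ANNUAL_BURN = 0.01 // ~1% of supply burned each year via taxes and query fees

// Net yearly change in supply given the two opposing mechanisms.
function netAnnualSupplyChange(supply: number): number {
  return supply * (ANNUAL_ISSUANCE - ANNUAL_BURN)
}

// Slashing: self-stake is cut by 2.5%, and half of that slashed amount is burned.
function slashBurn(selfStake: number): number {
  return selfStake * 0.025 * 0.5
}

// On the initial 10 billion GRT supply, net growth is roughly 200 million GRT per year:
console.log(netAnnualSupplyChange(10_000_000_000)) // ~200,000,000
// A minimum-stake Indexer (100,000 GRT) slashed once sees 1,250 GRT of stake burned:
console.log(slashBurn(100_000))
```

Actual amounts vary with network activity, since the ~1% burn rate is an observed aggregate rather than a fixed parameter.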
## Amélioration du protocole -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network est en constante évolution et des améliorations sont constamment apportées à la conception économique du protocole afin d'offrir la meilleure expérience possible à tous les participants au réseau. The Graph Council supervise les modifications du protocole et les membres de la communauté sont encouragés à y participer. Participez aux améliorations du protocole sur [le Forum The Graph](https://forum.thegraph.com/). diff --git a/website/src/pages/fr/sps/introduction.mdx b/website/src/pages/fr/sps/introduction.mdx index 64f5b60d32fe..0454b6f4acee 100644 --- a/website/src/pages/fr/sps/introduction.mdx +++ b/website/src/pages/fr/sps/introduction.mdx @@ -3,28 +3,29 @@ title: Introduction aux Subgraphs alimentés par Substreams sidebarTitle: Présentation --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Améliorez l'efficacité et l'évolutivité de votre subgraph en utilisant [Substreams](/substreams/introduction/) pour streamer des données blockchain pré-indexées. ## Aperçu -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Utilisez un package Substreams (`.spkg`) comme source de données pour donner à votre Subgraph l'accès à un flux de données blockchain pré-indexées. 
Cela permet un traitement des données plus efficace et évolutif, en particulier avec des réseaux de blockchain complexes ou de grande taille. ### Spécificités Il existe deux méthodes pour activer cette technologie : -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Utilisation des [déclencheurs](/sps/triggers/) de Substreams** : Consommez à partir de n'importe quel module Substreams en important le modèle Protobuf par le biais d'un gestionnaire de subgraph et déplacez toute votre logique dans un subgraph. Cette méthode crée les entités du subgraph directement dans le subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **En utilisant [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)** : En écrivant une plus grande partie de la logique dans Substreams, vous pouvez consommer la sortie du module directement dans [graph-node](/indexing/tooling/graph-node/). Dans graph-node, vous pouvez utiliser les données de Substreams pour créer vos entités Subgraph. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +Vous pouvez choisir où placer votre logique, soit dans le subgraph, soit dans Substreams.
Cependant, réfléchissez à ce qui correspond à vos besoins en matière de données, car Substreams a un modèle parallélisé et les déclencheurs sont consommés de manière linéaire dans graph node. ### Ressources supplémentaires -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Consultez les liens suivants pour obtenir des tutoriels sur l'utilisation de l'outil de génération de code afin de créer rapidement votre premier projet Substreams de bout en bout : - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/fr/sps/sps-faq.mdx b/website/src/pages/fr/sps/sps-faq.mdx index 0924ecb989ca..9519360ba265 100644 --- a/website/src/pages/fr/sps/sps-faq.mdx +++ b/website/src/pages/fr/sps/sps-faq.mdx @@ -5,27 +5,27 @@ sidebarTitle: FAQ ## Que sont les sous-flux ? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. +Substreams est un moteur de traitement exceptionnellement puissant capable de consommer de riches flux de données blockchain. Il vous permet d'affiner et de façonner les données de la blockchain pour une digestion rapide et transparente par les applications des utilisateurs finaux. Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. 
It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +Substreams est développé par [StreamingFast](https://www.streamingfast.io/). Visitez la [Documentation Substreams](/substreams/introduction/) pour en savoir plus sur Substreams. -## Qu'est-ce qu'un subgraph alimenté par des courants de fond ? +## Qu'est-ce qu'un subgraph alimenté par Substreams ? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +Les [subgraphs alimentés par Substreams](/sps/introduction/) combinent la puissance de Substreams avec la capacité d'interrogation des subgraphs. Lors de la publication d'un subgraph alimenté par Substreams, les données produites par les transformations Substreams peuvent [produire des changements d'entité](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) compatibles avec les entités du subgraph. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. 
+Si vous êtes déjà familiarisé avec le développement de subgraphs, notez que les subgraphs alimentés par Substreams peuvent être interrogés comme s'ils avaient été produits par la couche de transformation AssemblyScript. Cela permet de bénéficier de tous les avantages des subgraphs, y compris d'une API GraphQL dynamique et flexible. -## En quoi les subgraphs alimentés par les courants secondaires sont-ils différents des subgraphs ? +## En quoi les Subgraphs alimentés par Substreams se distinguent-ils des Subgraphs ? Les subgraphs sont constitués de sources de données qui spécifient des événements onchain et comment ces événements doivent être transformés via des gestionnaires écrits en Assemblyscript. Ces événements sont traités de manière séquentielle, en fonction de l'ordre dans lequel ils se produisent onchain. -En revanche, les subgraphs alimentés par des substreams ont une seule source de données qui référence un package de substreams, qui est traité par Graph Node. Les substreams ont accès à des données onchain supplémentaires granulaires par rapport aux subgraphs conventionnels et peuvent également bénéficier d'un traitement massivement parallélisé, ce qui peut signifier des temps de traitement beaucoup plus rapides. +En revanche, les subgraphs alimentés par Substreams ont une source de données unique qui fait référence à un package substream, qui est traité par le Graph Node. Les Substreams ont accès à des données granulaires supplémentaires onchain par rapport aux subgraphs conventionnels et peuvent également bénéficier d'un traitement massivement parallélisé, ce qui peut se traduire par des temps de traitement beaucoup plus rapides. -## Quels sont les avantages de l'utilisation de subgraphs alimentés par des courants descendants ? +## Quels sont les avantages de l'utilisation des subgraphs alimentés par Substreams ? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs.
They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Les subgraphs alimentés par Substreams combinent tous les avantages de Substreams avec la capacité d'interrogation des subgraphs. Ils apportent à The Graph une plus grande composabilité et une indexation très performante. Ils permettent également de nouveaux cas d'utilisation des données ; par exemple, une fois que vous avez construit votre subgraph alimenté par Substreams, vous pouvez réutiliser vos [modules Substreams](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) pour sortir vers différents [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) tels que PostgreSQL, MongoDB et Kafka. ## Quels sont les avantages de Substreams ? @@ -35,7 +35,7 @@ L'utilisation de Substreams présente de nombreux avantages, notamment: - Indexation haute performance : Indexation plus rapide d'un ordre de grandeur grâce à des grappes d'opérations parallèles à grande échelle (comme BigQuery). -- Sortez vos données n'importe où : Transférez vos données où vous le souhaitez : PostgreSQL, MongoDB, Kafka, subgraphs, fichiers plats, Google Sheets. +- "Sinkez" n'importe où : "Sinkez" vos données où vous le souhaitez : PostgreSQL, MongoDB, Kafka, Subgraphs, fichiers plats, Google Sheets. - Programmable : Utilisez du code pour personnaliser l'extraction, effectuer des agrégations au moment de la transformation et modéliser vos résultats pour plusieurs puits. 
@@ -63,19 +63,19 @@ L'utilisation de Firehose présente de nombreux avantages, notamment: - Exploite les fichiers plats : Les données de la blockchain sont extraites dans des fichiers plats, la ressource informatique la moins chère et la plus optimisée disponible. -## Où les développeurs peuvent-ils trouver plus d'informations sur les subgraphs alimentés par Substreams et sur Substreams ? +## Où les développeurs peuvent-ils trouver plus d'informations sur les Substreams et les Subgraphs alimentés par Substreams ? La [documentation Substreams](/substreams/introduction/) vous explique comment construire des modules Substreams. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +La [documentation sur les subgraphs alimentés par Substreams](/sps/introduction/) vous montrera comment les packager pour les déployer sur The Graph. Le [dernier outil Substreams Codegen](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) vous permettra de lancer un projet Substreams sans aucun code. ## Quel est le rôle des modules Rust dans Substreams ? -Les modules Rust sont l'équivalent des mappeurs AssemblyScript dans les subgraphs. Ils sont compilés dans WASM de la même manière, mais le modèle de programmation permet une exécution parallèle. Ils définissent le type de transformations et d'agrégations que vous souhaitez appliquer aux données brutes de la blockchain. +Les modules Rust sont l'équivalent des mappeurs AssemblyScript dans les Subgraphs. Ils sont compilés dans WASM de la même manière, mais le modèle de programmation permet une exécution parallèle. Ils définissent le type de transformations et d'agrégations que vous souhaitez appliquer aux données brutes de la blockchain. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
+Consultez la [documentation des modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) pour plus de détails. ## Qu'est-ce qui rend Substreams composable ? @@ -85,12 +85,12 @@ Par exemple, Alice peut créer un module de prix DEX, Bob peut l'utiliser pour c ## Comment pouvez-vous créer et déployer un Subgraph basé sur Substreams ? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +Après avoir [défini](/sps/introduction/) un subgraph basé sur Substreams, vous pouvez utiliser Graph CLI pour le déployer dans [Subgraph Studio](https://thegraph.com/studio/). -## Où puis-je trouver des exemples de subgraphs et de subgraphs alimentés par des substreams ? +## Où puis-je trouver des exemples de Substreams et de Subgraphs alimentés par Substreams ? -Vous pouvez visiter [ce repo Github] (https://github.com/pinax-network/awesome-substreams) pour trouver des exemples de Substreams et de subgraphs alimentés par Substreams. +Vous pouvez consulter [ce repo Github](https://github.com/pinax-network/awesome-substreams) pour trouver des exemples de Substreams et de subgraphs alimentés par Substreams. -## Que signifient les subgraphs et les subgraphs alimentés par des substreams pour le réseau graph ? +## Que signifient les Substreams et les subgraphs alimentés par Substreams pour The Graph Network ? L'intégration promet de nombreux avantages, notamment une indexation extrêmement performante et une plus grande composabilité grâce à l'exploitation des modules de la communauté et à leur développement. diff --git a/website/src/pages/fr/sps/triggers.mdx b/website/src/pages/fr/sps/triggers.mdx index 3dea45dc752b..ecd1253f24c7 100644 --- a/website/src/pages/fr/sps/triggers.mdx +++ b/website/src/pages/fr/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use of GraphQL.
## Aperçu -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Les déclencheurs personnalisés vous permettent d'envoyer des données directement dans votre fichier de mappage de subgraph et dans vos entités, qui sont similaires aux tables et aux champs. Cela vous permet d'utiliser pleinement la couche GraphQL. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +En important les définitions Protobuf émises par votre module Substreams, vous pouvez recevoir et traiter ces données dans le gestionnaire de votre subgraph. Cela garantit une gestion efficace et rationalisée des données dans le cadre du Subgraph. -### Defining `handleTransactions` +### Définition de `handleTransactions` -Le code suivant montre comment définir une fonction `handleTransactions` dans un gestionnaire de subgraph. Cette fonction reçoit des données brutes (bytes) Substreams en paramètre et les décode en un objet Transactions. Pour chaque transaction, une nouvelle entité de subgraph est créée. +Le code suivant montre comment définir une fonction `handleTransactions` dans un gestionnaire de Subgraph. Cette fonction reçoit comme paramètre de Substreams des Bytes bruts et les décode en un objet `Transactions`. Pour chaque transaction, une nouvelle entité Subgraph est créée. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Voici ce que vous voyez dans le fichier `mappings.ts` : 1. Les bytes contenant les données Substreams sont décodés en un objet `Transactions` généré, qui est utilisé comme n’importe quel autre objet AssemblyScript 2. 
Boucle sur les transactions -3. Création d’une nouvelle entité de subgraph pour chaque transaction +3. Créer une nouvelle entité de subgraph pour chaque transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Pour découvrir un exemple détaillé de subgraph à déclencheurs, [consultez le tutoriel](/sps/tutorial/). ### Ressources supplémentaires -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Pour élaborer votre premier projet dans le conteneur de développement, consultez l'un des [guides pratiques](/substreams/developing/dev-container/). diff --git a/website/src/pages/fr/sps/tutorial.mdx b/website/src/pages/fr/sps/tutorial.mdx index a923cca0d94e..71659989b6e8 100644 --- a/website/src/pages/fr/sps/tutorial.mdx +++ b/website/src/pages/fr/sps/tutorial.mdx @@ -1,15 +1,15 @@ --- -title: 'Tutoriel : Configurer un Subgraph alimenté par Substreams sur Solana' -sidebarTitle: Tutorial +title: "Tutoriel : Configurer un Subgraph alimenté par Substreams sur Solana" +sidebarTitle: Tutoriel --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Mise en place réussie d'un subgraph alimenté par Substreams basé sur des déclencheurs pour un jeton Solana SPL. 
## Commencer -For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) +Pour un tutoriel vidéo, consultez [Comment indexer Solana avec un subgraph alimenté par des Substreams](/sps/tutorial/#video-tutorial) -### Prerequisites +### Prérequis Avant de commencer, assurez-vous de : @@ -54,7 +54,7 @@ params: # Modifiez les champs param pour répondre à vos besoins ### Étape 2 : Générer le Manifeste du Subgraph -Une fois le projet initialisé, générez un manifeste de subgraph en exécutant la commande suivante dans le Dev Container: +Une fois le projet initialisé, générez un manifeste de subgraph en exécutant la commande suivante dans le Dev Container : ```bash substreams codegen subgraph @@ -70,10 +70,10 @@ dataSources: network: solana-mainnet-beta source: package: - moduleName: map_spl_transfers # Module défini dans le substreams.yaml + moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,9 +81,9 @@ dataSources: ### Étape 3 : Définir les Entités dans `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Définissez les champs que vous souhaitez enregistrer dans vos entités Subgraph en mettant à jour le fichier `schema.graphql`. -Here is an example: +Voici un exemple : ```graphql type MyTransfer @entity { @@ -99,9 +99,9 @@ Ce schéma définit une entité `MyTransfer` avec des champs tels que `id`, `amo ### Étape 4 : Gérer les Données Substreams dans `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Avec les objets Protobuf générés, vous pouvez désormais gérer les données de Substreams décodées dans votre fichier `mappings.ts` trouvé dans le répertoire `./src`. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +L'exemple ci-dessous montre comment extraire vers les entités du subgraph les transferts non dérivés associés à l'Id du compte Orca : ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,13 +140,13 @@ Pour générer les objets Protobuf en AssemblyScript, exécutez la commande suiv npm run protogen ``` -Cette commande convertit les définitions Protobuf en AssemblyScript, vous permettant de les utiliser dans le gestionnaire de votre subgraph. +Cette commande convertit les définitions Protobuf en AssemblyScript, ce qui permet de les utiliser dans le gestionnaire du subgraph. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Félicitations ! Vous avez configuré avec succès un subgraph alimenté par Substreams basé sur des déclencheurs pour un jeton Solana SPL. Vous pouvez passer à l'étape suivante en personnalisant votre schéma, vos mappages et vos modules pour les adapter à votre cas d'utilisation spécifique. 
-### Video Tutorial +### Tutoriel Vidéo diff --git a/website/src/pages/fr/subgraphs/_meta-titles.json b/website/src/pages/fr/subgraphs/_meta-titles.json index 0556abfc236c..e10948c648a1 100644 --- a/website/src/pages/fr/subgraphs/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "guides": "Guides pratiques", + "best-practices": "Les meilleures pratiques" } diff --git a/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx index 2015af316873..33594aca38e1 100644 --- a/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Meilleure Pratique Subgraph 4 - Améliorer la Vitesse d'Indexation en Évitant les eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Éviter les eth_calls --- ## TLDR -Les `eth_calls` sont des appels qui peuvent être faits depuis un subgraph vers un nœud Ethereum. Ces appels prennent un temps considérable pour renvoyer des données, ralentissant ainsi l'indexation. Si possible, concevez des smart contracts pour émettre toutes les données dont vous avez besoin afin de ne pas avoir à utiliser des `eth_calls`. +Les `eth_calls` sont des appels qui peuvent être effectués depuis un Subgraph vers un nœud Ethereum. Ces appels prennent beaucoup de temps pour renvoyer les données, ce qui ralentit l'indexation. Si possible, concevez des contrats intelligents pour émettre toutes les données dont vous avez besoin afin de ne pas avoir à utiliser les `eth_calls`. ## Pourquoi Éviter les `eth_calls` est une Bonne Pratique -Les subgraphs sont optimisés pour indexer les données des événements émis par les smart contracts. 
Un subgraph peut également indexer les données provenant d'un `eth_call`, cependant, cela peut considérablement ralentir l'indexation du subgraph car les `eth_call` nécessitent de faire des appels externes aux smart contracts. La réactivité de ces appels dépend non pas du subgraph mais de la connectivité et de la réactivité du nœud Ethereum interrogé. En minimisant ou en éliminant les `eth_call` dans nos subgraphs, nous pouvons améliorer considérablement notre vitesse d'indexation. +Les subgraphs sont optimisés pour indexer les données d'événements émises par les contrats intelligents. Un subgraph peut également indexer les données provenant d'un `eth_call`, mais cela peut ralentir considérablement l'indexation du subgraph car les `eth_calls` nécessitent de faire des appels externes aux smart contracts. La réactivité de ces appels ne dépend pas du subgraph mais de la connectivité et de la réactivité du nœud Ethereum interrogé. En minimisant ou en éliminant les eth_calls dans nos subgraphs, nous pouvons améliorer de manière significative notre vitesse d'indexation. ### À quoi ressemble un eth_call ? -Les `eth_calls` sont souvent nécessaires lorsque les données requises pour un subgraph ne sont pas disponibles par le biais d'événements émis. Par exemple, considérons un scénario où un subgraph doit identifier si les tokens ERC20 font partie d'un pool spécifique, mais le contrat n'émet qu'un événement `Transfer` de base et n'émet pas un événement contenant les données dont nous avons besoin : +Les `eth_calls` sont souvent nécessaires lorsque les données requises pour un Subgraph ne sont pas disponibles par le biais des événements émis. 
Par exemple, considérons un scénario dans lequel un Subgraph doit identifier si les tokens ERC20 font partie d'un pool spécifique, mais le contrat n'émet qu'un événement `Transfer` de base et n'émet pas d'événement contenant les données dont nous avons besoin : ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Cela fonctionne, mais ce n'est pas idéal car cela ralentit l'indexation de notre subgraph. +Cette méthode est fonctionnelle, mais elle n'est pas idéale car elle ralentit l'indexation de notre Subgraph. ## Comment Éliminer les `eth_calls` @@ -54,7 +54,7 @@ Idéalement, le smart contract devrait être mis à jour pour émettre toutes le event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -Avec cette mise à jour, le subgraph peut indexer directement les données requises sans appels externes : +Grâce à cette mise à jour, le Subgraph peut indexer directement les données requises sans appel externe : ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ La partie mise en évidence en jaune est la déclaration d'appel. La partie avan Le handler lui-même accède au résultat de ce `eth_call` exactement comme dans la section précédente en se liant au contrat et en effectuant l'appel. graph-node met en cache les résultats des `eth_calls` déclarés en mémoire et l'appel depuis le handler récupérera le résultat depuis ce cache en mémoire au lieu d'effectuer un appel RPC réel. -Note : Les eth_calls déclarés ne peuvent être effectués que dans les subgraphs avec specVersion >= 1.2.0. +Remarque : les appels eth_call déclarés ne peuvent être effectués que dans les Subgraphs dont la version specVersion est >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. 
+Vous pouvez améliorer de manière significative les performances d'indexation en minimisant ou en éliminant les `eth_calls` dans vos Subgraphs. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx index 0f735fd35304..2966865fe02c 100644 --- a/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Bonne pratique pour les subgraphs 2 - Améliorer la Réactivité de l'Indexation et des Requêtes en Utilisant @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Tableaux avec @derivedFrom --- ## TLDR -Les tableaux dans votre schéma peuvent vraiment ralentir les performances d'un subgraph lorsqu'ils dépassent des milliers d'entrées. Si possible, la directive `@derivedFrom` devrait être utilisée lors de l'utilisation des tableaux car elle empêche la formation de grands tableaux, simplifie les gestionnaires et réduit la taille des entités individuelles, améliorant considérablement la vitesse d'indexation et la performance des requêtes. +Les tableaux dans votre schéma peuvent vraiment ralentir les performances d'un Subgraph lorsqu'ils dépassent des milliers d'entrées. Si possible, la directive `@derivedFrom` devrait être utilisée lors de l'utilisation de tableaux, car elle empêche la formation de grands tableaux, simplifie les gestionnaires et réduit la taille des entités individuelles, ce qui améliore considérablement la vitesse d'indexation et les performances des requêtes. ## Comment Utiliser la Directive `@derivedFrom` @@ -15,7 +15,7 @@ Il vous suffit d'ajouter une directive `@derivedFrom` après votre tableau dans comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` crée des relations efficaces de un à plusieurs, permettant à une entité de s'associer dynamiquement à plusieurs entités liées en fonction d'un champ dans l'entité liée. 
Cette approche élimine la nécessité pour les deux côtés de la relation de stocker des données dupliquées, rendant le subgraph plus efficace.
+`@derivedFrom` crée des relations efficaces d'un à plusieurs, permettant à une entité de s'associer dynamiquement à plusieurs entités apparentées sur la base d'un champ de l'entité apparentée. Cette approche évite aux deux parties de la relation de stocker des données en double, ce qui rend le Subgraph plus efficace.

### Exemple de cas d'utilisation de `@derivedFrom`

@@ -60,30 +60,30 @@ type Comment @entity {

En ajoutant simplement la directive `@derivedFrom`, ce schéma ne stockera les "Comments" que du côté "Comments" de la relation et non du côté "Post" de la relation.

Les tableaux sont stockés sur des lignes individuelles, ce qui leur permet de s'étendre de manière significative. Cela peut entraîner des tailles particulièrement grandes si leur croissance est illimitée.

-Cela rendra non seulement notre subgraph plus efficace, mais débloquera également trois fonctionnalités :
+Cela ne rendra pas seulement notre Subgraph plus efficace, mais débloquera également trois fonctionnalités :

1. Nous pouvons interroger le `Post` et voir tous ses commentaires.
2. Nous pouvons faire une recherche inverse et interroger n'importe quel Commentaire et voir de quel post il provient.
-3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. Nous pouvons utiliser les [Chargeurs de champs dérivés](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) pour débloquer la possibilité d'accéder directement aux données des relations virtuelles et de les manipuler dans nos mappages de Subgraph.

## Conclusion

-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Utilisez la directive `@derivedFrom` dans les Subgraphs pour gérer efficacement les tableaux à croissance dynamique, en améliorant l'efficacité de l'indexation et la récupération des données. -For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +Pour une explication plus détaillée des stratégies permettant d'éviter les tableaux volumineux, consultez le blog de Kevin Jones : [Bonnes pratiques en matière de développement de subgraphs : éviter les tableaux volumineux](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. 
[Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/)

-6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/)
+6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/)

diff --git a/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx
index 3b56e2b7eb6c..e8813f5e8a20 100644
--- a/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx
+++ b/website/src/pages/fr/subgraphs/best-practices/grafting-hotfix.mdx
@@ -1,68 +1,68 @@
---
-title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
-sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing'
+title: Meilleure pratique pour les subgraphs 6 - Utiliser le greffage pour un déploiement rapide des correctifs
+sidebarTitle: Greffage et réparation en environnement de production
---

## TLDR

-Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+Le greffage est une fonctionnalité puissante dans le développement de Subgraphs qui vous permet de construire et de déployer de nouveaux Subgraphs tout en réutilisant les données indexées des Subgraphs existants.

### Aperçu

-This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+Cette fonction permet de déployer rapidement des correctifs pour les problèmes critiques, éliminant ainsi la nécessité de réindexer l'ensemble du Subgraph à partir de zéro. En préservant les données historiques, le greffage minimise les temps d'arrêt et assure la continuité des services de données.
-## Benefits of Grafting for Hotfixes
+## Avantages du greffage pour les correctifs

-1. **Rapid Deployment**
+1. **Déploiement rapide**

-   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
-   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+   - **Minimiser les temps d'arrêt** : Lorsqu'un Subgraph rencontre une erreur critique et cesse d'être indexé, le greffage vous permet de déployer immédiatement un correctif sans attendre la réindexation.
+   - **Récupération immédiate** : Le nouveau Subgraph continue à partir du dernier bloc indexé, garantissant que les services de données restent ininterrompus.

-2. **Data Preservation**
+2. **Préservation des données**

-   - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
-   - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
+   - **Réutilisation des données historiques** : Le greffage copie les données existantes du Subgraph de base, de sorte que vous ne perdez pas de précieux enregistrements historiques.
+   - **Consistance** : Maintient la continuité des données, ce qui est crucial pour les applications qui s'appuient sur des données historiques cohérentes.

-3. **Efficiency**
-   - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
-   - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery.
+3. **Efficacité**
+   - **Économie de temps et de ressources** : Évite le surcoût de calcul lié à la réindexation de grands ensembles de données.
+   - **Focalisation sur les corrections** : Permet aux développeurs de se concentrer sur la résolution des problèmes plutôt que sur la gestion de la récupération des données.

-## Best Practices When Using Grafting for Hotfixes
+## Meilleures pratiques lors de l'utilisation du greffage pour les correctifs

-1. **Initial Deployment Without Grafting**
+1. **Déploiement initial sans greffage**

-   - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
-   - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+   - **Démarrez proprement** : Déployez toujours votre Subgraph initial sans greffage pour vous assurer qu'il est stable et qu'il fonctionne comme prévu.
+   - **Testez minutieusement** : Validez les performances du Subgraph afin de minimiser les besoins en correctifs futurs.

-2. **Implementing the Hotfix with Grafting**
+2. **Mise en œuvre du correctif par greffage**

-   - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
-   - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix.
-   - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph.
-   - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible.
+   - **Identifier le problème** : Lorsqu'une erreur critique se produit, déterminez le numéro de bloc du dernier événement indexé avec succès.
+   - **Créer un nouveau Subgraph** : Développer un nouveau Subgraph qui inclut le correctif.
+   - **Configurer le greffage** : Utiliser le greffage pour copier les données jusqu'au numéro de bloc identifié à partir du Subgraph défaillant.
+   - **Déployer rapidement** : Publier le Subgraph greffé pour rétablir le service dès que possible.

-3. **Post-Hotfix Actions**
+3.
**Actions post-correctif** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. - > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Surveillez les performances** : Assurez-vous que le Subgraph greffé est indexé correctement et que le correctif résout le problème. + - **Républier sans greffer** : Une fois stable, déployer une nouvelle version du Subgraph sans greffe pour une maintenance à long terme. + > Remarque : il n'est pas recommandé de s'appuyer indéfiniment sur le greffage, car cela peut compliquer les mises à jour et la maintenance futures. + - **Mettre à jour les références** : Rediriger tous les services ou applications pour qu'ils utilisent le nouveau Subgraph non greffé. -4. **Important Considerations** - - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. +4. **Considérations importantes** + - **Sélection minutieuse des blocs** : Choisissez soigneusement le numéro du bloc de greffage pour éviter toute perte de données. + - **Conseil** : Utilisez le numéro de bloc du dernier événement correctement traité. + - **Utiliser l'ID de déploiement** : Assurez-vous que vous faites référence à l'ID de déploiement du Subgraph de base, et non à l'ID du Subgraph. 
+   - **Note** : L'ID de déploiement est l'identifiant unique d'un déploiement de Subgraph spécifique.
+   - **Déclaration de fonctionnalité** : N'oubliez pas de déclarer le greffage dans le manifeste du Subgraph sous `features`.

-## Example: Deploying a Hotfix with Grafting
+## Exemple : Déploiement d'un correctif par greffage

-Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
+Supposons que vous ayez un Subgraph suivant un contrat intelligent qui a cessé d'être indexé en raison d'une erreur critique. Voici comment vous pouvez utiliser le greffage pour déployer un correctif.

-1. **Failed Subgraph Manifest (subgraph.yaml)**
+1. **Manifeste du Subgraph défaillant (subgraph.yaml)**

   ```yaml
-   specVersion: 1.0.0
+   specVersion: 1.3.0
    schema:
      file: ./schema.graphql
    dataSources:
@@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
       startBlock: 5000000
       mapping:
         kind: ethereum/events
-        apiVersion: 0.0.7
+        apiVersion: 0.0.9
         language: wasm/assemblyscript
         entities:
           - Withdrawal
@@ -88,9 +88,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
         file: ./src/old-lock.ts
   ```

-2. **New Grafted Subgraph Manifest (subgraph.yaml)**
+2.
**Nouveau manifeste de Subgraph greffé (subgraph.yaml)**

   ```yaml
-   specVersion: 1.0.0
+   specVersion: 1.3.0
    schema:
      file: ./schema.graphql
    dataSources:
@@ -100,10 +100,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
      source:
        address: '0xNewContractAddress'
        abi: Lock
-       startBlock: 6000001 # Block after the last indexed block
+       startBlock: 6000001 # Bloc suivant le dernier bloc indexé
      mapping:
        kind: ethereum/events
-       apiVersion: 0.0.7
+       apiVersion: 0.0.9
       language: wasm/assemblyscript
       entities:
         - Withdrawal
@@ -117,71 +117,71 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
    features:
      - grafting
    graft:
-     base: QmBaseDeploymentID # Deployment ID of the failed subgraph
-     block: 6000000 # Last successfully indexed block
+     base: QmBaseDeploymentID # ID de déploiement du Subgraph défaillant
+     block: 6000000 # Dernier bloc indexé avec succès
   ```

-**Explanation:**
+**Explication :**

-- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
-- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error.
-- **Grafting Configuration**:
-  - **base**: Deployment ID of the failed subgraph.
-  - **block**: Block number where grafting should begin.
+- **Mise à jour de la source de données** : Le nouveau Subgraph pointe vers 0xNewContractAddress, qui pourrait être une version corrigée du contrat intelligent.
+- **Bloc de départ** : Fixé à un bloc après le dernier bloc indexé avec succès afin d'éviter de retraiter l'erreur.
+- **Configuration du greffage** :
+  - **base** : ID de déploiement du Subgraph défaillant.
+  - **block** : Numéro du bloc où le greffage doit commencer.

-3. **Deployment Steps**
+3. **Étapes de déploiement**

-   - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal).
-   - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
-   - **Deploy the Subgraph**:
-     - Authenticate with the Graph CLI.
-     - Deploy the new subgraph using `graph deploy`.
+   - **Mise à jour du code** : Implémentez le correctif dans vos scripts de mappage (par exemple, handleWithdrawal).
+   - **Ajuster le manifeste** : Comme indiqué ci-dessus, mettez à jour le fichier `subgraph.yaml` avec les configurations de greffage.
+   - **Déployer le Subgraph** :
+     - S'authentifier à l'aide de Graph CLI.
+     - Déployer le nouveau Subgraph en utilisant `graph deploy`.

-4. **Post-Deployment**
-   - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point.
-   - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective.
-   - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability.
+4. **Post-Déploiement**
+   - **Vérifier l'indexation** : Vérifier que le Subgraph est correctement indexé à partir du point de greffage.
+   - **Surveiller les données** : S'assurer que les nouvelles données sont capturées et que le correctif est efficace.
+   - **Planifier la republication** : Planifier le déploiement d'une version non greffée pour une stabilité à long terme.

-## Warnings and Cautions
+## Avertissements et précautions

-While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance.
+Bien que le greffage soit un outil puissant pour déployer rapidement des correctifs, il existe des scénarios spécifiques dans lesquels il doit être évité afin de préserver l'intégrité des données et d'assurer des performances optimales.

-- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable.
Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
-- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing.
-- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability.
+- **Modifications de schéma incompatibles** : Si votre correctif nécessite de modifier le type des champs existants ou de supprimer des champs de votre schéma, le greffage n'est pas approprié. Le greffage suppose que le schéma du nouveau Subgraph est compatible avec celui du Subgraph de base. Des modifications incompatibles peuvent entraîner des incohérences et des erreurs dans les données, car les données existantes ne seront pas alignées sur le nouveau schéma.
+- **Révisions importantes de la logique de mappage** : Lorsque le correctif implique des modifications substantielles de votre logique de mappage, telles que la modification du traitement des événements ou des fonctions de gestion, le greffage risque de ne pas fonctionner correctement. La nouvelle logique peut ne pas être compatible avec les données traitées dans le cadre de l'ancienne logique, ce qui entraîne des données incorrectes ou un échec de l'indexation.
+- **Déploiements sur le réseau The Graph** : Le greffage n'est pas recommandé pour les Subgraphs destinés au réseau décentralisé de The Graph (réseau principal). Il peut compliquer l'indexation et peut ne pas être entièrement pris en charge par tous les Indexeurs, ce qui peut entraîner un comportement inattendu ou une augmentation des coûts. Pour les déploiements sur le réseau principal, il est plus sûr de réindexer le Subgraph à partir de zéro pour garantir une compatibilité et une fiabilité totales.

-### Risk Management
+### Gestion des risques

-- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication.
-- **Testing**: Always test grafting in a development environment before deploying to production.
+- **Intégrité des données** : Des numéros de blocs incorrects peuvent entraîner la perte ou la duplication de données.
+- **Test** : Testez toujours le greffage dans un environnement de développement avant de le déployer en production.

## Conclusion

-Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to:
+Le greffage est une stratégie efficace pour déployer des correctifs dans le cadre du développement de Subgraphs, vous permettant de :

-- **Quickly Recover** from critical errors without re-indexing.
-- **Preserve Historical Data**, maintaining continuity for applications and users.
-- **Ensure Service Availability** by minimizing downtime during critical fixes.
+- **Récupérer rapidement** après des erreurs critiques, sans réindexation.
+- **Préserver les données historiques**, en maintenant la continuité pour les applications et les utilisateurs.
+- **Assurer la disponibilité du service** en minimisant les temps d'arrêt lors des corrections critiques.

-However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+Cependant, il est important d'utiliser le greffage de manière judicieuse et de suivre les meilleures pratiques pour atténuer les risques. Après avoir stabilisé votre Subgraph à l'aide du correctif, prévoyez de déployer une version non greffée afin de garantir la maintenabilité à long terme. ## Ressources supplémentaires -- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting -- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. +- **[Documentation sur le greffage](/subgraphs/cookbook/grafting/)** : Remplacer un contrat et conserver son historique avec le greffage +- **[Comprendre les ID de déploiement](/subgraphs/querying/subgraph-id-vs-deployment-id/)** : Apprenez la différence entre l'ID de déploiement et l'ID de subgraph. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +En incorporant le greffage dans votre flux de développement Subgraph, vous pouvez améliorer votre capacité à répondre rapidement aux problèmes, en veillant à ce que vos services de données restent robustes et fiables. -## Subgraph Best Practices 1-6 +## Bonnes pratiques pour les subgraphs 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index ae0e39b2564b..e87150855b2e 100644 --- a/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Bonne pratique pour les subgraphs 3 - Améliorer l'Indexation et les Performances de Recherche en Utilisant des Entités Immuables et des Bytes comme IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Entités immuables et Bytes comme IDs --- ## TLDR @@ -22,7 +22,7 @@ type Transfer @entity(immutable: true) { En rendant l'entité `Transfer` immuable, graph-node est capable de traiter l'entité plus efficacement, améliorant la vitesse d'indexation et la réactivité des requêtes. -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging onchain event data, such as a `Transfer` event being logged as a `Transfer` entity. 
+Les structures des entités immuables ne changeront pas dans le futur. Une entité idéale pour devenir une Entité immuable serait une entité qui enregistre directement les données d'un événement onchain, comme un événement `Transfer` enregistré en tant qu'entité `Transfer`. ### Sous le capot @@ -50,12 +50,12 @@ Bien que d'autres types d'ID soient possibles, tels que String et Int8, il est r ### Raisons de ne pas utiliser les Bytes comme IDs 1. Si les IDs d'entité doivent être lisibles par les humains, comme les IDs numériques auto-incrémentés ou les chaînes lisibles, les Bytes pour les IDs ne doivent pas être utilisés. -2. Si nous intégrons des données d'un subgraph avec un autre modèle de données qui n'utilise pas les Bytes comme IDs, les Bytes comme IDs ne doivent pas être utilisés. +2. Si vous intégrez les données d'un Subgraph dans un autre modèle de données qui n'utilise pas les Bytes en tant qu'ID, il ne faut pas utiliser les Bytes en tant qu'ID. 3. Les améliorations de performances d'indexation et de recherche ne sont pas souhaitées. ### Concatenation Avec Bytes comme IDs -Il est courant dans de nombreux subgraphs d'utiliser la concaténation de chaînes de caractères pour combiner deux propriétés d'un événement en un seul ID, comme utiliser `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`.. Cependant, comme cela retourne une chaîne de caractères, cela nuit considérablement à la performance d'indexation et de recherche des subgraphs. +Dans de nombreux subgraphs, il est courant d'utiliser la concaténation de chaînes pour combiner deux propriétés d'un événement en un seul identifiant, par exemple en utilisant `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Cependant, comme cette méthode renvoie une chaîne de caractères, elle entrave considérablement les performances d'indexation et d'interrogation du Subgraph. Au lieu de cela, nous devrions utiliser la méthode `concatI32()` pour concaténer les propriétés des événements. 
Cette stratégie donne un ID de type Bytes beaucoup plus performant.

@@ -172,20 +172,20 @@ Réponse de la requête:

## Conclusion

-L'utilisation à la fois d' Entités immuables et de Bytes en tant qu'IDs a montré une amélioration marquée de l'efficacité des subgraphs. Plus précisément, des tests ont mis en évidence une augmentation de 28% des performances des requêtes et une accélération de 48% des vitesses d'indexation.
+L'utilisation d'entités immuables et de Bytes comme IDs a permis d'améliorer sensiblement l'efficacité des Subgraphs. Plus précisément, les tests ont mis en évidence une augmentation de 28 % des performances des requêtes et une accélération de 48 % des vitesses d'indexation.

En savoir plus sur l'utilisation des Entités immuables et des Bytes en tant qu'IDs dans cet article de blog de David Lutterkort, un ingénieur logiciel chez Edge & Node : [Deux améliorations simples des performances des subgraphs](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).

-## Subgraph Best Practices 1-6
+## Bonnes pratiques pour les subgraphs 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/)

-3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)

-4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/)
+4.
[Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/pruning.mdx b/website/src/pages/fr/subgraphs/best-practices/pruning.mdx index 82db761dcdac..ea2ff4855676 100644 --- a/website/src/pages/fr/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Meilleure Pratique Subgraph 1 - Améliorer la Vitesse des Requêtes avec le Pruning de Subgraph -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Élagage avec indexerHints --- ## TLDR -[Le pruning](/developing/creating-a-subgraph/#prune) (élagage) retire les entités archivées de la base de données des subgraphs jusqu'à un bloc donné, et retirer les entités inutilisées de la base de données d'un subgraph améliorera souvent de manière spectaculaire les performances de requête d'un subgraph. L'utilisation de `indexerHints` est un moyen simple de réaliser le pruning d'un subgraph. +[L'élagage](/developing/creating-a-subgraph/#prune) supprime les entités archivées de la base de données du Subgraph jusqu'à un bloc donné, et la suppression des entités inutilisées de la base de données d'un Subgraph améliore les performances d'interrogation d'un Subgraph, souvent de façon spectaculaire. L'utilisation de `indexerHints` est un moyen facile d'élaguer un Subgraph. 
## Comment effectuer le Pruning d'un subgraph avec `indexerHints`

@@ -13,14 +13,14 @@ Ajoutez une section appelée `indexerHints` dans le manifest.

`indexerHints` dispose de trois options de `prune` :

-- `prune: auto`: Conserve l'historique minimum nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances des requêtes. C'est le paramètre généralement recommandé et celui par défaut pour tous les subgraphs créés par `graph-cli` >= 0.66.0.
+- `prune: auto` : Conserve le minimum d'historique nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances des requêtes. C'est le réglage généralement recommandé et c'est le réglage par défaut pour tous les Subgraphs créés par `graph-cli` >= 0.66.0.
- `prune: `: Définit une limite personnalisée sur le nombre de blocs historiques à conserver.
-- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
+- `prune: never` : Pas d'élagage des données historiques ; conserve l'historique complet et est la valeur par défaut s'il n'y a pas de section `indexerHints`. `prune: never` devrait être sélectionné si [les requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries) sont désirées.

-Nous pouvons ajouter `indexerHints` à nos subgraphs en mettant à jour notre `subgraph.yaml`:
+Nous pouvons ajouter des `indexerHints` à nos Subgraphs en mettant à jour notre `subgraph.yaml` :

```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
file: ./schema.graphql
indexerHints:
@@ -33,24 +33,24 @@ dataSources:

## Points Importants

-- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. 
Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data.
+- Si les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries) sont souhaitées en plus de l'élagage, l'élagage doit être effectué avec précision pour conserver la fonctionnalité des requêtes chronologiques. Pour cette raison, il n'est généralement pas recommandé d'utiliser `indexerHints: prune: auto` avec les requêtes chronologiques. Au lieu de cela, élaguez en utilisant `indexerHints: prune: ` pour élaguer précisément à une hauteur de bloc qui préserve les données historiques requises par les requêtes chronologiques, ou utilisez `prune: never` pour conserver toutes les données.

-- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months).
+- Il n'est pas possible de [greffer](/subgraphs/cookbook/grafting/) à une hauteur de bloc qui a été élaguée. Si le greffage est effectué de manière routinière et que l'élagage est souhaité, il est recommandé d'utiliser `indexerHints: prune: ` qui conservera avec précision un nombre défini de blocs (par exemple, suffisamment pour six mois).

## Conclusion

-L'élagage en utilisant `indexerHints` est une meilleure bonne pour le développement de subgraphs, offrant des améliorations significatives des performances des requêtes.
+L'élagage à l'aide de `indexerHints` est une meilleure pratique pour le développement de Subgraphs, offrant des améliorations significatives de la performance des requêtes.

-## Subgraph Best Practices 1-6
+## Bonnes pratiques pour les subgraphs 1-6

-1. 
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx b/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx index 39363a06651f..9be75d158d07 100644 --- a/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/fr/subgraphs/best-practices/timeseries.mdx @@ -1,49 +1,53 @@ ---
-title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+title: Meilleure pratique pour les subgraphs 5 - Simplifier et optimiser avec les séries chronologiques et les agrégations
+sidebarTitle: Séries chronologiques et agrégations
---

## TLDR

-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+L'utilisation de la nouvelle fonctionnalité de séries chronologiques et d'agrégations dans les Subgraphs peut améliorer de manière significative la vitesse d'indexation et la performance des requêtes.

## Aperçu

-Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data.
+Les séries chronologiques et les agrégations réduisent les coûts de traitement des données et accélèrent les requêtes en déchargeant les calculs d'agrégation dans la base de données et en simplifiant le code de mappage. Cette approche est particulièrement efficace lorsqu'il s'agit de traiter de grands volumes de données chronologiques.

-## Benefits of Timeseries and Aggregations
+## Avantages des séries chronologiques et des agrégations

-1. Improved Indexing Time
+1. 
Amélioration du temps d'indexation -- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. -- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. +- Moins de données à charger : Les mappages traitent moins de données puisque les points de données brutes sont stockés sous forme d'entités chronologiques immuables. +- Agrégations gérées par la base de données : Les agrégations sont automatiquement calculées par la base de données, ce qui réduit la charge de travail sur les mappages. -2. Simplified Mapping Code +2. Code de mappage simplifié -- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. -- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. +- Pas de calculs manuels : Les développeurs n'ont plus besoin d'écrire une logique d'agrégation complexe dans les mappages. +- Complexité réduite : Simplifie la maintenance du code et minimise les risques d'erreurs. -3. Dramatically Faster Queries +3. Des requêtes beaucoup plus rapides -- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. -- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. +- Données immuables : Toutes les données de séries chronologiques sont immuables, ce qui permet un stockage et une extraction efficaces. +- Séparation efficace des données : Les agrégats sont stockés séparément des données chronologiques brutes, ce qui permet aux requêtes de traiter beaucoup moins de données, souvent plusieurs ordres de grandeur en moins. ### Points Importants -- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. 
-- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors.
-- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster.
+- Données immuables : Les données chronologiques ne peuvent pas être modifiées une fois écrites, ce qui garantit l'intégrité des données et simplifie l'indexation.
+- Gestion automatique de l'identification et de l'horodatage : les champs `id` et `timestamp` sont automatiquement gérés par graph-node, ce qui réduit les erreurs potentielles.
+- Stockage efficace des données : En séparant les données brutes des agrégats, le stockage est optimisé et les requêtes s'exécutent plus rapidement.

-## How to Implement Timeseries and Aggregations

-### Defining Timeseries Entities
+### Prérequis

-A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
+Vous avez besoin de `spec version 1.1.0` pour cette fonctionnalité.

-- Immutable: Timeseries entities are always immutable.
-- Mandatory Fields:
  - `id`: Must be of type `Int8!` and is auto-incremented.
  - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp.
+### Définition des entités de séries chronologiques
+
+Une entité de séries chronologiques représente des points de données brutes collectés au fil du temps. Elle est définie par l'annotation `@entity(timeseries: true)`. Exigences principales :
+
+- Immuable : Les entités de séries chronologiques sont toujours immuables.
+- Champs obligatoires :
  + - `id` : Doit être de type `Int8!` et est auto-incrémenté.
  + - `timestamp` : Doit être de type `Timestamp!` et est automatiquement fixé à l'horodatage du bloc.

L'exemple:

@@ -51,16 +55,16 @@ L'exemple: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! 
- price: BigDecimal!
+ amount: BigDecimal!
}
```

-### Defining Aggregation Entities
+### Définition des entités d'agrégation

-An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components:
+Une entité d'agrégation calcule des valeurs agrégées à partir d'une source de séries chronologiques. Elle est définie par l'annotation `@aggregation`. Composants clés :

-- Annotation Arguments:
  - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`).
+- Arguments d'annotation :
  + - `intervals` : Spécifie les intervalles de temps (par exemple, `["hour", "day"]`).

L'exemple:

@@ -68,15 +72,15 @@ L'exemple: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp!
- sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```

-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+Dans cet exemple, Stats agrège le champ `amount` de Data sur des intervalles horaires et quotidiens, en calculant la somme.

-### Querying Aggregated Data
+### Interroger des données agrégées

-Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals.
+Les agrégations sont exposées via des champs de requête qui permettent le filtrage et la recherche sur la base de dimensions et d'intervalles de temps.
L'exemple:

-### Timeseries Entity

```graphql
type TokenData @entity(timeseries: true) {
@@ -116,7 +120,7 @@ type TokenData @entity(timeseries: true) {
}
```

-### Aggregation Entity with Dimension
+### Entité d'agrégation avec dimension

```graphql
type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
@@ -129,15 +133,15 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
}
```

-- Dimension Field: token groups the data, so aggregates are computed per token.
-- Aggregates:
  - totalVolume: Sum of amount.
  - priceUSD: Last recorded priceUSD.
  - count: Cumulative count of records.
+- Champ dimensionnel : le champ `token` regroupe les données, de sorte que les agrégats sont calculés par jeton.
+- Agrégats :
  + - totalVolume: Somme de `amount`.
  + - priceUSD: Dernier `priceUSD` enregistré.
  + - count: Nombre cumulé d'enregistrements.

-### Aggregation Functions and Expressions
+### Fonctions et expressions d'agrégation

-Supported aggregation functions:
+Fonctions d'agrégation prises en charge :

- sum
- count
@@ -146,50 +150,50 @@ Supported aggregation functions:
- first
- last

-### The arg in @aggregate can be
+### L'`arg` dans @aggregate peut être

-- A field name from the timeseries entity.
-- An expression using fields and constants.
+- Un nom de champ de l'entité de série chronologique.
+- Une expression utilisant des champs et des constantes.
-### Examples of Aggregation Expressions
+### Exemples d'expressions d'agrégation

-- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \_ amount")
-- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
-- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+- Somme de la valeur du jeton: @aggregate(fn: "sum", arg: "priceUSD \* amount")
+- Montant positif maximum: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Somme conditionnelle: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")

-Supported operators and functions include basic arithmetic (+, -, \_, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+Les opérateurs et fonctions pris en charge comprennent l'arithmétique de base (+, -, \*, /), les opérateurs de comparaison, les opérateurs logiques (and, or, not) et les fonctions SQL telles que greatest, least, coalesce, etc.

-### Query Parameters
+### Paramètres de requête

-- interval: Specifies the time interval (e.g., "hour").
-- where: Filters based on dimensions and timestamp ranges.
-- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+- interval: Spécifie l'intervalle de temps (par exemple, "hour").
+- where: Filtres basés sur les dimensions et les plages d'horodatage.
+- timestamp_gte / timestamp_lt: Filtre pour les heures de début et de fin (microsecondes depuis l'epoch).

### Notes

-- Sorting: Results are automatically sorted by timestamp and id in descending order.
-- Current Data: An optional current argument can include the current, partially filled interval.
+- Tri : Les résultats sont automatiquement triés par timestamp et par id, dans l'ordre décroissant.
+- Données actuelles : Un argument facultatif `current` peut inclure l'intervalle actuel, partiellement rempli.
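À titre d'illustration, voici une esquisse de requête combinant `interval`, `where` et les filtres `timestamp_gte` / `timestamp_lt` sur l'entité d'agrégation `TokenStats` définie plus haut (l'adresse du jeton et les horodatages sont des exemples hypothétiques) :

```graphql
{
  tokenStats(
    interval: "hour"
    where: {
      # Adresse de jeton hypothétique, à remplacer par la vôtre
      token: "0x1234..."
      # Bornes en microsecondes depuis l'epoch (hypothétiques)
      timestamp_gte: 1704067200000000
      timestamp_lt: 1704153600000000
    }
  ) {
    id
    timestamp
    token
    totalVolume
    priceUSD
    count
  }
}
```

Les résultats sont renvoyés triés par timestamp et id dans l'ordre décroissant, conformément au comportement de tri décrit dans les Notes.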
### Conclusion

-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+La mise en œuvre de séries chronologiques et d'agrégations dans des Subgraphs est une bonne pratique pour les projets traitant de données temporelles. Cette approche :

-- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
-- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
-- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+- Améliore les performances : Accélère l'indexation et l'interrogation en réduisant le coût du traitement des données.
+- Simplifie le développement : Élimine la nécessité d'une logique d'agrégation manuelle dans les mappages.
+- Évolue efficacement : Traite d'importants volumes de données sans compromettre la vitesse ou la réactivité.

-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+En adoptant ce modèle, les développeurs peuvent construire des Subgraphs plus efficaces et plus évolutifs, offrant un accès aux données plus rapide et plus fiable aux utilisateurs finaux. Pour en savoir plus sur l'implémentation des séries chronologiques et des agrégations, consultez le [Readme des Séries chronologiques et agrégations](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) et envisagez d'expérimenter cette fonctionnalité dans vos Subgraphs.

-## Subgraph Best Practices 1-6
+## Bonnes pratiques pour les subgraphs 1-6

-1. 
[Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [Améliorer la vitesse des requêtes avec l'élagage des Subgraphs](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Améliorer l'indexation et la réactivité des requêtes en utilisant @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Améliorer l'indexation et les performances des requêtes en utilisant des entités immuables et des Bytes comme IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Améliorer la vitesse d'indexation en évitant les `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [Simplifier et optimiser avec les séries chronologiques et les agrégations](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [Utiliser le greffage pour un déploiement rapide des correctifs](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/fr/subgraphs/billing.mdx b/website/src/pages/fr/subgraphs/billing.mdx index ba4239f2ea01..c718e8864b9d 100644 --- a/website/src/pages/fr/subgraphs/billing.mdx +++ b/website/src/pages/fr/subgraphs/billing.mdx @@ -2,20 +2,22 @@ title: Facturation --- -## Querying Plans +## Plans de requêtes -Il y a deux plans à utiliser lorsqu'on interroge les subgraphs sur le réseau de The Graph. +Il existe deux plans à utiliser pour interroger les subgraphs sur The Graph Network. 
-- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
+- **Plan Gratuit (Free Plan)** : Le plan gratuit comprend 100 000 requêtes mensuelles gratuites et un accès complet à l'environnement de test Subgraph Studio. Ce plan est conçu pour les amateurs, les hackathoniens et ceux qui ont des projets parallèles pour essayer The Graph avant de faire évoluer leur dapp.

-- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
+- **Plan Croissance (Growth Plan)** : Le plan de croissance comprend tout ce qui est inclus dans le plan gratuit avec toutes les requêtes après 100 000 requêtes mensuelles nécessitant des paiements avec des GRT ou par carte de crédit. Le plan de croissance est suffisamment flexible pour couvrir les équipes qui ont établi des dapps à travers une variété de cas d'utilisation.
+
+En savoir plus sur les tarifs [ici](https://thegraph.com/studio-pricing/).

## Paiements de Requêtes avec Carte de Crédit⁠

- Pour mettre en place la facturation par carte de crédit/débit, les utilisateurs doivent accéder à Subgraph Studio (https://thegraph.com/studio/)
  - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/).
  + 1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/).
  2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter".
  3. 
Choisissez « Mettre à niveau votre abonnement » si vous effectuez une mise à niveau depuis le plan gratuit, ou choisissez « Gérer l'abonnement » si vous avez déjà ajouté des GRT à votre solde de facturation par le passé. Ensuite, vous pouvez estimer le nombre de requêtes pour obtenir une estimation du prix, mais ce n'est pas une étape obligatoire. 4. Pour choisir un paiement par carte de crédit, choisissez “Credit card” comme mode de paiement et remplissez les informations de votre carte de crédit. Ceux qui ont déjà utilisé Stripe peuvent utiliser la fonctionnalité Link pour remplir automatiquement leurs informations. @@ -45,17 +47,17 @@ L'utilisation du GRT sur Arbitrum est nécessaire pour le paiement des requêtes - Alternativement, vous pouvez acquérir du GRT directement sur Arbitrum via un échange décentralisé. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> Cette section est écrite en supposant que vous avez déjà des GRT dans votre portefeuille, et que vous êtes sur Arbitrum. Si vous n'avez pas de GRT, vous pouvez apprendre à en obtenir [ici](#getting-grt). Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde de facturation. ### Ajout de GRT à l'aide d'un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Cliquez sur le bouton "Connecter le portefeuille" dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection des portefeuilles. Sélectionnez votre portefeuille et cliquez sur "Connecter". 3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. 
Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille). 4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Frequently Asked Questions** (Questions fréquemment posées). 5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph. 6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. @@ -68,7 +70,7 @@ Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde ### Retirer des GRT en utilisant un portefeuille -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Cliquez sur le bouton "Connect Wallet" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connect". 3. Cliquez sur le bouton « Gérer » dans le coin supérieur droit de la page. Sélectionnez « Retirer des GRT ». Un panneau latéral apparaîtra. 4. Entrez le montant de GRT que vous voudriez retirer. @@ -77,11 +79,11 @@ Une fois que vous avez transféré du GRT, vous pouvez l'ajouter à votre solde ### Ajout de GRT à l'aide d'un portefeuille multisig -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. 
Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas.
+1. Accédez à la [page de Facturation de Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/).
+2. Cliquez sur le bouton "Connecter votre Portefeuille" dans le coin supérieur droit de la page. Sélectionnez votre portefeuille et cliquez sur "Connecter". Si vous utilisez [Gnosis-Safe](https://gnosis-safe.io/), vous pourrez connecter votre portefeuille multisig ainsi que votre portefeuille de signature. Ensuite, signez le message associé. Cela ne coûtera pas de gaz.
3. Cliquez sur le bouton « Manage » situé dans le coin supérieur droit. Les nouveaux utilisateurs verront l'option « Upgrade to Growth plan » (Passer au plan de croissance), tandis que les utilisateurs existants devront sélectionner « Deposit from wallet » (Déposer depuis le portefeuille).
4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle.
   - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page.
   + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Frequently Asked Questions** (Questions fréquemment posées).
5. Choisissez "Cryptocurrency". Le GRT est actuellement la seule cryptomonnaie acceptée sur le réseau The Graph.
6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé.
   - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. @@ -99,7 +101,7 @@ Cette section vous montrera comment obtenir du GRT pour payer les frais de requ

Voici un guide étape par étape pour acheter de GRT sur Coinbase.

-1. Go to [Coinbase](https://www.coinbase.com/) and create an account. 
+1. Allez sur [Coinbase](https://www.coinbase.com/) et créez un compte. 2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter/Vendre » en haut à droite de la page. 4. Sélectionnez la devise que vous souhaitez acheter. Sélectionnez GRT. @@ -107,19 +109,19 @@ Voici un guide étape par étape pour acheter de GRT sur Coinbase. 6. Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Vérifiez votre achat. Vérifiez votre achat et cliquez sur "Buy GRT". 8. Confirmez votre achat. Confirmez votre achat et vous aurez acheté des GRT avec succès. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez transférer les GRT de votre compte à votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour transférer les GRT dans votre portefeuille, cliquez sur le bouton "Accounts" en haut à droite de la page. - Cliquez sur le bouton "Send" à côté du compte GRT. - Entrez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille vers laquelle vous souhaitez l'envoyer. - Cliquez sur "Continue" et confirmez votre transaction. -Veuillez noter que pour des montants d'achat plus importants, Coinbase peut vous demander d'attendre 7 à 10 jours avant de transférer le montant total vers un portefeuille. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). 
+Vous pouvez en savoir plus sur l'acquisition de GRT sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Ceci est un guide étape par étape pour l'achat des GRT sur Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Allez sur [Binance](https://www.binance.com/en) et créez un compte. 2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter maintenant » sur la bannière de la page d'accueil. 4. Vous accéderez à une page où vous pourrez sélectionner la devise que vous souhaitez acheter. Sélectionnez GRT. @@ -127,27 +129,27 @@ Ceci est un guide étape par étape pour l'achat des GRT sur Binance. 6. Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Confirmez votre achat et cliquez sur « Acheter des GRT ». 8. Confirmez votre achat et vous pourrez voir vos GRT dans votre portefeuille Binance Spot. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Vous pouvez retirer les GRT de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). + - [Pour retirer](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) les GRT dans votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche des retraits. 
- Cliquez sur le bouton « portefeuille », cliquez sur retrait et sélectionnez GRT. - Saisissez le montant de GRT que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. - Cliquer sur « Continuer » et confirmez votre transaction. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Vous pouvez en savoir plus sur l'achat de GRT sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap Voici comment vous pouvez acheter des GRT sur Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Allez sur [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) et connectez votre portefeuille. 2. Sélectionnez le jeton dont vous souhaitez échanger. Sélectionnez ETH. 3. Sélectionnez le jeton vers lequel vous souhaitez échanger. Sélectionnez GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Assurez-vous que vous échangez contre le bon jeton. L'adresse du contrat intelligent GRT sur Arbitrum One est la suivante : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Entrez le montant d'ETH que vous souhaitez échanger. 5. Cliquez sur « Échanger ». 6. Confirmez la transaction dans votre portefeuille et attendez qu'elle soit traitée. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Vous pouvez en savoir plus sur l'obtention de GRT sur Uniswap [ici](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). 
## Obtenir de l'Ether⁠ @@ -157,7 +159,7 @@ Cette section vous montrera comment obtenir de l'Ether (ETH) pour payer les frai Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Allez sur [Coinbase](https://www.coinbase.com/) et créez un compte. 2. Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). l s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois que vous avez vérifié votre identité, achetez de l'ETH en cliquant sur le bouton « Acheter/Vendre » en haut à droite de la page. 4. Choisissez la devise que vous souhaitez acheter. Sélectionnez ETH. @@ -165,20 +167,20 @@ Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. 6. Entrez le montant d'ETH que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur « Acheter des Ethereum ». 8. Confirmez votre achat et vous aurez acheté avec succès de l'ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez transférer les ETH de votre compte Coinbase vers votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour transférer l'ETH vers votre portefeuille, cliquez sur le bouton « Comptes » en haut à droite de la page. - Cliquez sur le bouton « Envoyer » à côté du compte ETH. - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille vers lequel vous souhaitez l'envoyer. - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquer sur « Continuer » et confirmez votre transaction. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). 
+Vous pouvez en savoir plus sur l'obtention d'ETH sur Coinbase [ici](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Ce sera un guide étape par étape pour acheter des ETH sur Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Allez sur [Binance](https://www.binance.com/en) et créez un compte. 2. Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). l s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Une fois que vous avez vérifié votre identité, achetez des ETH en cliquant sur le bouton « Acheter maintenant » sur la bannière de la page d'accueil. 4. Choisissez la devise que vous souhaitez acheter. Sélectionnez ETH. @@ -186,14 +188,14 @@ Ce sera un guide étape par étape pour acheter des ETH sur Binance. 6. Entrez le montant d'ETH que vous souhaitez acheter. 7. Vérifiez votre achat et cliquez sur « Acheter des Ethereum ». 8. Confirmez votre achat et vous verrez votre ETH dans votre portefeuille Binance Spot. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Vous pouvez retirer les ETH de votre compte vers votre portefeuille tel que [MetaMask](https://metamask.io/). - Pour retirer l'ETH vers votre portefeuille, ajoutez l'adresse de votre portefeuille à la liste blanche de retrait. - Cliquez sur le bouton « portefeuille », cliquez sur retirer et sélectionnez ETH. - Entrez le montant d'ETH que vous souhaitez envoyer et l'adresse du portefeuille sur liste blanche à laquelle vous souhaitez l'envoyer. - Assurez-vous que vous envoyez à votre adresse de portefeuille Ethereum sur Arbitrum One. - Cliquer sur « Continuer » et confirmez votre transaction. 
-You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Vous pouvez en savoir plus sur l'achat d'ETH sur Binance [ici](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ## FAQ sur la facturation @@ -203,11 +205,11 @@ Vous n'avez pas besoin de savoir à l'avance combien de requêtes vous aurez bes Nous vous recommandons de surestimer le nombre de requêtes dont vous aurez besoin afin de ne pas avoir à recharger votre solde fréquemment. Pour les applications de petite et moyenne taille, une bonne estimation consiste à commencer par 1 à 2 millions de requêtes par mois et à surveiller de près l'utilisation au cours des premières semaines. Pour les applications plus grandes, une bonne estimation consiste à utiliser le nombre de visites quotidiennes que reçoit votre site multiplié par le nombre de requêtes que votre page la plus active effectue à son ouverture. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. +Bien entendu, les nouveaux utilisateurs et les utilisateurs existants peuvent contacter l'équipe BD d'Edge & Node pour une consultation afin d'en savoir plus sur l'utilisation prévue. ### Puis-je retirer du GRT de mon solde de facturation ? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Oui, vous pouvez toujours retirer de votre solde de facturation les GRT qui n'ont pas encore été utilisés pour des requêtes. 
Le contrat de facturation est uniquement conçu pour transférer (bridge) les GRT du réseau principal Ethereum vers le réseau Arbitrum. Si vous souhaitez transférer vos GRT d'Arbitrum vers le réseau principal Ethereum, vous devrez utiliser le [Bridge Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161). ### Que se passe-t-il lorsque mon solde de facturation est épuisé ? Vais-je recevoir un avertissement ? diff --git a/website/src/pages/fr/subgraphs/cookbook/arweave.mdx b/website/src/pages/fr/subgraphs/cookbook/arweave.mdx index 2b11f5ea02a1..46db0f527d35 100644 --- a/website/src/pages/fr/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Construction de subgraphs pour Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! Dans ce guide, vous apprendrez comment créer et déployer des subgraphs pour indexer la blockchain Arweave. @@ -13,99 +13,99 @@ Arweave est un protocole qui permet aux développeurs de stocker des données de Arweave a déjà construit de nombreuses bibliothèques pour intégrer le protocole dans plusieurs langages de programmation différents. Pour plus d'informations, vous pouvez consulter : - [Arwiki](https://arwiki.wiki/#/en/main) -- [Arweave Resources](https://www.arweave.org/build) +- [Ressources Arweave](https://www.arweave.org/build) ## À quoi servent les subgraphs d'Arweave ? -The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). 
+The Graph vous permet de créer des API ouvertes personnalisées appelées "Subgraphs". Les subgraphs sont utilisés pour indiquer aux Indexeurs (opérateurs de serveur) quelles données indexer sur une blockchain et enregistrer sur leurs serveurs afin que vous puissiez les interroger à tout moment à l'aide de [GraphQL](https://graphql.org/). -[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. +[Graph Node](https://github.com/graphprotocol/graph-node) est désormais capable d'indexer les données sur le protocole Arweave. L'intégration actuelle indexe uniquement Arweave en tant que blockchain (blocs et transactions), elle n'indexe pas encore les fichiers stockés. ## Construire un subgraph Arweave Pour pouvoir créer et déployer des Arweave Subgraphs, vous avez besoin de deux packages : -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Caractéristique des subgraphs -Il y a trois composants d'un subgraph : +There are three components of a Subgraph: -### 1. Manifest - `subgraph.yaml` +### 1. 
Le Manifest - `subgraph.yaml` Définit les sources de données intéressantes et la manière dont elles doivent être traitées. Arweave est un nouveau type de source de données. -### 2. Schema - `schema.graphql` +### 2. Schéma - `schema.graphql` Vous définissez ici les données que vous souhaitez pouvoir interroger après avoir indexé votre subgraph à l'aide de GraphQL. Ceci est en fait similaire à un modèle pour une API, où le modèle définit la structure d'un corps de requête. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). -### 3. AssemblyScript Mappings - `mapping.ts` +### 3. Mappages en AssemblyScript - `mapping.ts` Il s'agit de la logique qui détermine comment les données doivent être récupérées et stockées lorsqu'une personne interagit avec les sources de données que vous interrogez. Les données sont traduites et stockées sur la base du schema que vous avez répertorié. -Lors du développement du subgraph, il y a deux commandes clés : +During Subgraph development there are two key commands: ``` -$ graph codegen # génère des types à partir du fichier de schéma identifié dans le manifeste -$ graph build # génère le Web Assembly à partir des fichiers AssemblyScript, et prépare tous les fichiers de subgraphes dans un dossier /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Définition du manifeste du subgraph -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: - file: ./schema.graphql # lien vers le fichier de schéma + file: ./schema.graphql # link to the schema file dataSources: - kind: arweave name: arweave-blocks - network: arweave-mainnet # The Graph ne supporte que le Arweave Mainnet + network: arweave-mainnet # The Graph only supports Arweave Mainnet source: - owner: 'ID-OF-AN-OWNER' # La clé publique d'un porte-monnaie Arweave - startBlock: 0 # mettez cette valeur à 0 pour commencer l'indexation à partir de la genèse de la chaîne. + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/blocks.ts # lien vers le fichier contenant les mappages d'Assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: - Block - Transaction blockHandlers: - - handler: handleBlock # le nom de la fonction dans le fichier de mapping + - handler: handleBlock # the function name in the mapping file transactionHandlers: - - handler: handleTx # le nom de la fonction dans le fichier de mapping + - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) -- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- Le réseau doit correspondre à un réseau sur le Graph Node hôte. 
Dans Subgraph Studio, le réseau principal d'Arweave est `arweave-mainnet` - Les sources de données Arweave introduisent un champ source.owner facultatif, qui est la clé publique d'un portefeuille Arweave Les sources de données Arweave prennent en charge deux types de gestionnaires : -- `blockHandlers` - Run on every new Arweave block. No source.owner is required. -- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` +- `blockHandlers` - Exécuté sur chaque nouveau bloc Arweave. Aucun source.owner n'est requis. +- `transactionHandlers` - Exécute chaque transaction dont le propriétaire est `source.owner` de la source de données. Actuellement, un propriétaire est requis pour `transactionHandlers`, si les utilisateurs veulent traiter toutes les transactions, ils doivent fournir "" comme `source.owner` > Source.owner peut être l’adresse du propriétaire ou sa clé publique. > > Les transactions sont les éléments constitutifs du permaweb Arweave et ce sont des objets créés par les utilisateurs finaux. > -> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. +> Note : Les transactions [Irys (anciennement Bundlr)](https://irys.xyz/) ne sont pas encore prises en charge. ## Définition de schéma -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
## Cartographies AssemblyScript -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). +L'indexation Arweave introduit des types de données spécifiques à Arweave dans l'[API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/). ```tsx class Block { @@ -146,39 +146,39 @@ class Transaction { } ``` -Block handlers receive a `Block`, while transactions receive a `Transaction`. +Les gestionnaires de blocs reçoivent un `Block`, tandis que les transactions reçoivent un `Transaction`. -Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). +L'écriture des mappages d'un subgraph Arweave est très similaire à l'écriture des mappages d'un subgraph Ethereum. Pour plus d'informations, cliquez [ici](/developing/creating-a-subgraph/#writing-mappings). ## Déploiement d'un subgraph Arweave dans Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash -graph deploy --access-token +graph deploy --access-token ``` ## Interroger un subgraph d'Arweave -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Exemples de subgraphs -Voici un exemple de modèle subgraph : +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Un subgraph peut-il indexer Arweave et d'autres chaînes ? +### Can a Subgraph index Arweave and other chains? -Non, un subgraph ne peut supporter que les sources de données d'une seule chaîne/réseau. +No, a Subgraph can only support data sources from one chain/network. ### Puis-je indexer les fichiers enregistrés sur Arweave ? Actuellement, The Graph n'indexe Arweave qu'en tant que blockchain (ses blocs et ses transactions). -### Puis-je identifier les bundles de Bundlr dans mon subgraph ? +### Can I identify Bundlr bundles in my Subgraph? Cette fonction n'est pas prise en charge actuellement. @@ -188,9 +188,9 @@ La source.owner peut être la clé publique de l'utilisateur ou l'adresse de son ### Quel est le format de chiffrement actuel ? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
-The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: +La fonction d'assistant `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` suivante peut être utilisée, et sera ajoutée à `graph-ts` : ``` const base64Alphabet = [ @@ -219,14 +219,14 @@ function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; result += alphabet[bytes[i] & 0x3F]; } - if (i === l + 1) { // 1 octet yet to write + if (i === l + 1) { // 1 octet à écrire result += alphabet[bytes[i - 2] >> 2]; result += alphabet[(bytes[i - 2] & 0x03) << 4]; if (!urlSafe) { result += "=="; } } - if (!urlSafe && i === l) { // 2 octets yet to write + if (!urlSafe && i === l) { // 2 octets à écrire result += alphabet[bytes[i - 2] >> 2]; result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; result += alphabet[(bytes[i - 1] & 0x0F) << 2]; diff --git a/website/src/pages/fr/subgraphs/cookbook/enums.mdx b/website/src/pages/fr/subgraphs/cookbook/enums.mdx index 5784cb991330..a0a6b93d75b9 100644 --- a/website/src/pages/fr/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Les Enums, ou types d'énumération, sont un type de données spécifique qui vo ### Exemple d'Enums dans Votre Schéma -Si vous construisez un subgraph pour suivre l'historique de propriété des tokens sur une marketplace, chaque token peut passer par différentes propriétés, telles que`OriginalOwner`, `SecondOwner`, et `ThirdOwner`. En utilisant des enums, vous pouvez définir ces propriétés spécifiques, garantissant que seules des valeurs prédéfinies sont utilisées. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. 
By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. Vous pouvez définir des enums dans votre schéma et, une fois définis, vous pouvez utiliser la représentation en chaîne de caractères des valeurs enum pour définir un champ enum sur une entité. @@ -65,7 +65,7 @@ Les Enums assurent la sécurité des types, minimisent les risques de fautes de > Note: Le guide suivant utilise le smart contract CryptoCoven NFT. -Pour définir des enums pour les différents marketplaces où les NFTs sont échangés, utilisez ce qui suit dans votre schéma de subgraph : +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum pour les Marketplaces avec lesquelles le contrat CryptoCoven a interagi (probablement une vente ou un mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Utilisation des Enums pour les Marketplaces NFT -Une fois définis, les enums peuvent être utilisés tout au long de votre subgraph pour catégoriser les transactions ou les événements. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. Par exemple, lors de la journalisation des ventes de NFT, vous pouvez spécifier la marketplace impliqué dans la transaction en utilisant l'enum. diff --git a/website/src/pages/fr/subgraphs/cookbook/grafting.mdx b/website/src/pages/fr/subgraphs/cookbook/grafting.mdx index a81cf0ddf30a..75693f2ffd53 100644 --- a/website/src/pages/fr/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Remplacer un contrat et conserver son historique grâce au « greffage » --- -Dans ce guide, vous apprendrez à construire et à déployer de nouveaux subgraphs en utilisant le greffage sur des subgraphs existants. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## Qu'est-ce qu'une greffe ? 
-C'est une méthode qui réutilise les données d'un subgraph existant et commence à les indexer à un bloc ultérieur. Elle est utile lors du développement pour contourner rapidement les erreurs simples dans les mappings ou pour remettre temporairement en service un subgraph existant qui a échoué. Elle peut également être utilisée pour ajouter une fonctionnalité à un subgraphe dont l'indexation depuis la genèse prend un temps considérable. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à celui du subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schema de subgraph valide en tant que tel, mais il peut s'écarter du schema du subgraph de base de la manière suivante : +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Il ajoute ou supprime des types d'entité - Il supprime les attributs des types d'entité @@ -20,40 +20,40 @@ Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à c Pour plus d’informations, vous pouvez vérifier : -- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) +- [Greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -Dans ce tutoriel, nous couvrirons un cas d'utilisation de base. Nous remplacerons un contrat existant par un contrat identique (avec une nouvelle adresse, mais le même code). 
Ensuite, nous grefferons le subgraph existant sur le subgraph "de base" qui suit le nouveau contrat. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Remarque importante sur le greffage lors de la mise à niveau vers le réseau -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Pourquoi est-ce important? -Le greffage est une fonctionnalité puissante qui vous permet de "greffer" un subgraph sur un autre, transférant efficacement les données historiques du subgraph existant vers une nouvelle version. Il n'est pas possible de greffer un subgraph de The Graph Network vers Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Les meilleures pratiques -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. 
En respectant ces lignes directrices, vous minimisez les risques et vous vous assurez que le processus de migration se déroule sans heurts. ## Création d'un subgraph existant -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: -- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) +- [Dépôt d'exemples de subgraphs](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Définition du manifeste du subgraph -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -79,33 +79,33 @@ dataSources: file: ./src/lock.ts ``` -- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` -- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. +- La source de données `Lock` correspond à l'ABI et à l'adresse du contrat que nous obtiendrons lorsque nous compilerons et déploierons le contrat +- Le réseau doit correspondre à un réseau indexé qui est interrogé. Comme nous fonctionnons sur le réseau de test Sepolia, le réseau est `sepolia` +- La section `mapping` définit les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Dans ce cas, nous écoutons l'événement `Withdrawal` et appelons la fonction `handleWithdrawal` lorsqu'il est émis. ## Définition de manifeste de greffage -Le greffage nécessite l'ajout de deux nouveaux éléments au manifeste du subgraph original : +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - - grafting # nom de la fonctionnalité + - grafting # feature name graft: - base: Qm... # ID du subgraph de base - block: 5956000 # numéro du bloc + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). 
-- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `features:` est une liste de tous les [noms de fonctionnalités](/developing/creating-a-subgraph/#experimental-features) utilisées. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Déploiement du subgraph de base -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Une fois terminé, vérifiez que le subgraph s'indexe correctement. Si vous exécutez la commande suivante dans The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ Cela renvoie quelque chose comme ceci : } ``` -Une fois que vous avez vérifié que le subgraph s'indexe correctement, vous pouvez rapidement le mettre à jour grâce à la méthode du graffage. 
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.

## Déploiement du subgraph greffé

Le subgraph.yaml de remplacement du greffon aura une nouvelle adresse de contrat. Cela peut arriver lorsque vous mettez à jour votre dapp, redéployez un contrat, etc.

-1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement`
-2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio.
-3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
-4. Une fois terminé, vérifiez que le subgraph s'indexe correctement. Si vous exécutez la commande suivante dans The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. 
Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground

```graphql
{
@@ -185,9 +185,9 @@ Le résultat devrait être le suivant :
}
```

-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.

-Félicitations ! Vous avez réussi à greffer un subgraph sur un autre subgraph.
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
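The copy-then-continue behavior of grafting described above can be sketched in a few lines of JavaScript. This is a conceptual illustration only, not part of any Graph tooling, and the block numbers are invented around the graft block `5956000` used in this guide:

```javascript
// Conceptual sketch of grafting: entities from the base Subgraph up to and
// including the graft block are copied, then indexing continues from the
// new deployment. Block numbers here are invented for illustration.
function graft(baseEvents, newEvents, graftBlock) {
  const copied = baseEvents.filter((e) => e.block <= graftBlock);
  const continued = newEvents.filter((e) => e.block > graftBlock);
  return [...copied, ...continued].sort((a, b) => a.block - b.block);
}

// Two `Withdrawal` events from the old contract, one from the new contract
// after the graft block, mirroring the three events in this guide.
const base = [
  { id: 'event-1', block: 5955700 },
  { id: 'event-2', block: 5955900 },
];
const replacement = [{ id: 'event-3', block: 5956100 }];

console.log(graft(base, replacement, 5956000).map((e) => e.id));
// [ 'event-1', 'event-2', 'event-3' ]
```

Running it shows the base events and the new event merged in block order, which is exactly how the `graft-replacement` Subgraph ends up containing all three events.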
## Ressources supplémentaires @@ -197,6 +197,6 @@ Si vous souhaitez acquérir plus d'expérience avec le greffage, voici quelques - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +Pour devenir encore plus expert sur The Graph, vous pouvez vous familiariser avec d'autres méthodes de gestion des modifications apportées aux sources de données sous-jacentes. Des alternatives comme des [Modèles de sources de données](/developing/creating-a-subgraph/#data-source-templates) permettent d'obtenir des résultats similaires -> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) +> Note : De nombreux éléments de cet article ont été repris de l'article [Arweave](/subgraphs/cookbook/arweave/) publié précédemment diff --git a/website/src/pages/fr/subgraphs/cookbook/near.mdx b/website/src/pages/fr/subgraphs/cookbook/near.mdx index 0e6830668726..f535b76a57b1 100644 --- a/website/src/pages/fr/subgraphs/cookbook/near.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Construction de subgraphs sur NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## Que signifie NEAR ? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. 
Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## Que sont les subgraphs NEAR ? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Gestionnaires de blocs : ceux-ci sont exécutés à chaque nouveau bloc - Gestionnaires de reçus : exécutés à chaque fois qu'un message est exécuté sur un compte spécifié @@ -23,66 +23,66 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Construction d'un subgraph NEAR -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. 
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> La construction d'un subgraph NEAR est très similaire à la construction d'un subgraph qui indexe Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -La définition d'un subgraph comporte trois aspects : +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
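To show what that JSON parsing functionality is for, here is a plain-JavaScript sketch of the kind of log parsing a NEAR mapping performs. In a real mapping you would use `json.fromString(...)` from `graph-ts`; the log shape and field names below are invented for illustration:

```javascript
// Plain-JavaScript sketch of parsing a stringified-JSON log, the kind of
// work `json.fromString(...)` does inside a NEAR mapping. The log shape
// and field names are invented for illustration.
function parseEventLog(log) {
  const parsed = JSON.parse(log);
  return { event: parsed.event, amount: parsed.data.amount };
}

const log = '{"event":"withdrawal","data":{"amount":"250"}}';
console.log(parseEventLog(log));
// { event: 'withdrawal', amount: '250' }
```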
-Lors du développement du subgraph, il y a deux commandes clés : +During Subgraph development there are two key commands: ```bash -$ graph codegen # génère des types à partir du fichier de schéma identifié dans le manifeste -$ graph build # génère le Web Assembly à partir des fichiers AssemblyScript, et prépare tous les fichiers de subgraphes dans un dossier /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Définition du manifeste du subgraph -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: - file: ./src/schema.graphql # lien vers le fichier de schéma + file: ./src/schema.graphql # link to the schema file dataSources: - kind: near network: near-mainnet source: - account: app.good-morning.near # Cette source de données surveillera ce compte - startBlock: 10662188 # Requis pour NEAR + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - - handler: handleNewBlock # le nom de la fonction dans le fichier de mapping + - handler: handleNewBlock # the function name in the mapping file receiptHandlers: - - handler: handleReceipt # le nom de la fonction dans le fichier de mappage - file: ./src/mapping.ts # lien vers le fichier contenant les mappings Assemblyscript + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. 
If only a list of prefixes or suffixes is necessary the other field can be omitted. ```yaml comptes: - préfixes: - - application - - bien - suffixes: - - matin.près - - matin.testnet + préfixes : + - application + - bien + suffixes : + - matin.près + - matin.testnet ``` Les fichiers de données NEAR prennent en charge deux types de gestionnaires : @@ -92,11 +92,11 @@ Les fichiers de données NEAR prennent en charge deux types de gestionnaires : ### Définition de schéma -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### Cartographies AssemblyScript -The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). +Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. 
A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Déploiement d'un subgraph NEAR -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio et l'Indexeur de mise à niveau sur The Graph Network prennent en charge actuellement l'indexation du mainnet et du testnet NEAR en bêta, avec les noms de réseau suivants : - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". 
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -La configuration du nœud dépend de l'endroit où le subgraph est déployé. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Une fois que votre subgraph a été déployé, il sera indexé par le nœud The Graph. Vous pouvez vérifier sa progression en interrogeant le subgraph lui-même : +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ Nous fournirons bientôt plus d'informations sur l'utilisation des composants ci ## Interrogation d'un subgraph NEAR -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Exemples de subgraphs -Voici quelques exemples de subgraphs pour référence : +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Voici quelques exemples de subgraphs pour référence : ### Comment fonctionne la bêta ? -Le support de NEAR est en version bêta, ce qui signifie qu'il peut y avoir des changements dans l'API alors que nous continuons à travailler sur l'amélioration de l'intégration. Veuillez envoyer un e-mail à near@thegraph.com pour que nous puissions vous aider à construire des subgraphs NEAR et vous tenir au courant des derniers développements ! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Un subgraph peut-il indexer à la fois les chaînes NEAR et EVM ? +### Can a Subgraph index both NEAR and EVM chains? -Non, un subgraph ne peut supporter que les sources de données d'une seule chaîne/réseau. +No, a Subgraph can only support data sources from one chain/network. -### Les subgraphs peuvent-ils réagir à des déclencheurs plus spécifiques ? +### Can Subgraphs react to more specific triggers? Actuellement, seuls les déclencheurs de blocage et de réception sont pris en charge. Nous étudions les déclencheurs pour les appels de fonction à un compte spécifique. Nous souhaitons également prendre en charge les déclencheurs d'événements, une fois que NEAR disposera d'un support natif pour les événements. @@ -258,25 +258,25 @@ If an `account` is specified, that will only match the exact account name. 
It is possible to match sub-accounts by specifying a `suffixes` field:

```yaml
comptes:
- suffixes:
- - mintbase1.near
+ suffixes :
+ - mintbase1.near
```

-### Les subgraphs NEAR peuvent-ils faire des appels de view aux comptes NEAR pendant les mappings?
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?

Cette fonction n'est pas prise en charge. Nous sommes en train d'évaluer si cette fonctionnalité est nécessaire pour l'indexation.

-### Puis-je utiliser des modèles de sources de données dans mon subgraph NEAR ?
+### Can I use data source templates in my NEAR Subgraph?

Ceci n’est actuellement pas pris en charge. Nous évaluons si cette fonctionnalité est requise pour l'indexation.

-### Les subgraphs Ethereum supportent les versions "pending" et "current", comment puis-je déployer une version "pending" d'un subgraph NEAR ?
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?

-La fonctionnalité "pending" n'est pas encore prise en charge pour les subgraphs NEAR. Dans l'intervalle, vous pouvez déployer une nouvelle version dans un autre subgraph "named", puis, lorsque celui-ci est synchronisé avec la tête de chaîne, vous pouvez redéployer dans votre subgraph principal "named", qui utilisera le même ID de déploiement sous-jacent, de sorte que le subgraph principal sera instantanément synchronisé.
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.

-### Ma question n'a pas reçu de réponse, où puis-je obtenir plus d'aide concernant la création de subgraphs NEAR ?
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
-If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## Les Références diff --git a/website/src/pages/fr/subgraphs/cookbook/polymarket.mdx b/website/src/pages/fr/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..ee26849decd7 100644 --- a/website/src/pages/fr/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/polymarket.mdx @@ -1,23 +1,23 @@ --- -title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +title: Interroger les données de la blockchain à partir de Polymarket avec des subgraphs sur The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. -## Polymarket Subgraph on Graph Explorer +## Subgraph Polymarket sur Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. 
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. -![Polymarket Playground](/img/Polymarket-playground.png) +![Terrain de jeux Polymarket](/img/Polymarket-playground.png) -## How to use the Visual Query Editor +## Comment utiliser l'éditeur visuel de requêtes -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. -You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. +Vous pouvez utiliser l'explorateur GraphiQL pour composer vos requêtes GraphQL en cliquant sur les champs souhaités. -### Example Query: Get the top 5 highest payouts from Polymarket +### Exemple de requête : Obtenir les 5 paiements les plus élevés de Polymarket ``` { @@ -30,7 +30,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -### Example output +### Exemple de sortie ``` { @@ -71,41 +71,41 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on } ``` -## Polymarket's GraphQL Schema +## Schéma GraphQL de Polymarket -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). -### Polymarket Subgraph Endpoint +### Endpoint du Subgraph Polymarket https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp -The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). 
+L'endpoint du subgraph Polymarket est disponible sur [Graph Explorer](https://thegraph.com/explorer).

-![Polymarket Endpoint](/img/Polymarket-endpoint.png)
+![Endpoint Polymarket](/img/Polymarket-endpoint.png)

-## How to Get your own API Key
+## Comment obtenir votre propre clé API

-1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
-2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+1. Allez sur [https://thegraph.com/studio](http://thegraph.com/studio) et connectez votre portefeuille
+2. Rendez-vous sur https://thegraph.com/studio/apikeys/ pour créer une clé API

-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.

-100k queries per month are free which is perfect for your side project!
-## Additional Polymarket Subgraphs +## Subgraphs Additionels Polymarket - [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one) -- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) -- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) -- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) +- [Activité Polymarket de Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) +- [Profit & Pertes Polymarket ](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) +- [Intérêt Ouverts Polymarket](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) -## How to Query with the API +## Comment interroger l'API -You can pass any GraphQL query to the Polymarket endpoint and receive data in json format. +Vous pouvez passer n'importe quelle requête GraphQL àl'endpoint Polymarket et recevoir des données au format json. -This following code example will return the exact same output as above. +L'exemple de code suivant renvoie exactement le même résultat que ci-dessus. 
-### Sample Code from node.js +### Exemple de code de node.js ``` const axios = require('axios'); @@ -127,22 +127,22 @@ const graphQLRequest = { }, }; -// Send the GraphQL query +// Envoi de la requête GraphQL axios(graphQLRequest) .then((response) => { - // Handle the response here + //Traitez la réponse ici const data = response.data.data console.log(data) }) .catch((error) => { - // Handle any errors + // Traiter les erreurs éventuelles console.error(error); }); ``` -### Additional resources +### Ressources complémentaires -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/fr/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/fr/subgraphs/cookbook/secure-api-keys-nextjs.mdx index cd3b3b46b7f9..45cb2b4c38a4 100644 --- a/website/src/pages/fr/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: Comment sécuriser les clés d'API en utilisant les composants serveur de ## Aperçu -Nous pouvons utiliser [les composants serveur de Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components) pour sécuriser correctement notre clé API contre l'exposition dans le frontend de notre dapp. Pour augmenter encore la sécurité de notre clé API, nous pouvons également [restreindre notre clé API à certains subgraphs ou domaines dans Subgraph Studio.](/cookbook/upgrading-a-subgraph/#securing-your-api-key). 
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -Dans ce guide pratique, nous allons passer en revue la création d'un composant de serveur Next.js qui interroge un subgraph tout en masquant la clé API du frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Mise en garde @@ -18,11 +18,11 @@ Dans ce guide pratique, nous allons passer en revue la création d'un composant Dans une application React standard, les clés API incluses dans le code frontend peuvent être exposées du côté client, posant un risque de sécurité. Bien que les fichiers `.env` soient couramment utilisés, ils ne protègent pas complètement les clés car le code de React est exécuté côté client, exposant ainsi la clé API dans les headers. Les composants serveur Next.js résolvent ce problème en gérant les opérations sensibles côté serveur. -### Utilisation du rendu côté client pour interroger un subgraph +### Using client-side rendering to query a Subgraph ![rendu côté client](/img/api-key-client-side-rendering.png) -### Prerequisites +### Prérequis - Une clé API provenant de [Subgraph Studio](https://thegraph.com/studio) - Une connaissance de base de Next.js et React. @@ -120,4 +120,4 @@ Démarrez notre application Next.js en utilisant `npm run dev`. Vérifiez que le ### Conclusion -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. 
Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. +En utilisant les composants serveur de Next.js, nous avons effectivement caché la clé API du côté client, améliorant ainsi la sécurité de notre application. Cette méthode garantit que les opérations sensibles sont traitées côté serveur, à l'abri des vulnérabilités potentielles côté client. Enfin, n'oubliez pas d'explorer [d'autres mesures de sécurité des clés d'API](/subgraphs/querying/managing-api-keys/) pour renforcer encore davantage la sécurité de vos clés d'API. diff --git a/website/src/pages/fr/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/fr/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..e555ab7c0277 --- /dev/null +++ b/website/src/pages/fr/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Aperçu + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. 
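As a rough sketch of what this looks like in practice, a composed Subgraph declares another Subgraph as a data source in its manifest. The snippet below is illustrative: the deployment ID, names, and handler fields are placeholders, not values from the example repo.

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a source Subgraph, rather than an onchain contract
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # deployment ID of the source Subgraph (placeholder)
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handlers:
        - handler: handleBlock # runs whenever the source's entity changes
          entity: Block
```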
+
+## Prérequis
+
+To deploy **all** Subgraphs locally, you must have the following:
+
+- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally
+- An [IPFS](https://docs.ipfs.tech/) instance running locally
+- [Node.js](https://nodejs.org) and npm
+
+## Commencer
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Spécificités
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above. 
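The schemas for these source Subgraphs are described above but not shown; as a rough sketch (the field names are illustrative, not taken from the example repo), the block-time source Subgraph's `schema.graphql` might declare:

```graphql
type Block @entity {
  id: ID! # e.g. the block hash
  number: BigInt!
  timestamp: BigInt! # time the block was mined
}
```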
+ +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Ressources supplémentaires + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). 
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/fr/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/fr/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..b73a8fab8de7 --- /dev/null +++ b/website/src/pages/fr/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Présentation + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. 
**Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Commencer + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph
+
+To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: subgraph
+    name: Factory
+    network: base
+    source:
+      address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz'
+      startBlock: 82522
+```
+
+Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin.
+
+### Step 2. Define Handlers in Dependent Subgraph
+
+Below is an example of defining handlers in the dependent Subgraph:
+
+```typescript
+export function handleInitialize(trigger: EntityTrigger<Initialize>): void {
+  if (trigger.operation === EntityOp.Create) {
+    let entity = trigger.data
+    let poolAddressParam = Address.fromBytes(entity.poolAddress)
+
+    // Update pool sqrt price and tick
+    let pool = Pool.load(poolAddressParam.toHexString()) as Pool
+    pool.sqrtPrice = entity.sqrtPriceX96
+    pool.tick = BigInt.fromI32(entity.tick)
+    pool.save()
+
+    // Update token prices
+    let token0 = Token.load(pool.token0) as Token
+    let token1 = Token.load(pool.token1) as Token
+
+    // Update ETH price in USD
+    let bundle = Bundle.load('1') as Bundle
+    bundle.ethPriceUSD = getEthPriceInUSD()
+    bundle.save()
+
+    updatePoolDayData(entity)
+    updatePoolHourData(entity)
+
+    // Update derived ETH price for tokens
+    token0.derivedETH = findEthPerToken(token0)
+    token1.derivedETH = findEthPerToken(token1)
+    token0.save()
+    token1.save()
+  }
+}
+```
+
+In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger<Initialize>`. The handler updates the pool and token entities based on data from the new `Initialize` entity.
+
+`EntityTrigger` has three fields:
+
+1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`.
+2. `type`: Indicates the entity type.
+3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Ressources supplémentaires + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/fr/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/fr/subgraphs/cookbook/subgraph-debug-forking.mdx index cedcf3ece5c4..75a0c1543f83 100644 --- a/website/src/pages/fr/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,25 +2,25 @@ title: Débogage rapide et facile des subgraph à l'aide de Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## D'accord, qu'est-ce que c'est ? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## Quoi ? Comment ? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
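For example, since that store still answers GraphQL queries, you can inspect the state it reached before failing; with the Gravatar example used in this article, a query like the following (hypothetical, but matching the `Gravatar` entity's fields) shows what was indexed up to block _X_:

```graphql
{
  gravatars(first: 5) {
    id
    displayName
    imageUrl
  }
}
```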
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## S'il vous plaît, montrez-moi du code ! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. -Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: +Voici les gestionnaires définis pour indexer `Gravatar`s, sans aucun bug : ```tsx export function handleNewGravatar(event: NewGravatar): void { @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. La méthode habituelle pour tenter de résoudre le problème est la suivante : 1. Apportez une modification à la source des mappages, ce qui, selon vous, résoudra le problème (même si je sais que ce ne sera pas le cas). -2. 
Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Attendez qu’il soit synchronisé.
4. S'il se casse à nouveau, revenez au point 1, sinon : Hourra !

-It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+Il s'agit en fait d'un processus assez familier à un processus de débogage ordinaire, mais il y a une étape qui ralentit terriblement le processus : _3. Attendez qu'il se synchronise._

-Using **subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:

-0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+0. Démarrez un Graph Node local avec la **_fork-base_** appropriée configurée.
1. Apportez une modification à la source des mappings qui, selon vous, résoudra le problème.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. S'il casse à nouveau, revenez à 1, sinon : Hourra !

Maintenant, vous pouvez avoir 2 questions :

@@ -69,18 +69,18 @@ Maintenant, vous pouvez avoir 2 questions :

Je réponds :

-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
2. 
Fourcher est facile, pas besoin de transpirer :

```bash
$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
```

-Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!

Voici donc ce que je fais :

-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).

```
$ cargo run -p graph-node --release -- \
@@ -90,12 +90,12 @@ $ cargo run -p graph-node --release -- \
    --fork-base https://api.thegraph.com/subgraphs/id/
```

-2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
-3. 
After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +2. Après une inspection minutieuse, j'ai remarqué qu'il y avait un décalage dans les représentations `id` utilisées lors de l'indexation des `Gravatar`s dans mes deux handlers. Alors que `handleNewGravatar` le convertit en hexadécimal (`event.params.id.toHex()`), `handleUpdatedGravatar` utilise un int32 (`event.params.id.toI32()`) ce qui fait paniquer `handleUpdatedGravatar` avec "Gravatar not found!". Je fais en sorte qu'ils convertissent tous les deux l'`id` en hexadécimal. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. J'inspecte les logs générés par le Graph Node local et, Hourra!, tout semble fonctionner. -5. Je déploie mon subgraph, désormais débarrassé de tout bug, sur un Graph Node distant et vis heureux pour toujours ! (Malheureusement pas de patates, mais c’est la vie…) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/fr/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/fr/subgraphs/cookbook/subgraph-uncrashable.mdx index fadcd9b98faf..bb4a3f214759 100644 --- a/website/src/pages/fr/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Générateur de code de subgraph sécurisé --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. 
It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Pourquoi intégrer Subgraph Uncrashable ? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. 
-**Key Features**
+**Caractéristiques principales**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- Le cadre comprend également un moyen (via le fichier de configuration) de créer des fonctions de définition personnalisées, mais sûres, pour des groupes de variables d'entité. De cette façon, il est impossible pour l'utilisateur de charger/utiliser une entité de graph obsolète et il est également impossible d'oublier de sauvegarder ou de définir une variable requise par la fonction.

-- Les logs d'avertissement sont enregistrés sous forme de logs indiquant où il y a une violation de la logique du subgraph pour aider à corriger le problème afin d'assurer l'exactitude des données.
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

Subgraph Uncrashable peut être exécuté en tant qu'indicateur facultatif à l'aide de la commande Graph CLI codegen.

@@ -26,4 +26,4 @@ Subgraph Uncrashable peut être exécuté en tant qu'indicateur facultatif à l'

graph codegen -u [options] [<subgraph-manifest>]
```

-Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. 
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/fr/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/fr/subgraphs/cookbook/transfer-to-the-graph.mdx index d34a88327c64..cba37c882bf6 100644 --- a/website/src/pages/fr/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/fr/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Transférer vers The Graph +title: Transfer to The Graph --- -Mettez rapidement à jour vos subgraphs depuis n'importe quelle plateforme vers [le réseau décentralisé de The Graph](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Avantages du passage à The Graph -- Utilisez le même subgraph que vos applications utilisent déjà avec une migration sans interruption de service. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Améliorez la fiabilité grâce à un réseau mondial pris en charge par plus de 100 Indexers. -- Bénéficiez d’un support ultra-rapide pour vos subgraphs 24/7, avec une équipe d’ingénieurs de garde. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. ## Mettez à jour votre Subgraph vers The Graph en 3 étapes simples @@ -21,9 +21,9 @@ Mettez rapidement à jour vos subgraphs depuis n'importe quelle plateforme vers ### Créer un subgraph dans Subgraph Studio - Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille. -- Cliquez sur « Créer un subgraph ». Il est recommandé de nommer le subgraph en majuscule : « Nom du subgraph Nom de la chaîne ». +- Click "Create a Subgraph". 
It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Installer Graph CLI @@ -37,7 +37,7 @@ Utilisation de [npm](https://www.npmjs.com/) : npm install -g @graphprotocol/graph-cli@latest ``` -Utilisez la commande suivante pour créer un subgraph dans Studio en utilisant CLI : +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Déployez votre Subgraph sur Studio -Si vous avez votre code source, vous pouvez facilement le déployer sur Studio. Si vous ne l'avez pas, voici un moyen rapide de déployer votre subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. Dans Graph CLI, exécutez la commande suivante : @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:**: Chaque subgraph a un hash IPFS (ID de déploiement), qui ressemble à ceci : "Qmasdfad...". Pour déployer, utilisez simplement ce **hash IPFS**. Vous serez invité à entrer une version (par exemple, v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. Publier votre Subgraph sur The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Interroger votre Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. 
To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.

-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.

#### Exemple

-[Subgraph Ethereum CryptoPunks](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) par Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

![L'URL de requête](/img/cryptopunks-screenshot-transfer.png)

-L'URL de requête pour ce subgraph est :
+The query URL for this Subgraph is:

```sh
https://gateway-arbitrum.network.thegraph.com/api/`**votre-propre-clé-Api**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ Vous pouvez créer des clés API dans Subgraph Studio sous le menu "API Keys" en

### Surveiller l'état du Subgraph

-Une fois que vous avez mis à jour, vous pouvez accéder et gérer vos subgraphs dans [Subgraph Studio](https://thegraph.com/studio/) et explorer tous les subgraphs dans [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).

### Ressources supplémentaires

-To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-Pour explorer toutes les façons d'optimiser et de personnaliser votre subgraph pour de meilleures performances, lisez plus sur [la création d'un subgraph ici](/developing/creating-a-subgraph/). 
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/fr/subgraphs/developing/_meta-titles.json b/website/src/pages/fr/subgraphs/developing/_meta-titles.json index 01a91b09ed77..c49c19eec25d 100644 --- a/website/src/pages/fr/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { "creating": "Creating", "deploying": "Deploying", - "publishing": "Publishing", + "publishing": "Publication", "managing": "Managing" } diff --git a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx index 12e0f444c4d8..b64f4462d9d3 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/advanced.mdx @@ -4,20 +4,20 @@ title: Fonctionnalités avancées des subgraphs ## Aperçu -Ajoutez et implémentez des fonctionnalités avancées de subgraph pour améliorer la construction de votre subgraph. +Ajouter et mettre en œuvre des fonctionnalités avancées de subgraph pour améliorer la construction de votre subgraph. 
-À partir de `specVersion` `0.0.4`, les fonctionnalités de subgraph doivent être explicitement déclarées dans la section `features` au niveau supérieur du fichier de manifeste, en utilisant leur nom en `camelCase` comme indiqué dans le tableau ci-dessous : +A partir de la `specVersion` `0.0.4`, les fonctionnalités de Subgraph doivent être explicitement déclarées dans la section `features` au premier niveau du fichier manifest, en utilisant leur nom `camelCase`, comme listé dans le tableau ci-dessous : -| Fonctionnalité | Nom | -| --------------------------------------------------------- | ---------------- | -| [Erreurs non fatales](#non-fatal-errors) | `nonFatalErrors` | -| [Recherche plein texte](#defining-fulltext-search-fields) | `fullTextSearch` | -| [Greffage](#grafting-onto-existing-subgraphs) | `grafting` | +| Fonctionnalité | Nom | +| ----------------------------------------------------------- | ---------------- | +| [Erreurs non fatales](#non-fatal-errors) | `nonFatalErrors` | +| [Recherche plein texte](#defining-fulltext-search-fields) | `fullTextSearch` | +| [Greffage](#grafting-onto-existing-subgraphs) | `grafting` | -Par exemple, si un subgraph utilise les fonctionnalités **Full-Text Search** et **Non-fatal Errors**, le champ `features` dans le manifeste devrait être : +Par exemple, si un subgraph utilise les fonctionnalités **Recherche plein texte** et **Erreurs non fatales**, le champ `features` dans le manifeste devrait être : ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,17 +25,17 @@ features: dataSources: ... ``` -> Notez que L'utilisation d'une fonctionnalité sans la déclarer entraînera une **validation error** lors du déploiement du subgraph, mais aucune erreur ne se produira si une fonctionnalité est déclarée mais non utilisée. 
+> Notez que l'utilisation d'une fonctionnalité sans la déclarer entraînera une **erreur de validation** lors du déploiement du subgraph, mais aucune erreur ne se produira si une fonctionnalité est déclarée mais n'est pas utilisée. ## Séries chronologiques et agrégations -Prerequisites: +Prérequis : -- Subgraph specVersion must be ≥1.1.0. +- Le subgraph specVersion doit être ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Les séries chronologiques et les agrégations permettent à votre subgraph de suivre des statistiques telles que le prix moyen quotidien, le nombre total de transferts par heure, etc. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +Cette fonctionnalité introduit deux nouveaux types d'entités de subgraph. Les entités de séries chronologiques enregistrent des points de données avec des horodatages. Les entités d'agrégation effectuent des calculs prédéfinis sur les points de données des séries chronologiques sur une base horaire ou quotidienne, puis stockent les résultats pour faciliter l'accès via GraphQL. ### Exemple de schéma @@ -53,19 +53,19 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### Comment définir des séries chronologiques et des agrégations ? -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +Les entités de séries chronologiques sont définies avec `@entity(timeseries : true)` dans le schéma GraphQL. 
Chaque entité timeseries doit : -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. +- avoir un ID unique de type int8 +- avoir un horodatage de type Timestamp +- inclure les données qui seront utilisées pour le calcul par les entités d'agrégation. -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +Ces entités de séries chronologiques peuvent être enregistrées dans des gestionnaires de déclencheurs ordinaires et servent de "données brutes" pour les entités d'agrégation. -Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +Les entités d'agrégation sont définies avec `@aggregation` dans le schéma GraphQL. Chaque entité d'agrégation définit la source à partir de laquelle elle recueillera les données (qui doit être une entité de série chronologique), définit les intervalles (par exemple, heure, jour) et spécifie la fonction d'agrégation qu'elle utilisera (par exemple, sum, count, min, max, first, last). -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +Les entités d'agrégation sont automatiquement calculées sur la base de la source spécifiée à la fin de l'intervalle requis. #### Intervalles d'Agrégation disponibles @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Erreurs non fatales -Les erreurs d'indexation sur les subgraphs déjà synchronisés entraîneront, par défaut, l'échec du subgraph et l'arrêt de la synchronisation.
Les subgraphs peuvent également être configurés pour continuer la synchronisation en présence d'erreurs, en ignorant les modifications apportées par le gestionnaire qui a provoqué l'erreur. Cela donne aux auteurs de subgraphs le temps de corriger leurs subgraphs pendant que les requêtes continuent d'être traitées sur le dernier bloc, bien que les résultats puissent être incohérents en raison du bogue à l'origine de l'erreur. Notez que certaines erreurs sont toujours fatales. Pour être non fatale, l'erreur doit être connue pour être déterministe. +Les erreurs d'indexation sur des subgraphs déjà synchronisés entraîneront, par défaut, l'échec du subgraph et l'arrêt de la synchronisation. Les subgraphs peuvent également être configurés pour continuer la synchronisation en présence d'erreurs, en ignorant les modifications apportées par le gestionnaire qui a provoqué l'erreur. Les auteurs de subgraphs ont ainsi le temps de corriger leurs subgraphs tandis que les requêtes continuent d'être servies par rapport au dernier bloc, bien que les résultats puissent être incohérents en raison du bug qui a provoqué l'erreur. Notez que certaines erreurs sont toujours fatales. Pour être non fatale, l'erreur doit être connue comme étant déterministe. -> **Note:** The Graph Network ne supporte pas encore les erreurs non fatales, et les développeurs ne doivent pas déployer de subgraphs utilisant cette fonctionnalité sur le réseau via le Studio. +> **Note:** The Graph Network ne prend pas encore en charge les erreurs non fatales, et les développeurs ne doivent pas déployer les subgraphs utilisant cette fonctionnalité sur le réseau via le Studio. 
-L'activation des erreurs non fatales nécessite la définition de l'indicateur de fonctionnalité suivant sur le manifeste du subgraph : +Pour activer les erreurs non fatales, il faut définir l'indicateur de fonctionnalité suivant dans le manifeste du subgraph : ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -La requête doit également opter pour l'interrogation de données avec des incohérences potentielles via l'argument `subgraphError`. Il est également recommandé d'interroger `_meta` pour vérifier si le subgraph a ignoré des erreurs, comme dans l'exemple : +La requête doit également accepter d'interroger des données avec des incohérences potentielles grâce à l'argument `subgraphError`. Il est également recommandé d'interroger `_meta` pour vérifier si le subgraph a ignoré les erreurs, comme dans l'exemple : ```graphql foos(first: 100, subgraphError: allow) { @@ -145,7 +145,7 @@ Si le subgraph rencontre une erreur, cette requête renverra à la fois les donn ## File Data Sources de fichiers IPFS/Arweave -Les sources de données de fichiers sont une nouvelle fonctionnalité de subgraph permettant d'accéder aux données hors chaîne pendant l'indexation de manière robuste et extensible. Les sources de données de fichiers prennent en charge la récupération de fichiers depuis IPFS et Arweave. +Les sources de données de fichiers sont une nouvelle fonctionnalité de Subgraph permettant d'accéder à des données hors chaîne pendant l'indexation d'une manière robuste et extensible. Les sources de données de fichiers permettent de récupérer des fichiers à partir d'IPFS et d'Arweave. > Cela jette également les bases d’une indexation déterministe des données hors chaîne, ainsi que de l’introduction potentielle de données arbitraires provenant de HTTP.
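À titre d'illustration — esquisse indicative, avec des noms hypothétiques repris de l'exemple Crypto coven plus bas — une file data source s'appuie généralement sur une entité immuable distincte dans le schéma GraphQL, référencée par l'entité onchain :

```graphql
# Entité onchain, mise à jour par les gestionnaires habituels
type Token @entity {
  id: ID!
  tokenURI: String!
  ipfsURI: TokenMetadata # référence (CID) vers l'entité hors chaîne
}

# Entité créée par la file data source : elle doit être immuable
type TokenMetadata @entity(immutable: true) {
  id: ID!
  image: String
  description: String
  name: String
}
```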
@@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ L'exemple: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Cet exemple de code concerne un sous-graphe de Crypto coven. Le hachage ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour toutes les NFT de l'alliance cryptographique. +//Cet exemple de code concerne un subgraph Crypto coven. Le hash ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour tous les NFT de la communauté crypto export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Ceci crée un chemin vers les métadonnées pour un seul Crypto coven NFT. Il concatène le répertoire avec "/" + nom de fichier + ".json" + //Cette opération crée un chemin d'accès aux métadonnées d'un seul Crypto coven NFT. Il concatène le répertoire avec "/" + nom de fichier + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ Cela créera une nouvelle source de données de fichier, qui interrogera le poin Cet exemple utilise le CID comme référence entre l'entité parent `Token` et l'entité résultante `TokenMetadata`. -> Auparavant, c'est à ce stade qu'un développeur de subgraphs aurait appelé `ipfs.cat(CID)` pour récupérer le fichier +> Auparavant, c'est à ce stade qu'un développeur de Subgraph aurait appelé `ipfs.cat(CID)` pour récupérer le fichier Félicitations, vous utilisez des sources de données de fichiers ! 
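Pour compléter l'extrait ci-dessus — esquisse indicative, le flux exact dépend de votre projet — la file data source est créée dans le gestionnaire via le template généré :

```javascript
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

// Dans le gestionnaire, une fois le chemin IPFS construit (voir l'exemple ci-dessus) :
// crée une file data source pour ce CID ; le gestionnaire `handleMetadata` du
// template sera invoqué lorsque le fichier aura été récupéré depuis IPFS/Arweave
TokenMetadataTemplate.create(tokenIpfsHash)
```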
-#### Déployer vos subgraphs +#### Déploiement de vos Subgraphs -Vous pouvez maintenant `construire` et `déployer` votre subgraph sur n'importe quel Graph Node >=v0.30.0-rc.0. +Vous pouvez maintenant `construire` et `déployer` votre Subgraph sur n'importe quel Graph Node >=v0.30.0-rc.0. #### Limitations -Les entités et les gestionnaires de sources de données de fichiers sont isolés des autres entités du subgraph, ce qui garantit que leur exécution est déterministe et qu'il n'y a pas de contamination des sources de données basées sur des chaînes. Pour être plus précis : +Les entités et les gestionnaires de sources de données de fichiers sont isolés des autres entités du subgraph, ce qui garantit qu'ils sont déterministes lorsqu'ils sont exécutés et qu'il n'y a pas de contamination des sources de données basées sur la blockchain. Pour être plus précis : - Les entités créées par les sources de données de fichiers sont immuables et ne peuvent pas être mises à jour - Les gestionnaires de sources de données de fichiers ne peuvent pas accéder à des entités provenant d'autres sources de données de fichiers - Les entités associées aux sources de données de fichiers ne sont pas accessibles aux gestionnaires basés sur des chaînes -> Cette contrainte ne devrait pas poser de problème pour la plupart des cas d'utilisation, mais elle peut en compliquer certains. N'hésitez pas à nous contacter via Discord si vous rencontrez des problèmes pour modéliser vos données basées sur des fichiers dans un subgraph ! +> Cette contrainte ne devrait pas poser de problème pour la plupart des cas d'utilisation, mais elle peut en compliquer certains. N'hésitez pas à nous contacter via Discord si vous rencontrez des problèmes pour modéliser vos données dans un Subgraph ! En outre, il n'est pas possible de créer des sources de données à partir d'une source de données de fichier, qu'il s'agisse d'une source de données onchain ou d'une autre source de données de fichier.
Cette restriction pourrait être levée à l'avenir. @@ -365,15 +365,15 @@ Les gestionnaires pour les fichiers sources de données ne peuvent pas être dan > **Nécessite** : [SpecVersion](#specversion-releases) >= `1.2.0` -Les filtres de topics, également connus sous le nom de filtres d'arguments indexés, sont une fonctionnalité puissante dans les subgraphs qui permettent aux utilisateurs de filtrer précisément les événements de la blockchain en fonction des valeurs de leurs arguments indexés. +Les filtres thématiques, également connus sous le nom de filtres d'arguments indexés, sont une fonctionnalité puissante des Subgraphs qui permet aux utilisateurs de filtrer précisément les événements de la blockchain en fonction des valeurs de leurs arguments indexés. -- Ces filtres aident à isoler des événements spécifiques intéressants parmi le vaste flux d'événements sur la blockchain, permettant aux subgraphs de fonctionner plus efficacement en se concentrant uniquement sur les données pertinentes. +- Ces filtres permettent d'isoler des événements spécifiques intéressants du vaste flux d'événements sur la blockchain, ce qui permet aux Subgraphs de fonctionner plus efficacement en se concentrant uniquement sur les données pertinentes. - Ceci est utile pour créer des subgraphs personnels qui suivent des adresses spécifiques et leurs interactions avec divers contrats intelligents sur la blockchain. ### Comment fonctionnent les filtres de Topics -Lorsqu'un contrat intelligent émet un événement, tous les arguments marqués comme indexés peuvent être utilisés comme filtres dans le manifeste d'un subgraph. Ceci permet au subgraph d'écouter de façon sélective les événements qui correspondent à ces arguments indexés. +Lorsqu'un contrat intelligent émet un événement, tous les arguments marqués comme indexés peuvent être utilisés comme filtres dans le manifeste d'un subgraph. Cela permet au subgraph d'écouter sélectivement les événements qui correspondent à ces arguments indexés.
- Le premier argument indexé de l'événement correspond à `topic1`, le second à `topic2`, et ainsi de suite, jusqu'à `topic3`, puisque la machine virtuelle Ethereum (EVM) autorise jusqu'à trois arguments indexés par événement. @@ -401,7 +401,7 @@ Dans cet exemple: #### Configuration dans les subgraphs -Les filtres de topics sont définis directement dans la configuration du gestionnaire d'évènement situé dans le manifeste du subgraph. Voici comment ils sont configurés : +Les filtres thématiques sont définis directement dans la configuration du gestionnaire d'événements dans le manifeste du Subgraph. Voici comment ils sont configurés : ```yaml eventHandlers: @@ -452,17 +452,17 @@ Dans cette configuration: - `topic1` est configuré pour filtrer les événements `Transfer` dont l'expéditeur est `0xAddressA`, `0xAddressB`, `0xAddressC`. - `topic2` est configuré pour filtrer les événements `Transfer` où `0xAddressB` et `0xAddressC` sont les destinataires. -Le subgraph indexera les transactions qui se produisent dans les deux sens entre plusieurs adresses, permettant une surveillance complète des interactions impliquant toutes les adresses. +Le subgraph indexe les transactions qui se produisent dans les deux sens entre plusieurs adresses, ce qui permet un suivi complet des interactions impliquant toutes les adresses. ## Déclaration eth_call > Remarque : Il s'agit d'une fonctionnalité expérimentale qui n'est pas encore disponible dans une version stable de Graph Node. Vous ne pouvez l'utiliser que dans Subgraph Studio ou sur votre nœud auto-hébergé. -Les `eth_calls' déclaratifs sont une caractéristique précieuse des subgraphs qui permet aux `eth_calls' d'être exécutés à l'avance, ce qui permet à `graph-node` de les exécuter en parallèle. +Les `eth_calls` déclaratifs sont une fonctionnalité précieuse des Subgraphs qui permet aux `eth_calls` d'être exécutés à l'avance, permettant à `graph-node` de les exécuter en parallèle.
Cette fonctionnalité permet de : -- Améliorer de manière significative les performances de la récupération des données de la blockchain Ethereum en réduisant le temps total pour plusieurs appels et en optimisant l'efficacité globale du subgraph. +- Améliorer considérablement les performances de la récupération des données de la blockchain Ethereum en réduisant le temps total des appels multiples et en optimisant l'efficacité globale du subgraph. - Permet une récupération plus rapide des données, entraînant des réponses de requête plus rapides et une meilleure expérience utilisateur. - Réduire les temps d'attente pour les applications qui doivent réunir des données de plusieurs appels Ethereum, rendant le processus de récupération des données plus efficace. @@ -474,7 +474,7 @@ Cette fonctionnalité permet de : #### Scénario sans `eth_calls` déclaratifs -Imaginez que vous ayez un subgraph qui doit effectuer trois appels Ethereum pour récupérer des données sur les transactions, le solde et les avoirs en jetons d'un utilisateur. +Imaginez que vous ayez un subgraph qui doit faire trois appels Ethereum pour récupérer des données sur les transactions, le solde et les avoirs en jetons d'un utilisateur. Traditionnellement, ces appels pourraient être effectués de manière séquentielle : @@ -498,15 +498,15 @@ Temps total pris = max (3, 2, 4) = 4 secondes #### Comment ça marche -1. Définition déclarative : Dans le manifeste du subgraph, vous déclarez les appels Ethereum d'une manière indiquant qu'ils peuvent être exécutés en parallèle. +1. Définition déclarative : Dans le manifeste du Subgraph, vous déclarez les appels Ethereum d'une manière qui indique qu'ils peuvent être exécutés en parallèle. 2. Moteur d'exécution parallèle : Le moteur d'exécution de Graph Node reconnaît ces déclarations et exécute les appels simultanément. -3. Agrégation des résultats : Une fois que tous les appels sont terminés, les résultats sont réunis et utilisés par le subgraph pour un traitement ultérieur.
+3. Agrégation des résultats : Une fois tous les appels terminés, les résultats sont agrégés et utilisés par le Subgraph pour la suite du traitement. #### Exemple de configuration dans le manifeste du subgraph Les `eth_calls` déclarés peuvent accéder à l'adresse `event.address` de l'événement sous-jacent ainsi qu'à tous les paramètres `event.params`. -`Subgraph.yaml` utilisant `event.address` : +`subgraph.yaml` en utilisant `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Détails pour l'exemple ci-dessus : - Le texte (`Pool[event.address].feeGrowthGlobal0X128()`) est le `eth_call` réel qui sera exécuté, et est sous la forme de `Contract[address].function(arguments)` - L'adresse et les arguments peuvent être remplacés par des variables qui seront disponibles lorsque le gestionnaire sera exécuté. -`Subgraph.yaml` utilisant `event.params` +`subgraph.yaml` en utilisant `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** il n'est pas recommandé d'utiliser le greffage lors de l'upgrade initial vers The Graph Network. Pour en savoir plus [ici](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -Lorsqu'un subgraph est déployé pour la première fois, il commence à indexer les événements au bloc de initial de la blockchain correspondante (ou au `startBlock` défini avec chaque source de données). Dans certaines circonstances, il est avantageux de réutiliser les données d'un subgraph existant et de commencer l'indexation à un bloc beaucoup plus tardif. Ce mode d'indexation est appelé _Grafting_. Le greffage (grafting) est, par exemple, utile pendant le développement pour surmonter rapidement de simples erreurs dans les mappages ou pour faire fonctionner temporairement un subgraph existant après qu'il ait échoué. +Lorsqu'un subgraph est déployé pour la première fois, il commence à indexer les événements au bloc de genèse de la chaîne correspondante (ou au `startBlock` défini avec chaque source de données). 
Dans certaines circonstances, il est avantageux de réutiliser les données d'un subgraph existant et de commencer l'indexation à un bloc beaucoup plus tardif. Ce mode d'indexation est appelé "greffage". Le greffage est, par exemple, utile pendant le développement pour surmonter rapidement de simples erreurs dans les mappages ou pour rétablir temporairement le fonctionnement d'un subgraph existant après qu'il a échoué. Un subgraph est greffé sur un subgraph de base lorsque le manifeste du subgraph dans `subgraph.yaml` contient un bloc `graft` au niveau supérieur : ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph - block: 7345624 # Block number + base: Qm... # ID du Subgraph de base + block: 7345624 # Numéro de bloc ``` -Lorsqu'un subgraph dont le manifeste contient un bloc `graft` est déployé, Graph Node copiera les données du subgraph `de base` jusqu'au bloc spécifié inclus, puis continuera à indexer le nouveau subgraph à partir de ce bloc. Le subgraph de base doit exister sur l'instance cible de Graph Node et doit avoir indexé au moins jusqu'au bloc spécifié. En raison de cette restriction, le greffage ne doit être utilisé que pendant le développement ou en cas d'urgence pour accélérer la production d'un subgraph équivalent non greffé. +Lorsqu'un subgraph dont le manifeste contient un bloc `graft` est déployé, Graph Node va copier les données du subgraph `base` jusqu'au `block` donné inclus, puis continuer à indexer le nouveau subgraph à partir de ce bloc. Le subgraph de base doit exister sur l'instance du Graph Node cible et doit avoir été indexé au moins jusqu'au bloc donné. En raison de cette restriction, le greffage ne devrait être utilisé qu'en cours de développement ou en cas d'urgence pour accélérer la production d'un subgraph équivalent non greffé.
-Étant donné que le greffage copie plutôt que l'indexation des données de base, il est beaucoup plus rapide d'amener le susgraph dans le bloc souhaité que l'indexation à partir de zéro, bien que la copie initiale des données puisse encore prendre plusieurs heures pour de très gros subgraphs. Pendant l'initialisation du subgraph greffé, le nœud graphique enregistrera des informations sur les types d'entités qui ont déjà été copiés. +Étant donné que la greffe copie les données de base plutôt que de les indexer, il est beaucoup plus rapide d'amener le subgraph au bloc souhaité que de l'indexer à partir de zéro, bien que la copie initiale des données puisse encore prendre plusieurs heures pour les très grands subgraphs. Pendant l'initialisation du subgraph greffé, Graph Node enregistre des informations sur les types d'entités qui ont déjà été copiés. -Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à celui du subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schema de subgraph valide en tant que tel, mais il peut s'écarter du schema du subgraph de base de la manière suivante : +Le Subgraph greffé peut utiliser un schéma GraphQL qui n'est pas identique à celui du Subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schéma de Subgraph valide en tant que tel, mais il peut s'écarter du schéma du Subgraph de base de la manière suivante : - Il ajoute ou supprime des types d'entité - Il supprime les attributs des types d'entité @@ -560,4 +560,4 @@ Le subgraph greffé peut utiliser un schema GraphQL qui n'est pas identique à c - Il ajoute ou supprime des interfaces - Cela change pour quels types d'entités une interface est implémentée -> **[Gestion des fonctionnalités](#experimental-features):** `grafting` doit être déclaré sous `features` dans le manifeste du subgraph. +> **[Gestion des fonctionnalités](#experimental-features):** `grafting` doit être déclaré sous `features` dans le manifeste du Subgraph.
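Pour rassembler les éléments ci-dessus — esquisse indicative, valeurs d'exemple — un manifeste greffé déclare à la fois la fonctionnalité `grafting` sous `features` et le bloc `graft` au niveau supérieur :

```yaml
specVersion: 1.3.0
description: Gravatar for Ethereum
features:
  - grafting # requis : sinon, erreur de validation au déploiement
graft:
  base: Qm... # ID du Subgraph de base
  block: 7345624 # reprendre l'indexation après ce bloc
dataSources: ...
```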
diff --git a/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx index 7bb87fa69ab6..7a7febddbebb 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ Les mappages prennent des données d'une source particulière et les transformen Pour chaque gestionnaire d'événements défini dans `subgraph.yaml` sous `mapping.eventHandlers`, créez une fonction exportée du même nom. Chaque gestionnaire doit accepter un seul paramètre appelé `event` avec un type correspondant au nom de l'événement traité. -Dans le subgraph d'exemple, `src/mapping.ts` contient des gestionnaires pour les événements `NewGravatar` et `UpdatedGravatar`: +Dans le Subgraph d'exemple, `src/mapping.ts` contient des gestionnaires pour les événements `NewGravatar` et `UpdatedGravatar` : ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ Si aucune valeur n'est définie pour un champ de la nouvelle entité avec le mê ## Génération de code -Afin de faciliter et de sécuriser le travail avec les contrats intelligents, les événements et les entités, la CLI Graph peut générer des types AssemblyScript à partir du schéma GraphQL du subgraph et des ABI de contrat inclus dans les sources de données. +Afin de faciliter et de sécuriser le travail avec les contrats intelligents, les événements et les entités, Graph CLI peut générer des types AssemblyScript à partir du schéma GraphQL du Subgraph et des ABI des contrats inclus dans les sources de données.
Cela se fait avec @@ -80,7 +80,7 @@ Cela se fait avec graph codegen [--output-dir ] [] ``` -mais dans la plupart des cas, les subgraphs sont déjà préconfigurés via `package.json` pour vous permettre d'exécuter simplement l'un des éléments suivants pour obtenir le même résultat : +mais dans la plupart des cas, les Subgraphs sont déjà préconfigurés via `package.json` pour vous permettre d'exécuter simplement l'un des éléments suivants pour obtenir le même résultat : ```sh # Yarn yarn codegen npm run codegen ``` -Cela va générer une classe AssemblyScript pour chaque contrat intelligent dans les fichiers ABI mentionnés dans `subgraph.yaml`, vous permettant de lier ces contrats à des adresses spécifiques dans les mappagess et d'appeler des méthodes de contrat en lecture seule sur le bloc en cours de traitement. Il génère également une classe pour chaque événement de contrat afin de fournir un accès facile aux paramètres de l'événement, ainsi qu'au bloc et à la transaction d'où provient l'événement. Tous ces types sont écrits dans `//.ts`. Dans l'exemple du subgraph, ce serait `generated/Gravity/Gravity.ts`, permettant aux mappages d'importer ces types avec. +Cela va générer une classe AssemblyScript pour chaque contrat intelligent dans les fichiers ABI mentionnés dans `subgraph.yaml`, vous permettant de lier ces contrats à des adresses spécifiques dans les mappages et d'appeler des méthodes de contrat en lecture seule sur le bloc en cours de traitement. Il génère également une classe pour chaque événement de contrat afin de fournir un accès facile aux paramètres de l'événement, ainsi qu'au bloc et à la transaction d'où provient l'événement. Tous ces types sont écrits dans `//.ts`. Dans le Subgraph d'exemple, ce serait `generated/Gravity/Gravity.ts`, permettant aux mappages d'importer ces types avec.
```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -En outre, une classe est générée pour chaque type d'entité dans le schéma GraphQL du subgraph. Ces classes fournissent un chargement sécurisé des entités, un accès en lecture et en écriture aux champs des entités ainsi qu'une méthode `save()` pour écrire les entités dans le store. Toutes les classes d'entités sont écrites dans le fichier `/schema.ts`, ce qui permet aux mappages de les importer avec la commande +En outre, une classe est générée pour chaque type d'entité dans le schéma GraphQL du Subgraph. Ces classes fournissent un chargement d'entité sécurisé, un accès en lecture et en écriture aux champs de l'entité ainsi qu'une méthode `save()` pour écrire les entités dans le store. Toutes les classes d'entités sont écrites dans le fichier `/schema.ts`, ce qui permet aux mappages de les importer avec la commande ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** La génération de code doit être exécutée à nouveau après chaque modification du schéma GraphQL ou des ABIs incluses dans le manifeste. Elle doit également être effectuée au moins une fois avant de construire ou de déployer le subgraphs. +> **Note:** La génération de code doit être exécutée à nouveau après chaque modification du schéma GraphQL ou des ABIs inclus dans le manifeste. Elle doit également être effectuée au moins une fois avant de construire ou de déployer le Subgraph. -La génération de code ne vérifie pas votre code de mappage dans `src/mapping.ts`. Si vous souhaitez vérifier cela avant d'essayer de déployer votre subgraph sur Graph Explorer, vous pouvez exécuter `yarn build` et corriger les erreurs de syntaxe que le compilateur TypeScript pourrait trouver. +La génération de code ne vérifie pas votre code de mappage dans `src/mapping.ts`. 
Si vous voulez le vérifier avant d'essayer de déployer votre Subgraph dans Graph Explorer, vous pouvez lancer `yarn build` et corriger les erreurs de syntaxe que le compilateur TypeScript pourrait trouver. diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..e1411a2c1465 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,12 +1,18 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes - [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) - Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + Merci [@YaroShkvorets](https://github.com/YaroShkvorets) ! - Mise à jour de toutes les dépendances ## 0.36.0 @@ -14,16 +20,16 @@ - [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and - associated types. + Merci à [@incrypto32](https://github.com/incrypto32) ! - Ajout de la prise en charge de la source de données de Subgraphs et + types associés. 
## 0.35.1 -### Patch Changes +### Changements dans les correctifs - [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) - Thanks [@incrypto32](https://github.com/incrypto32)! - Update return type for ethereum.hasCode + Merci [@incrypto32](https://github.com/incrypto32) ! - Mise à jour du type de retour pour ethereum.hasCode ## 0.35.0 @@ -31,7 +37,7 @@ - [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) - Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + Merci [@incrypto32](https://github.com/incrypto32) ! - Ajouter la prise en charge de la méthode eth.hasCode ## 0.34.0 @@ -39,8 +45,8 @@ - [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL - `Timestamp` scalar as `i64` (AssemblyScript) + Merci [@dotansimha](https://github.com/dotansimha)! - Ajout d'un support pour la gestion de GraphQL + `Timestamp` scalaire en tant que `i64` (AssemblyScript) ## 0.33.0 @@ -48,7 +54,7 @@ - [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) - Thanks [@incrypto32](https://github.com/incrypto32)! - Added getBalance call to ethereum API + Merci [@incrypto32](https://github.com/incrypto32) ! 
- Ajout de l'appel getBalance à l'API ethereum ## 0.32.0 @@ -56,7 +62,7 @@ - [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) - Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + Merci [@xJonathanLEI](https://github.com/xJonathanLEI) ! - ajouter les types de données de starknet ## 0.31.0 @@ -64,12 +70,12 @@ - [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) - Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + Merci [@incrypto32](https://github.com/incrypto32) ! - export `loadRelated` host function - [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) - Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` - scalar as `i64` (AssemblyScript) + Merci à [@dotansimha](https://github.com/dotansimha) ! - Ajout du support de la gestion des scalaires GraphQL `Int8` en tant que `i64` (AssemblyScript). + scalaire GraphQL comme `i64` (AssemblyScript) ## 0.30.0 @@ -77,25 +83,25 @@ - [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) - Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 - Address + Merci [@saihaj](https://github.com/saihaj) ! 
- introduction d'un nouvel utilitaire Ethereum pour obtenir une adresse
+ CREATE2

- [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8)
- Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function
+ Merci [@saihaj](https://github.com/saihaj) ! - exposer la fonction `get_in_block` de l'hôte

## 0.29.3

-### Patch Changes
+### Changements dans les correctifs

- [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93)
- Thanks [@saihaj](https://github.com/saihaj)! - fix publihsed contents
+ Merci [@saihaj](https://github.com/saihaj) ! - Correction des contenus publiés

## 0.29.2

-### Patch Changes
+### Changements dans les correctifs

- [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c)
- Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages
+ Merci [@saihaj](https://github.com/saihaj) ! 
- publier le readme avec les paquets

diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md
index b6771a8305e5..1661eae0df70 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md
+++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/README.md
@@ -1,68 +1,66 @@
-# The Graph TypeScript Library (graph-ts)
+# La bibliothèque Graph TypeScript (graph-ts)

-[![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts)
-[![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts)
+[![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts)
+[![État de la construction](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts)

-TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to
+Bibliothèque TypeScript/AssemblyScript pour l'écriture de mappages de Subgraphs à déployer sur
 [The Graph](https://github.com/graphprotocol/graph-node).

## Usage

-For a detailed guide on how to create a subgraph, please see the
+Pour un guide détaillé sur la création d'un Subgraph, veuillez consulter la page
 [Graph CLI docs](https://github.com/graphprotocol/graph-cli).

-One step of creating the subgraph is writing mappings that will process blockchain events and will
-write entities into the store. These mappings are written in TypeScript/AssemblyScript.
+Une étape de la création du Subgraph consiste à écrire des mappages qui traiteront les événements de la blockchain et
+écriront des entités dans le store. Ces mappages sont écrits en TypeScript/AssemblyScript. 
-The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart
-contracts, data on IPFS, cryptographic functions and more. To use it, all you have to do is add a
-dependency on it:
+La bibliothèque `graph-ts` fournit des API pour accéder au store Graph Node, aux données de la blockchain, aux contrats intelligents, aux données sur IPFS, aux fonctions cryptographiques et plus encore. Pour l'utiliser, il vous suffit d'ajouter
+une dépendance sur cette bibliothèque :

```sh
npm install --dev @graphprotocol/graph-ts # NPM
yarn add --dev @graphprotocol/graph-ts # Yarn
```

-After that, you can import the `store` API and other features from this library in your mappings. A
-few examples:
+Ensuite, vous pouvez importer l'API `store` et d'autres fonctionnalités de cette bibliothèque dans vos mappages. Quelques exemples :

```typescript
import { crypto, store } from '@graphprotocol/graph-ts'

-// This is just an example event type generated by `graph-cli`
-// from an Ethereum smart contract ABI
+// Ceci est juste un exemple de type d'événement généré par `graph-cli`
+// à partir de l'ABI d'un contrat intelligent Ethereum
import { NameRegistered } from './types/abis/SomeContract'

-// This is an example of an entity type generated from a
-// subgraph's GraphQL schema
+// Voici un exemple de type d'entité généré à partir du
+// schéma GraphQL d'un subgraph
import { Domain } from './types/schema'

function handleNameRegistered(event: NameRegistered) {
-  // Example use of a crypto function
+  // Exemple d'utilisation d'une fonction crypto
  let id = crypto.keccak256(name).toHexString()
-  // Example use of the generated `Entry` class
+  // Exemple d'utilisation de la classe `Entry` générée
  let domain = new Domain()
  domain.name = name
  domain.owner = event.params.owner
  domain.timeRegistered = event.block.timestamp
-  // Example use of the store API
+  // Exemple d'utilisation de l'API store
  store.set('Name', id, entity)
}
```

-## Helper Functions for AssemblyScript
+## Fonctions d'aide pour AssemblyScript

-Refer to the `helper-functions.ts` file in
+Référez-vous au fichier `helper-functions.ts` dans
 [this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts)
-repository for a few common functions that help build on top of the AssemblyScript library, such as
-byte array concatenation, among others.
+pour quelques fonctions communes qui aident à construire au-dessus de la bibliothèque AssemblyScript, comme
+la concaténation de tableaux de bytes, entre autres.

## API

-Documentation on the API can be found
-[here](https://thegraph.com/docs/en/developer/assemblyscript-api/).
+La documentation sur l'API est disponible
+[ici](https://thegraph.com/docs/en/developer/assemblyscript-api/).

-For examples of `graph-ts` in use take a look at one of the following subgraphs:
+Pour des exemples d'utilisation de `graph-ts`, regardez l'un des Subgraphs suivants :

- https://github.com/graphprotocol/ens-subgraph
- https://github.com/graphprotocol/decentraland-subgraph
@@ -71,15 +69,15 @@ For examples of `graph-ts` in use take a look at one of the following subgraphs:
- https://github.com/graphprotocol/aragon-subgraph
- https://github.com/graphprotocol/dharma-subgraph

-## License
+## Licence

-Copyright © 2018 Graph Protocol, Inc. 
and contributors.
+Copyright © 2018 Graph Protocol, Inc. et contributeurs.

-The Graph TypeScript library is dual-licensed under the
-[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the
+La bibliothèque TypeScript de The Graph est soumise à une double licence : la
+[licence MIT](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) et la
 [Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE).

-Unless required by applicable law or agreed to in writing, software distributed under the License is
-distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-implied. See the License for the specific language governing permissions and limitations under the
-License.
+Sauf obligation légale ou accord écrit, le logiciel distribué dans le cadre de la Licence est
+distribué « EN L'ÉTAT », SANS GARANTIE NI CONDITION DE QUELQUE NATURE QUE CE SOIT, expresses ou
+implicites. Voir la Licence pour les termes spécifiques régissant les permissions et les limitations dans le cadre de la
+Licence. 
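Pour situer le patron décrit dans le README ci-dessus (un gestionnaire d'événement qui construit une entité et l'écrit dans le store), voici une esquisse TypeScript autonome et purement hypothétique : les types `NameRegisteredEvent` et `Domain` ainsi que le store en mémoire sont des substituts simplifiés pour illustration, et non l'API réelle de `graph-ts`.

```typescript
// Esquisse hypothétique : un store en mémoire remplace le store de Graph Node.
// Les types ci-dessous imitent le code généré, sans en faire partie.

interface NameRegisteredEvent {
  name: string
  owner: string
  timestamp: number
}

interface Domain {
  name: string
  owner: string
  timeRegistered: number
}

// Substitut minimal de l'API `store` : une Map indexée par "Type:id"
const store = new Map<string, Domain>()

function handleNameRegistered(event: NameRegisteredEvent): void {
  // Dans graph-ts, l'id serait typiquement dérivé via crypto.keccak256 ;
  // ici, une simple clé lisible suffit pour l'illustration.
  const id = `Domain:${event.name}`
  const domain: Domain = {
    name: event.name,
    owner: event.owner,
    timeRegistered: event.timestamp,
  }
  store.set(id, domain)
}

handleNameRegistered({ name: 'example.eth', owner: '0xabc', timestamp: 1700000000 })
```

L'idée à retenir : le gestionnaire est une fonction pure de l'événement vers des écritures dans le store, ce qui correspond au flux décrit dans le README.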
diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json index 5c5a85ba9a2e..5cde1b58c3ac 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Présentation", "api": "Référence API", - "common-issues": "Common Issues" + "common-issues": "Problèmes communs" } diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx index a74814844016..63c7591d8398 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API AssemblyScript --- -> Note : Si vous avez créé un subgraph avant la version `graph-cli`/`graph-ts` `0.22.0`, alors vous utilisez une ancienne version d'AssemblyScript. Il est recommandé de consulter le [`Guide de Migration`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note : Si vous avez créé un subgraph avant la version `graph-cli`/`graph-ts` `0.22.0`, alors vous utilisez une ancienne version d'AssemblyScript. Il est recommandé de consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/). -Découvrez quelles APIs intégrées peuvent être utilisées lors de l'écriture des mappages de subgraph. Il existe deux types d'APIs disponibles par défaut : +Découvrez les API intégrées qui peuvent être utilisées lors de l'écriture de mappages de subgraphs. 
Deux types d'API sont disponibles nativement :

- La [Bibliothèque TypeScript de The Graph](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code généré à partir des fichiers du subgraph par `graph codegen`
+- Code généré à partir des fichiers de subgraphs par `graph codegen`

Vous pouvez également ajouter d'autres bibliothèques comme dépendances, à condition qu'elles soient compatibles avec [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).

@@ -27,18 +27,18 @@ La bibliothèque `@graphprotocol/graph-ts` fournit les API suivantes :

### Versions

-La `apiVersion` dans le manifeste du subgraph spécifie la version de l'API de mappage exécutée par Graph Node pour un subgraph donné.
+La `apiVersion` dans le manifeste du subgraph spécifie la version de l'API de mappage qui est exécutée par Graph Node pour un subgraph donné.

-| Version | Notes de version |
-| :-: | --- |
-| 0.0.9 | Ajout de nouvelles fonctions hôtes [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Ajout de la validation pour l'existence des champs dans le schéma lors de l'enregistrement d'une entité. |
-| 0.0.7 | Ajout des classes `TransactionReceipt` et `Log`aux types Ethereum
Ajout du champ `receipt` à l'objet Ethereum Event | -| 0.0.6 | Ajout du champ `nonce` à l'objet Ethereum Transaction
Ajout de `baseFeePerGas` à l'objet Ethereum Block | -| 0.0.5 | AssemblyScript a été mis à niveau à niveau vers la version 0.19.10 (cela inclut des changements brusques, veuillez consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renommé en `ethereum.transaction.gasLimit` | -| 0.0.4 | Ajout du champ `functionSignature` à l'objet Ethereum SmartContractCall | -| 0.0.3 | Ajout du champ `from` à l'objet Ethereum Call
`ethereum.call.address` renommé en `ethereum.call.to` | -| 0.0.2 | Ajout du champ `input` à l'objet Ethereum Transaction | +| Version | Notes de version | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 0.0.9 | Ajout de nouvelles fonctions hôtes [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Ajout de la validation pour l'existence des champs dans le schéma lors de l'enregistrement d'une entité. | +| 0.0.7 | Ajout des classes `TransactionReceipt` et `Log`aux types Ethereum
Ajout du champ `receipt` à l'objet Ethereum Event | +| 0.0.6 | Ajout du champ `nonce` à l'objet Ethereum Transaction
Ajout de `baseFeePerGas` à l'objet Ethereum Block | +| 0.0.5 | AssemblyScript mis à jour vers la version 0.19.10 (cela inclut des changements de rupture, veuillez consulter le [`Guide de migration`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renommé en `ethereum.transaction.gasLimit` | +| 0.0.4 | Ajout du champ `functionSignature` à l'objet Ethereum SmartContractCall | +| 0.0.3 | Ajout du champ `from` à l'objet Ethereum Call
`ethereum.call.address` renommé en `ethereum.call.to` | +| 0.0.2 | Ajout du champ `input` à l'objet Ethereum Transaction | ### Types intégrés @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' L'API `store` permet de charger, sauvegarder et supprimer des entités dans et depuis le magasin Graph Node. -Les entités écrites dans le magasin correspondent directement aux types `@entity` définis dans le schéma GraphQL du subgraph. Pour faciliter le travail avec ces entités, la commande `graph codegen` fournie par [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) génère des classes d'entités, qui sont des sous-classes du type `Entity` intégré, avec des accesseurs et des mutateurs pour les champs du schéma ainsi que des méthodes pour charger et sauvegarder ces entités. +Les entités écrites dans le store correspondent aux types `@entity` définis dans le schéma GraphQL du subgraph. Pour faciliter le travail avec ces entités, la commande `graph codegen` fournie par [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) génère des classes d'entités, qui sont des sous-classes du type intégré `Entity`, avec des getters et des setters de propriétés pour les champs du schéma ainsi que des méthodes pour charger et sauvegarder ces entités. #### Création d'entités @@ -282,8 +282,8 @@ Depuis `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 et `@graphprotoco L'API de store facilite la récupération des entités créées ou mises à jour dans le bloc actuel. Une situation typique pour cela est qu'un gestionnaire crée une transaction à partir d'un événement onchain et qu'un gestionnaire ultérieur souhaite accéder à cette transaction si elle existe. -- Dans le cas où la transaction n'existe pas, le subgraph devra interroger la base de données pour découvrir que l'entité n'existe pas. 
Si l'auteur du subgraph sait déjà que l'entité doit avoir été créée dans le même bloc, utiliser `loadInBlock` évite ce détour par la base de données. -- Pour certains subgraphs, ces recherches infructueuses peuvent contribuer de manière significative au temps d'indexation. +- Dans le cas où la transaction n'existe pas, le subgraph devra aller dans la base de données simplement pour découvrir que l'entité n'existe pas. Si l'auteur du subgraph sait déjà que l'entité a dû être créée dans le même bloc, l'utilisation de `loadInBlock` évite cet aller-retour dans la base de données. +- Pour certains subgraphs, ces recherches manquées peuvent contribuer de manière significative au temps d'indexation. ```typescript let id = event.transaction.hash // ou de toute autre manière dont l'ID est construit @@ -380,11 +380,11 @@ L'API Ethereum donne accès aux contrats intelligents, aux variables d'état pub #### Prise en charge des types Ethereum -Comme pour les entités, `graph codegen` génère des classes pour tous les contrats intelligents et événements utilisés dans un subgraph. Pour cela, les ABIs des contrats doivent faire partie de la source de données dans le manifeste du subgraph. En général, les fichiers ABI sont stockés dans un dossier `abis/` . +Comme pour les entités, `graph codegen` génère des classes pour tous les contrats intelligents et les événements utilisés dans un subgraph. Pour cela, les ABI des contrats doivent faire partie de la source de données dans le manifeste du subgraph. Typiquement, les fichiers ABI sont stockés dans un dossier `abis/`. -Avec les classes générées, les conversions entre les types Ethereum et [les types intégrés](#built-in-types) se font en arrière-plan afin que les auteurs de subgraph n'aient pas à s'en soucier. +Avec les classes générées, les conversions entre les types Ethereum et les [types intégrés](#built-in-types) ont lieu en coulisses, de sorte que les auteurs de subgraphs n'ont pas à s'en préoccuper. 
-L’exemple suivant illustre cela. Étant donné un schéma de subgraph comme +L'exemple suivant l'illustre. Étant donné un schéma de Subgraphs tel que ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Accès à l'état du contrat intelligent -Le code généré par `graph codegen` inclut également des classes pour les contrats intelligents utilisés dans le subgraph. Celles-ci peuvent être utilisées pour accéder aux variables d'état publiques et appeler des fonctions du contrat au bloc actuel. +Le code généré par `graph codegen` comprend également des classes pour les contrats intelligents utilisés dans le subgraph. Celles-ci peuvent être utilisées pour accéder aux variables d'état publiques et appeler les fonctions du contrat dans le bloc actuel. Un modèle courant consiste à accéder au contrat dont provient un événement. Ceci est réalisé avec le code suivant : @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // renvoie false import { log } from '@graphprotocol/graph-ts' ``` -L'API `log` permet aux subgraphs d'enregistrer des informations sur la sortie standard de Graph Node ainsi que sur Graph Explorer. Les messages peuvent être enregistrés en utilisant différents niveaux de journalisation. Une syntaxe de chaîne de caractère de format de base est fournie pour composer des messages de journal à partir de l'argument. +L'API `log` permet aux subgraphs de consigner des informations sur la sortie standard de Graph Node ainsi que sur Graph Explorer. Les messages peuvent être enregistrés à différents niveaux. Une syntaxe de chaîne de caractère de format de base est fournie pour composer les messages de journal à partir d'un argument. L'API `log` inclut les fonctions suivantes : @@ -590,7 +590,7 @@ L'API `log` inclut les fonctions suivantes : - `log.info(fmt: string, args: Array): void` - enregistre un message d'information. - `log.warning(fmt: string, args: Array): void` - enregistre un avertissement. 
- `log.error(fmt: string, args: Array): void` - enregistre un message d'erreur.
-- `log.critical(fmt: string, args: Array): void` – enregistre un message critique _et_ met fin au subgraph.
+- `log.critical(fmt: string, args: Array): void` - enregistre un message critique _et_ met fin au Subgraph.

L'API `log` prend une chaîne de caractères de format et un tableau de valeurs de chaîne de caractères. Elle remplace ensuite les espaces réservés par les valeurs de chaîne de caractères du tableau. Le premier espace réservé `{}` est remplacé par la première valeur du tableau, le second `{}` est remplacé par la deuxième valeur, et ainsi de suite.

@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))

Le seul indicateur actuellement pris en charge est `json`, qui doit être passé à `ipfs.map`. Avec l'indicateur `json`, le fichier IPFS doit consister en une série de valeurs JSON, une valeur par ligne. L'appel à `ipfs.map` lira chaque ligne du fichier, la désérialisera en un `JSONValue` et appellera le callback pour chacune d'entre elles. Le callback peut alors utiliser des opérations des entités pour stocker des données à partir du `JSONValue`. Les modifications d'entité ne sont enregistrées que lorsque le gestionnaire qui a appelé `ipfs.map` se termine avec succès ; en attendant, elles sont conservées en mémoire, et la taille du fichier que `ipfs.map` peut traiter est donc limitée.

-En cas de succès, `ipfs.map` renvoie `void`. Si une invocation du callback provoque une erreur, le gestionnaire qui a invoqué `ipfs.map` est interrompu et le subgraph marqué comme échoué.
+En cas de succès, `ipfs.map` renvoie `void`. Si une invocation du callback provoque une erreur, le gestionnaire qui a invoqué `ipfs.map` est interrompu, et le subgraph est marqué comme ayant échoué. 
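La substitution des espaces réservés `{}` décrite pour l'API `log` peut s'esquisser en TypeScript ordinaire, exécutable hors de graph-ts ; la fonction `formatLog` ci-dessous est hypothétique et ne correspond pas à l'implémentation réelle de Graph Node, elle illustre seulement le comportement décrit (le premier `{}` reçoit la première valeur, et ainsi de suite).

```typescript
// Esquisse hypothétique de la substitution des espaces réservés `{}` :
// chaque occurrence consomme la valeur suivante du tableau d'arguments.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  // S'il n'y a plus de valeur disponible, l'espace réservé est laissé tel quel.
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

const message = formatLog('Transfert de {} de {} vers {}', ['42', '0xaaa', '0xbbb'])
// message === 'Transfert de 42 de 0xaaa vers 0xbbb'
```

Ce mécanisme explique pourquoi l'ordre des valeurs dans le tableau doit correspondre à l'ordre des `{}` dans la chaîne de format.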
### Crypto API @@ -770,44 +770,44 @@ Lorsque le type d'une valeur est certain, il peut être converti en un [type int ### Référence des conversions de types -| Source(s) | Destination | Fonctions de conversion | -| -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | aucune | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | aucune | -| Bytes (signé) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (non signé) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | aucune | -| int32 | i32 | aucune | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | aucune | -| int64 - int256 | BigInt | aucune | -| uint32 - uint256 | BigInt | aucune | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Source(s) | Destination | Fonctions de conversion | +| --------------------- | -------------------- | -------------------------------- | +| Address | Bytes | aucune | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | 
BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | aucune | +| Bytes (signé) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (non signé) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | aucune | +| int32 | i32 | aucune | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | aucune | +| int64 - int256 | BigInt | aucune | +| uint32 - uint256 | BigInt | aucune | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Métadonnées de la source de données @@ -836,7 +836,7 @@ La classe de base `Entity` et la classe enfant `DataSourceContext` disposent d'a ### DataSourceContext in Manifest -La section `context` de `dataSources` vous permet de définir des paires clé-valeur qui sont accessibles dans vos mappages de subgraphs. Les types disponibles sont `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. +La section `context` de `dataSources` vous permet de définir des paires clé-valeur accessibles dans vos mappages de subgraphs. Les types disponibles sont `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. 
Voici un exemple YAML illustrant l'utilisation de différents types dans la section `context` :

@@ -887,4 +887,4 @@ dataSources:
- `List` : Spécifie une liste d'éléments. Chaque élément doit spécifier son type et ses données.
- `BigInt` : Spécifie une grande valeur entière. Elle doit être mise entre guillemets en raison de sa grande taille.

-Ce contexte est ensuite accessible dans vos fichiers de mappage de subgraphs, permettant des subgraphs plus dynamiques et configurables.
+Ce contexte est ensuite accessible dans vos fichiers de mappage de Subgraph, ce qui permet de créer des Subgraphs plus dynamiques et configurables.
diff --git a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx
index a946b30a71b1..ec5500baac76 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@
 title: Common AssemblyScript Issues
 ---

-Il existe certains problèmes courants avec [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) lors du développement de subgraph. Ces problèmes varient en termes de difficulté de débogage, mais les connaître peut être utile. Voici une liste non exhaustive de ces problèmes :
+Il existe certains problèmes [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) que l'on rencontre fréquemment au cours du développement d'un subgraph. Leur difficulté de débogage varie, mais les connaître peut aider. Voici une liste non exhaustive de ces problèmes :

- Les variables de classe `Private` ne sont pas appliquées dans [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). Il n'y a aucun moyen de protéger les variables de classe d'une modification directe à partir de l'objet de la classe. 
- La portée n'est pas héritée dans les [fonctions de fermeture] (https://www.assemblyscript.org/status.html#on-closures), c'est-à-dire que les variables déclarées en dehors des fonctions de fermeture ne peuvent pas être utilisées. Explication dans les [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx index 0376a713f058..eaa6d4601d27 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installation du Graph CLI --- -> Pour utiliser votre subgraph sur le réseau décentralisé de The Graph, vous devrez [créer une clé API](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) dans [Subgraph Studio](https://thegraph.com/studio/apikeys/). Il est recommandé d'ajouter un signal à votre subgraph avec au moins 3 000 GRT pour attirer 2 à 3 Indexeurs. Pour en savoir plus sur la signalisation, consultez [curation](/resources/roles/curating/). +> Afin d'utiliser votre subgraph sur le réseau décentralisé de The Graph, vous devrez [créer une clé API](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) dans [Subgraph Studio](https://thegraph.com/studio/apikeys/). Il est recommandé d'ajouter un signal à votre subgraph avec au moins 3 000 GRT pour attirer 2 ou 3 Indexeurs. Pour en savoir plus sur la signalisation, consultez [Curation](/resources/roles/curating/). ## Aperçu -[Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) est une interface de ligne de commande qui facilite les commandes des développeurs pour The Graph. 
Il traite un [manifeste de subgraph](/subgraphs/developing/creating/subgraph-manifest/) et compile les [mappages](/subgraphs/developing/creating/assemblyscript-mappings/) pour créer les fichiers dont vous aurez besoin pour déployer le subgraph sur [Subgraph Studio](https://thegraph.com/studio/) et le réseau.
+Le [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) est une interface de ligne de commande qui facilite les commandes des développeurs pour The Graph. Il traite un [manifeste de Subgraph](/subgraphs/developing/creating/subgraph-manifest/) et compile les [mappages](/subgraphs/developing/creating/assemblyscript-mappings/) pour créer les fichiers dont vous aurez besoin pour déployer le subgraph dans [Subgraph Studio](https://thegraph.com/studio/) et sur le réseau.

## Introduction

@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
yarn global add @graphprotocol/graph-cli
```

-La commande `graph init` peut être utilisée pour configurer un nouveau projet de subgraph, soit à partir d'un contrat existant, soit à partir d'un exemple de subgraph. Si vous avez déjà déployé un contrat intelligent sur votre réseau préféré, vous pouvez démarrer un nouveau subgraph à partir de ce contrat pour commencer.
+La commande `graph init` peut être utilisée pour mettre en place un nouveau projet Subgraph, soit à partir d'un contrat existant, soit à partir d'un exemple de Subgraph. Si vous avez déjà un contrat intelligent déployé sur votre réseau préféré, vous pouvez démarrer un nouveau Subgraph à partir de ce contrat pour commencer.

## Créer un subgraph

### À partir d'un contrat existant

-La commande suivante crée un subgraph qui indexe tous les événements d'un contrat existant :
+La commande suivante crée un Subgraph qui indexe tous les événements d'un contrat existant :

```sh
graph init \
@@ -51,25 +51,25 @@ graph init \

- Si certains arguments optionnels manquent, il vous guide à travers un formulaire interactif. 
-- Le `` est l'ID de votre subgraph dans [Subgraph Studio](https://thegraph.com/studio/). Il se trouve sur la page de détails de votre subgraph. +- Le `` est l'identifiant de votre Subgraph dans [Subgraph Studio](https://thegraph.com/studio/). Il se trouve sur la page de détails de votre Subgraph. ### À partir d'un exemple de subgraph -La commande suivante initialise un nouveau projet à partir d'un exemple de subgraph : +La commande suivante permet d'initialiser un nouveau projet à partir d'un exemple de Subgraph : ```sh graph init --from-example=example-subgraph ``` -- Le [subgraph d'exemple](https://github.com/graphprotocol/example-subgraph) est basé sur le contrat Gravity de Dani Grant, qui gère les avatars des utilisateurs et émet des événements `NewGravatar` ou `UpdateGravatar` chaque fois que des avatars sont créés ou mis à jour. +- Le [Subgraph d'exemple](https://github.com/graphprotocol/example-subgraph) est basé sur le contrat Gravity de Dani Grant, qui gère les avatars des utilisateurs et émet des événements `NewGravatar` ou `UpdateGravatar` à chaque fois que des avatars sont créés ou mis à jour. -- Le subgraph gère ces événements en écrivant des entités `Gravatar` dans le store de Graph Node et en veillant à ce qu'elles soient mises à jour en fonction des événements. +- Le Subgraph gère ces événements en écrivant des entités `Gravatar` dans le store de Graph Node et en veillant à ce qu'elles soient mises à jour en fonction des événements. ### Ajouter de nouvelles `sources de données` à un subgraph existant -Les `dataSources` sont des composants clés des subgraphs. Ils définissent les sources de données que le subgraphs indexe et traite. Une `dataSource` spécifie quel smart contract doit être écouté, quels événements doivent être traités et comment les traiter. +Les `sources de données` sont des composants clés des subgraphs. Ils définissent les sources de données que le subgraph indexe et traite. 
Une `dataSource` spécifie quel contrat intelligent écouter, quels événements traiter et comment les traiter. -Les versions récentes de Graph CLI permettent d'ajouter de nouvelles `dataSources` à un subgraph existant grâce à la commande `graph add` : +Les versions récentes de Graph CLI permettent d'ajouter de nouvelles `dataSources` à un Subgraph existant grâce à la commande `graph add` : ```sh graph add
[] @@ -101,19 +101,5 @@ La commande `graph add` récupère l'ABI depuis Etherscan (à moins qu'un chemin Le(s) fichier(s) ABI doivent correspondre à votre(vos) contrat(s). Il existe plusieurs façons d'obtenir des fichiers ABI : - Si vous construisez votre propre projet, vous aurez probablement accès à vos ABI les plus récents. -- Si vous construisez un subgraph pour un projet public, vous pouvez télécharger ce projet sur votre ordinateur et obtenir l'ABI en utilisant [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) ou en utilisant `solc` pour compiler. -- Vous pouvez également trouver l'ABI sur [Etherscan](https://etherscan.io/), mais ce n'est pas toujours fiable, car l'ABI qui y est téléchargé peut être obsolète. Assurez-vous d'avoir le bon ABI, sinon l'exécution de votre subgraph échouera. - -## Versions disponibles de SpecVersion - -| Version | Notes de version | -| :-: | --- | -| 1.2.0 | Ajout de la prise en charge du [filtrage des arguments indexés](/#indexed-argument-filters--topic-filters) et de la déclaration `eth_call` | -| 1.1.0 | Prend en charge [Timeseries & Aggregations](#timeseries-and-aggregations). Ajout de la prise en charge du type `Int8` pour `id`. | -| 1.0.0 | Prend en charge la fonctionnalité [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) pour élaguer les subgraphs | -| 0.0.9 | Prend en charge la fonctionnalité `endBlock` | -| 0.0.8 | Ajout de la prise en charge des [gestionnaires de blocs](/developing/creating-a-subgraph/#polling-filter) et des [gestionnaires d'initialisation](/developing/creating-a-subgraph/#once-filter) d'interrogation. | -| 0.0.7 | Ajout de la prise en charge des [fichiers sources de données](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Prend en charge la variante de calcul rapide de la [Preuve d'indexation](/indexing/overview/#what-is-a-proof-of-indexing-poi). 
|
-| 0.0.5 | Ajout de la prise en charge des gestionnaires d'événement ayant accès aux reçus de transactions. |
-| 0.0.4 | Ajout de la prise en charge du management des fonctionnalités de subgraph. |
+- Si vous construisez un Subgraph pour un projet public, vous pouvez télécharger ce projet sur votre ordinateur et obtenir l'ABI en utilisant [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) ou en utilisant `solc` pour compiler.
+- Vous pouvez également trouver l'ABI sur [Etherscan](https://etherscan.io/), mais ce n'est pas toujours fiable, car l'ABI qui y est téléchargé peut être obsolète. Assurez-vous d'avoir le bon ABI, sinon l'exécution de votre Subgraph échouera.
diff --git a/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx
index 0d6ae1beb2bf..ce441c0e525c 100644
--- a/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx
+++ b/website/src/pages/fr/subgraphs/developing/creating/ql-schema.mdx
@@ -4,7 +4,7 @@ title: Schema The Graph QL

## Aperçu

-Le schéma de votre subgraph se trouve dans le fichier `schema.graphql`. Les schémas GraphQL sont définis à l'aide du langage de définition d'interface GraphQL.
+Le schéma de votre Subgraph se trouve dans le fichier `schema.graphql`. Les schémas GraphQL sont définis à l'aide du langage de définition d'interface GraphQL.

> Remarque : si vous n'avez jamais écrit de schéma GraphQL, il est recommandé de consulter ce guide sur le système de types GraphQL. La documentation de référence pour les schémas GraphQL est disponible dans la section [API GraphQL](/subgraphs/querying/graphql-api/).

@@ -12,7 +12,7 @@ Le schéma de votre subgraph se trouve dans le fichier `schema.graphql`. Les sch

Avant de définir des entités, il est important de prendre du recul et de réfléchir à la manière dont vos données sont structurées et liées.
-- Toutes les requêtes seront effectuées sur le modèle de données défini dans le schéma de subgraph. Par conséquent, la conception du schéma de subgraph doit être informée par les requêtes que votre application devra exécuter. +- Toutes les requêtes seront effectuées à partir du modèle de données défini dans le schéma du Subgraph. Par conséquent, la conception du schéma du Subgraph doit être guidée par les requêtes que votre application devra effectuer. - Il peut être utile d'imaginer les entités comme des "objets contenant des données", plutôt que comme des événements ou des fonctions. - Vous définissez les types d'entités dans `schema.graphql`, et Graph Node générera des champs de premier niveau pour interroger des instances uniques et des collections de ce type d'entité. - Chaque type qui doit être une entité doit être annoté avec une directive `@entity`. @@ -72,16 +72,16 @@ Pour certains types d'entités, l'`id` de `Bytes!` est construit à partir des i Les scalaires suivants sont supportés dans l'API GraphQL : -| Type | Description | -| --- | --- | -| `Bytes` | Tableau d'octets, représenté sous forme de chaîne hexadécimale. Couramment utilisé pour les hachages et adresses Ethereum. | -| `String` | Scalaire pour les valeurs de type `string`. Les caractères nuls ne sont pas pris en charge et sont automatiquement supprimés. | -| `Boolean` | Scalaire pour les valeurs de type `boolean` (booléennes). | -| `Int` | La spécification GraphQL définit `Int` comme un entier signé de 32 bits. | -| `Int8` | Un entier signé de 8 octets, également connu sous le nom d'entier signé de 64 bits, peut stocker des valeurs comprises entre -9 223 372 036 854 775 808 et 9 223 372 036 854 775 807. Il est préférable de l'utiliser pour représenter `i64` de l'ethereum. | -| `BigInt` | Grands entiers. Utilisé pour les types Ethereum `uint32`, `int64`, `uint64`, ..., `uint256`. Note : Tout ce qui est inférieur à `uint32`, comme `int32`, `uint24` ou `int8` est représenté par `i32`. 
| -| `BigDecimal` | `BigDecimal` Décimales de haute précision représentées par un significatif et un exposant. L'exposant est compris entre -6143 et +6144. Arrondi à 34 chiffres significatifs. | -| `Timestamp` | Il s'agit d'une valeur `i64` en microsecondes. Couramment utilisé pour les champs `timestamp` des séries chronologiques et des agrégations. | +| Type | Description | +| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Tableau d'octets, représenté sous forme de chaîne hexadécimale. Couramment utilisé pour les hachages et adresses Ethereum. | +| `String` | Scalaire pour les valeurs de type `string`. Les caractères nuls ne sont pas pris en charge et sont automatiquement supprimés. | +| `Boolean` | Scalaire pour les valeurs de type `boolean` (booléennes). | +| `Int` | La spécification GraphQL définit `Int` comme un entier signé de 32 bits. | +| `Int8` | Un entier signé de 8 octets, également connu sous le nom d'entier signé de 64 bits, peut stocker des valeurs comprises entre -9 223 372 036 854 775 808 et 9 223 372 036 854 775 807. Il est préférable de l'utiliser pour représenter `i64` de l'ethereum. | +| `BigInt` | Grands entiers. Utilisé pour les types Ethereum `uint32`, `int64`, `uint64`, ..., `uint256`. Note : Tout ce qui est inférieur à `uint32`, comme `int32`, `uint24` ou `int8` est représenté par `i32`. | +| `BigDecimal` | `BigDecimal` Décimales de haute précision représentées par un significatif et un exposant. L'exposant est compris entre -6143 et +6144. Arrondi à 34 chiffres significatifs. | +| `Timestamp` | Il s'agit d'une valeur `i64` en microsecondes. Couramment utilisé pour les champs `timestamp` des séries chronologiques et des agrégations. 
| ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Les recherches inversées peuvent être définies sur une entité à travers le champ `@derivedFrom`. Cela crée un champ virtuel sur l'entité qui peut être interrogé mais qui ne peut pas être défini manuellement par l'intermédiaire de l'API des correspondances. Il est plutôt dérivé de la relation définie sur l'autre entité. Pour de telles relations, il est rarement utile de stocker les deux côtés de la relation, et l'indexation et les performances des requêtes seront meilleures si un seul côté est stocké et que l'autre est dérivé. -Pour les relations un-à-plusieurs, la relation doit toujours être stockée du côté « un » et le côté « plusieurs » doit toujours être dérivé. Stocker la relation de cette façon, plutôt que de stocker un tableau d'entités du côté « plusieurs », entraînera des performances considérablement meilleures pour l'indexation et l'interrogation du sous-graphe. En général, le stockage de tableaux d’entités doit être évité autant que possible. +Pour les relations "un à plusieurs", la relation doit toujours être stockée du côté "un" et le côté "plusieurs" doit toujours être dérivé. Le stockage de la relation de cette manière, plutôt que le stockage d'un tableau d'entités du côté "plusieurs", se traduira par des performances nettement meilleures pour l'indexation et l'interrogation du subgraph. En général, le stockage de tableaux d'entités doit être évité autant que possible. 
#### Exemple @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Voici un exemple de la façon d'écrire une correspondance pour un Subgraph avec des recherches inversées : ```typescript let token = new Token(event.address) // Create Token -token.save() // tokenBalances is derived automatically +token.save() // tokenBalances est dérivé automatiquement let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Référence stockée ici tokenBalance.save() ``` @@ -222,7 +222,7 @@ Cette approche nécessite que les requêtes descendent vers un niveau supplémen query usersWithOrganizations { users { organizations { - # ceci est une entité UserOrganization + # ceci est une entité UserOrganization organization { name } @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Cette manière plus élaborée de stocker des relations plusieurs-à-plusieurs entraînera moins de données stockées pour le subgraph, et donc vers un subgraph qui est souvent considérablement plus rapide à indexer et à interroger. +Cette façon plus élaborée de stocker les relations de plusieurs à plusieurs permettra de stocker moins de données pour le Subgraph et, par conséquent, d'obtenir un Subgraph dont l'indexation et l'interrogation sont souvent beaucoup plus rapides. ### Ajouter des commentaires au schéma @@ -287,7 +287,7 @@ query { } ``` -> **[Gestion des fonctionnalités](#experimental-features):** A partir de `specVersion` `0.0.4`, `fullTextSearch` doit être déclaré dans la section `features` du manifeste du subgraph. +> **[Gestion des fonctionnalités](#experimental-features):** A partir de `specVersion` `0.0.4`, `fullTextSearch` doit être déclaré dans la section `features` du manifeste Subgraph. 
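Pour illustrer cette déclaration de fonctionnalité, voici une esquisse hypothétique (valeurs d'exemple, non tirées du document) de l'entrée correspondante dans `subgraph.yaml` :

```yaml
# Esquisse hypothétique : déclaration de la fonctionnalité fullTextSearch
# dans la section top-level `features` du manifeste (requise à partir de
# specVersion 0.0.4)
specVersion: 1.3.0
description: Exemple avec recherche en texte intégral
schema:
  file: ./schema.graphql
features:
  - fullTextSearch
```

Sans cette déclaration, un Subgraph utilisant une directive `@fulltext` dans son schéma sera rejeté lors du déploiement.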
## Langues prises en charge

@@ -295,30 +295,30 @@ Le choix d'une langue différente aura un effet définitif, bien que parfois sub

Dictionnaires de langues pris en charge :

-| Code | Dictionnaire |
-| ------ | ------------ |
-| simple | Général |
-| da | Danois |
-| nl | Néerlandais |
-| en | Anglais |
-| fi | Finlandais |
-| fr | Français |
-| de | Allemand |
-| hu | Hongrois |
-| it | Italien |
-| no | Norvégien |
-| pt | Portugais |
-| ro | Roumain |
-| ru | Russe |
-| es | Espagnol |
-| sv | Suédois |
-| tr | Turc |
+| Code | Dictionnaire |
+| ------ | ---------------- |
+| simple | Général |
+| da | Danois |
+| nl | Néerlandais |
+| en | Anglais |
+| fi | Finlandais |
+| fr | Français |
+| de | Allemand |
+| hu | Hongrois |
+| it | Italien |
+| no | Norvégien |
+| pt | Portugais |
+| ro | Roumain |
+| ru | Russe |
+| es | Espagnol |
+| sv | Suédois |
+| tr | Turc |

### Algorithmes de classement

Algorithmes de classement:

-| Algorithme | Description |
-| --- | --- |
-| rank | Utilisez la qualité de correspondance (0-1) de la requête en texte intégral pour trier les résultats. |
-| proximitéRang | Similaire au classement, mais inclut également la proximité des correspondances. |
+| Algorithme | Description |
+| -------------- | ----------------------------------------------------------------------------------------------------- |
+| rank | Utilisez la qualité de correspondance (0-1) de la requête en texte intégral pour trier les résultats. |
+| proximityRank | Similaire au classement, mais inclut également la proximité des correspondances.
| diff --git a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx index 4030093310a4..2e161787acff 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Démarrer votre subgraph ## Aperçu -The Graph contient des milliers de subgraphs déjà disponibles pour des requêtes. Consultez [The Graph Explorer](https://thegraph.com/explorer) et trouvez-en un qui correspond déjà à vos besoins. +The Graph contient des milliers de subgraphs qui peuvent déjà être interrogés. Consultez [The Graph Explorer](https://thegraph.com/explorer) et trouvez-en un qui correspond déjà à vos besoins. -Lorsque vous créez un [subgraph](/subgraphs/developing/subgraphs/), vous créez une API ouverte personnalisée qui extrait des données d'une blockchain, les traite, les stocke et les rend faciles à interroger via GraphQL. +Lorsque vous créez un [Subgraph](/subgraphs/developing/subgraphs/), vous créez une API ouverte personnalisée qui extrait des données d'une blockchain, les traite, les stocke et les rend faciles à interroger via GraphQL. -Le développement de subgraphs peut aller de simples modèles « scaffold » à des subgraphs avancés, spécialement adaptés à vos besoins. +Le développement de subgraphs va de simples subgraphs basiques générés à partir d'un modèle, à des subgraphs avancés spécifiquement adaptés à vos besoins. ### Commencez à développer -Lancez le processus et construisez un subgraph qui correspond à vos besoins : +Commencez le processus et construisez un subgraph qui correspond à vos besoins : 1. [Installer la CLI](/subgraphs/developing/creating/install-the-cli/) - Configurez votre infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Comprenez le composant clé d'un subgraph +2. 
[Manifest du Subgraph](/subgraphs/developing/creating/subgraph-manifest/) - Comprendre la composante clé d'un subgraph 3. [Le schéma GraphQL](/subgraphs/developing/creating/ql-schema/) - Écrivez votre schéma 4. [Écrire les mappings AssemblyScript](/subgraphs/developing/creating/assemblyscript-mappings/) - Rédigez vos mappings -5. [Fonctionnalités avancées](/subgraphs/developing/creating/advanced/) - Personnalisez votre subgraphs avec des fonctionnalités avancées +5. [Fonctionnalités avancées](/subgraphs/developing/creating/advanced/) - Personnalisez votre subgraph avec des fonctionnalités avancées Explorez d'autres [ressources pour les API](/subgraphs/developing/creating/graph-ts/README/) et effectuez des tests en local avec [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Notes de version | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supporte la fonctionnalité [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) pour élaguer les subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx index f3b29bd0de75..efe673d5eefd 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Manifeste de Subgraph ## Aperçu -Le manifeste du subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -La **définition du subgraph** se compose des fichiers suivants : +The **Subgraph definition** consists of the following files: -- `subgraph.yaml` : Contient le manifeste du subgraph +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre subgraph et comment les interroger via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts` : [Mappage AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code qui traduit les données d'événements en entités définies dans votre schéma (par exemple `mapping.ts` dans ce guide) ### Capacités des subgraphs -Un seul subgraph peut : +Un seul Subgraph peut : - Indexer les données de plusieurs contrats intelligents (mais pas de plusieurs réseaux). @@ -24,102 +24,102 @@ Un seul subgraph peut : - Ajouter une entrée pour chaque contrat nécessitant une indexation dans le tableau `dataSources`. 
-La spécification complète des manifestes de subgraphs est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +La spécification complète des manifestes de Subgraphs est disponible [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -Pour l'exemple de subgraph cité ci-dessus, `subgraph.yaml` est : +Pour l'exemple de Subgraph cité ci-dessus, `subgraph.yaml` est : ```yaml -version spec : 0.0.4 -description : Gravatar pour Ethereum -référentiel : https://github.com/graphprotocol/graph-tooling -schéma: - fichier : ./schema.graphql -indexeurConseils : - tailler : automatique -les sources de données: - - genre : ethereum/contrat - nom: Gravité - réseau : réseau principal - source: - adresse : '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' - abi : Gravité - bloc de démarrage : 6175244 - bloc de fin : 7175245 - contexte: - foo : - tapez : Booléen - données : vrai - bar: - tapez : chaîne - données : 'barre' - cartographie : - genre : ethereum/événements - Version api : 0.0.6 - langage : wasm/assemblyscript - entités : - -Gravatar - abis : - - nom : Gravité - fichier : ./abis/Gravity.json - Gestionnaires d'événements : - - événement : NewGravatar(uint256,adresse,chaîne,chaîne) - gestionnaire : handleNewGravatar - - événement : UpdatedGravatar (uint256, adresse, chaîne, chaîne) - gestionnaire : handleUpdatedGravatar - Gestionnaires d'appels : - - fonction : createGravatar(string,string) - gestionnaire : handleCreateGravatar - gestionnaires de blocs : - - gestionnaire : handleBlock - - gestionnaire : handleBlockWithCall - filtre: - genre : appeler - fichier : ./src/mapping.ts +specVersion: 1.3.0 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + 
abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts ``` ## Entrées de subgraphs -> Remarque importante : veillez à remplir le manifeste de votre subgraph avec tous les gestionnaires et [entités](/subgraphs/developing/creating/ql-schema/). +> Remarque importante : veillez à remplir votre manifeste de Subgraph avec tous les gestionnaires et [entités](/subgraphs/developing/creating/ql-schema/). Les entrées importantes à mettre à jour pour le manifeste sont : -- `specVersion` : une version de semver qui identifie la structure du manifeste et les fonctionnalités supportées pour le subgraph. La dernière version est `1.2.0`. Voir la section [versions de specVersion](#specversion-releases) pour plus de détails sur les fonctionnalités et les versions. +- `specVersion` : une version du semver qui identifie la structure du manifeste et les fonctionnalités supportées pour le Subgraph. La dernière version est `1.3.0`. Voir la section [specVersion releases](#specversion-releases) pour plus de détails sur les fonctionnalités et les releases. -- `description` : une description lisible par l'homme de ce qu'est le subgraph. Cette description est affichée dans Graph Explorer lorsque le subgraph est déployé dans Subgraph Studio. +- `description` : une description lisible par l'homme de ce qu'est le Subgraph. 
Cette description est affichée dans Graph Explorer lorsque le Subgraph est déployé dans Subgraph Studio. -- `repository` : l'URL du dépôt où le manifeste du subgraph peut être trouvé. Cette URL est également affichée dans Graph Explorer. +- `repository` : l'URL du dépôt où le manifeste du Subgraph peut être trouvé. Cette URL est également affichée dans Graph Explorer. - `features` : une liste de tous les noms de [fonctionnalités](#experimental-features) utilisés. -- `indexerHints.prune` : Définit la conservation des données de blocs historiques pour un subgraph. Voir [prune](#prune) dans la section [indexerHints](#indexer-hints). +- `indexerHints.prune` : Définit la conservation des données de blocs historiques pour un Subgraph. Voir [élaguage](#prune) dans la section [indexerHints](#indexer-hints). -- `dataSources.source` : l'adresse du contrat intelligent dont le subgraph est issu, et l'ABI du contrat intelligent à utiliser. L'adresse est optionnelle ; l'omettre permet d'indexer les événements correspondants de tous les contrats. +- `dataSources.source` : l'adresse du contrat intelligent dont le Subgraph s'inspire, et l'ABI du contrat intelligent à utiliser. L'adresse est optionnelle ; l'omettre permet d'indexer les événements correspondants de tous les contrats. - `dataSources.source.startBlock` : le numéro optionnel du bloc à partir duquel la source de données commence l'indexation. Dans la plupart des cas, nous suggérons d'utiliser le bloc dans lequel le contrat a été créé. - `dataSources.source.endBlock` : Le numéro optionnel du bloc sur lequel la source de données arrête l'indexation, y compris ce bloc. Version minimale de la spécification requise : `0.0.9`. -- `dataSources.context` : paires clé-valeur qui peuvent être utilisées dans les mappages de subgraphs. Supporte différents types de données comme `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. Chaque variable doit spécifier son `type` et ses `données`. 
Ces variables de contexte sont ensuite accessibles dans les fichiers de mappage, offrant plus d'options configurables pour le développement de subgraphs.
+- `dataSources.context` : paires clé-valeur qui peuvent être utilisées dans les mappages de subgraphs. Supporte différents types de données comme `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, et `BigInt`. Chaque variable doit spécifier son `type` et son `data`. Ces variables de contexte sont ensuite accessibles dans les fichiers de mappage, offrant plus d'options configurables pour le développement de Subgraph.

- `dataSources.mapping.entities` : les entités que la source de données écrit dans le store. Le schéma de chaque entité est défini dans le fichier schema.graphql.

- `dataSources.mapping.abis` : un ou plusieurs fichiers ABI nommés pour le contrat source ainsi que pour tous les autres contrats intelligents avec lesquels vous interagissez à partir des mappages.

-- `dataSources.mapping.eventHandlers` : liste les événements du contrat intelligent auxquels ce subgraph réagit et les gestionnaires dans le mappage - ./src/mapping.ts dans l'exemple - qui transforment ces événements en entités dans le store.
+- `dataSources.mapping.eventHandlers` : liste les événements du contrat intelligent auxquels ce Subgraph réagit et les gestionnaires dans le mappage - ./src/mapping.ts dans l'exemple - qui transforment ces événements en entités dans le store.

-- `dataSources.mapping.callHandlers` : liste les fonctions de contrat intelligent auxquelles ce subgraph réagit et les handlers dans le mappage qui transforment les entrées et sorties des appels de fonction en entités dans le store.
+- `dataSources.mapping.callHandlers` : liste les fonctions du contrat intelligent auxquelles ce Subgraph réagit et les handlers dans le mappage qui transforment les entrées et sorties des appels de fonction en entités dans le store.
- `dataSources.mapping.blockHandlers` : liste les blocs auxquels ce subgraph réagit et les gestionnaires du mappage à exécuter lorsqu'un bloc est ajouté à la blockchain. Sans filtre, le gestionnaire de bloc sera exécuté à chaque bloc. Un filtre d'appel optionnel peut être fourni en ajoutant un champ `filter` avec `kind : call` au gestionnaire. Ceci ne lancera le gestionnaire que si le bloc contient au moins un appel au contrat de la source de données. -Un seul subgraph peut indexer des données provenant de plusieurs contrats intelligents. Ajoutez une entrée pour chaque contrat dont les données doivent être indexées dans le tableau `dataSources`. +Un seul Subgraph peut indexer les données de plusieurs contrats intelligents. Ajoutez une entrée pour chaque contrat dont les données doivent être indexées dans le tableau `dataSources`. ## Gestionnaires d'événements -Les gestionnaires d'événements dans un subgraph réagissent à des événements spécifiques émis par des contrats intelligents sur la blockchain et déclenchent des gestionnaires définis dans le manifeste du subgraph. Ceci permet aux subgraphs de traiter et de stocker les données des événements selon une logique définie. +Les gestionnaires d'événements d'un Subgraph réagissent à des événements spécifiques émis par des contrats intelligents sur la blockchain et déclenchent des gestionnaires définis dans le manifeste du Subgraph. Cela permet aux Subgraphs de traiter et de stocker les données d'événements selon une logique définie. ### Définition d'un gestionnaire d'événements -Un gestionnaire d'événements est déclaré dans une source de données dans la configuration YAML du subgraph. Il spécifie quels événements écouter et la fonction correspondante à exécuter lorsque ces événements sont détectés. +Un gestionnaire d'événements est déclaré dans une source de données dans la configuration YAML du Subgraph. 
Il spécifie les événements à écouter et la fonction correspondante à exécuter lorsque ces événements sont détectés. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -144,16 +144,16 @@ dataSources: handler: handleApproval - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtre de rubrique optionnel qui filtre uniquement les événements avec la rubrique spécifiée. + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtre thématique facultatif permettant de filtrer uniquement les événements ayant trait au thème spécifié. ``` ## Gestionnaires d'appels -Si les événements constituent un moyen efficace de collecter les modifications pertinentes de l'état d'un contrat, de nombreux contrats évitent de générer des logs afin d'optimiser les coûts de gaz. Dans ce cas, un subgraph peut s'abonner aux appels faits au contrat de source de données. Pour ce faire, il suffit de définir des gestionnaires d'appels faisant référence à la signature de la fonction et au gestionnaire de mappage qui traitera les appels à cette fonction. Pour traiter ces appels, le gestionnaire de mappage recevra un `ethereum.Call` comme argument avec les entrées et sorties typées de l'appel. Les appels effectués à n'importe quel niveau de la blockchain d'appels d'une transaction déclencheront le mappage, ce qui permettra de capturer l'activité avec le contrat de source de données par le biais de contrats proxy. +Bien que les événements constituent un moyen efficace de collecter les modifications pertinentes de l'état d'un contrat, de nombreux contrats évitent de générer des logs afin d'optimiser les coûts de gaz. 
Dans ce cas, un Subgraph peut s'abonner aux appels faits au contrat de source de données. Pour ce faire, il définit des gestionnaires d'appels référençant la signature de la fonction et le gestionnaire de mappage qui traitera les appels à cette fonction. Pour traiter ces appels, le gestionnaire de mappage recevra un `ethereum.Call` comme argument avec les entrées et sorties typées de l'appel. Les appels effectués à n'importe quel niveau de la chaîne d'appel d'une transaction déclencheront le mappage, ce qui permettra de capturer l'activité avec le contrat de source de données par le biais de contrats proxy.

Les gestionnaires d'appels ne se déclencheront que dans l'un des deux cas suivants : lorsque la fonction spécifiée est appelée par un compte autre que le contrat lui-même ou lorsqu'elle est marquée comme externe dans Solidity et appelée dans le cadre d'une autre fonction du même contrat.

-> **Note:** Les gestionnaires d'appels dépendent actuellement de l'API de traçage de Parité. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un subgraph indexant l'un de ces réseaux contient un ou plusieurs gestionnaires d'appels, il ne commencera pas à se synchroniser. Les développeurs de subgraphs devraient plutôt utiliser des gestionnaires d'événements. Ceux-ci sont bien plus performants que les gestionnaires d'appels et sont pris en charge par tous les réseaux evm.
+> **Note:** Les gestionnaires d'appels dépendent actuellement de l'API de traçage de Parity. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un Subgraph indexant l'un de ces réseaux contient un ou plusieurs gestionnaires d'appels, il ne commencera pas à se synchroniser. Les développeurs de Subgraphs devraient plutôt utiliser des gestionnaires d'événements. Ceux-ci sont bien plus performants que les gestionnaires d'appels et sont pris en charge par tous les réseaux EVM.
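Ces deux conditions de déclenchement peuvent se schématiser en TypeScript ordinaire. Il s'agit d'une simulation purement illustrative, hors graph-node (types `ContractCall` et fonction `callHandlerFires` hypothétiques, non tirés d'une API réelle) :

```typescript
// Simulation hypothétique : un gestionnaire d'appels se déclenche si l'appel
// provient d'un compte autre que le contrat lui-même, ou s'il s'agit d'une
// fonction marquée `external` en Solidity appelée depuis une autre fonction
// du même contrat.
interface ContractCall {
  from: string // adresse de l'appelant
  to: string // adresse du contrat de la source de données
  isExternal: boolean // la fonction est-elle marquée `external` en Solidity ?
}

function callHandlerFires(call: ContractCall): boolean {
  const isSelfCall = call.from.toLowerCase() === call.to.toLowerCase()
  // Cas 1 : appel externe au contrat ; cas 2 : auto-appel d'une fonction `external`.
  return !isSelfCall || call.isExternal
}

console.log(callHandlerFires({ from: '0xAAA', to: '0xBBB', isExternal: false })) // appel externe : se déclenche
console.log(callHandlerFires({ from: '0xBBB', to: '0xBBB', isExternal: false })) // appel interne : ne se déclenche pas
```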
### Définir un gestionnaire d'appels @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ La propriété `function` est la signature de la fonction normalisée pour filtr ### Fonction de cartographie -Chaque gestionnaire d'appel prend un seul paramètre qui a un type correspondant au nom de la fonction appelée. Dans l'exemple du subgraph ci-dessus, le mapping contient un gestionnaire d'appel lorsque la fonction `createGravatar` est appelée et reçoit un paramètre `CreateGravatarCall` en tant qu'argument : +Chaque gestionnaire d'appel prend un seul paramètre qui a un type correspondant au nom de la fonction appelée. Dans l'exemple du Subgraph ci-dessus, le mappage contient un gestionnaire pour l'appel de la fonction `createGravatar` qui reçoit un paramètre `CreateGravatarCall` comme argument : ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ La fonction `handleCreateGravatar` prend un nouveau `CreateGravatarCall` qui est ## Block Handlers -En plus de s'abonner à des événements de contrat ou à des appels de fonction, un subgraph peut souhaiter mettre à jour ses données à mesure que de nouveaux blocs sont ajoutés à la chaîne. Pour y parvenir, un subgraph peut exécuter une fonction après chaque bloc ou après des blocs correspondant à un filtre prédéfini. +Outre l'abonnement à des événements contractuels ou à des appels de fonction, un Subgraph peut vouloir mettre à jour ses données lorsque de nouveaux blocs sont ajoutés à la blockchain. Pour ce faire, un Subgraph peut exécuter une fonction après chaque bloc ou après les blocs qui correspondent à un filtre prédéfini. 
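La sélection des blocs qui déclenchent un gestionnaire peut se résumer par ce petit prédicat (esquisse en TypeScript ordinaire ; hypothèses : sans filtre, chaque bloc déclenche le gestionnaire ; un filtre `polling` se déclenche tous les `every` blocs à partir du bloc de départ ; un filtre `once`, uniquement au premier bloc traité) :

```typescript
// Esquisse : quels blocs déclenchent un gestionnaire de bloc selon son filtre.
type BlockFilter =
  | { kind: 'none' } // aucun filtre : chaque bloc
  | { kind: 'polling'; every: number } // tous les `every` blocs
  | { kind: 'once' } // une seule fois, au premier bloc traité

function blockHandlerFires(blockNumber: number, startBlock: number, filter: BlockFilter): boolean {
  if (blockNumber < startBlock) return false
  switch (filter.kind) {
    case 'none':
      return true
    case 'polling':
      return (blockNumber - startBlock) % filter.every === 0
    case 'once':
      return blockNumber === startBlock
  }
}
```
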
### Filtres pris en charge @@ -218,7 +218,7 @@ filter: _Le gestionnaire défini sera appelé une fois pour chaque bloc qui contient un appel au contrat (source de données) sous lequel le gestionnaire est défini._ -> **Note:** Le filtre `call` dépend actuellement de l'API de traçage de Parité. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un subgraph indexant un de ces réseaux contient un ou plusieurs gestionnaire de bloc avec un filtre `call`, il ne commencera pas à se synchroniser. +> **Note:** Le filtre `call` dépend actuellement de l'API de traçage de Parity. Certains réseaux, tels que BNB chain et Arbitrum, ne supportent pas cette API. Si un subgraph indexant un de ces réseaux contient un ou plusieurs block handlers avec un filtre `call`, il ne commencera pas à se synchroniser. L'absence de filtre pour un gestionnaire de bloc garantira que le gestionnaire est appelé à chaque bloc. Une source de données ne peut contenir qu'un seul gestionnaire de bloc pour chaque type de filtre. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -Le gestionnaire défini sera appelé une fois tous les `n` blocs, où `n` est la valeur fournie dans le champ `every`. Cette configuration permet au subgraph d'effectuer des opérations spécifiques à intervalles réguliers. +Le gestionnaire défini sera appelé une fois tous les `n` blocs, où `n` est la valeur fournie dans le champ `every`. Cette configuration permet au Subgraph d'effectuer des opérations spécifiques à intervalles réguliers. #### Le filtre Once @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Le gestionnaire défini avec le filtre once ne sera appelé qu'une seule fois avant l'exécution de tous les autres gestionnaires. 
Cette configuration permet au subgraph d'utiliser le gestionnaire comme gestionnaire d'initialisation, effectuant des tâches spécifiques au début de l'indexation. +Le gestionnaire défini avec le filtre once ne sera appelé qu'une seule fois avant l'exécution de tous les autres gestionnaires. Cette configuration permet au Subgraph d'utiliser le gestionnaire comme gestionnaire d'initialisation, en exécutant des tâches spécifiques au début de l'indexation. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Fonction de cartographie -La fonction de mappage recevra une `ethereum.Block` comme seul argument. Comme les fonctions de mappage pour les événements, cette fonction peut accéder aux entités de subgraphs existantes dans le store, appeler des contrats intelligents et créer ou mettre à jour des entités. +La fonction de mappage recevra une `ethereum.Block` comme seul argument. Comme les fonctions de mappage pour les événements, cette fonction peut accéder aux entités Subgraph existantes dans le store, appeler des contrats intelligents et créer ou mettre à jour des entités. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ Un événement ne sera déclenché que si la signature et le sujet 0 correspondent A partir de `specVersion` `0.0.5` et `apiVersion` `0.0.7`, les gestionnaires d'événements peuvent avoir accès au reçu de la transaction qui les a émis. -Pour ce faire, les gestionnaires d'événements doivent être déclarés dans le manifeste du subgraph avec la nouvelle clé `receipt : true`, qui est facultative et prend par défaut la valeur false. +Pour ce faire, les gestionnaires d'événements doivent être déclarés dans le manifeste Subgraph avec la nouvelle clé `receipt: true`, qui est facultative et prend par défaut la valeur false.
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -381,7 +381,7 @@ Ensuite, vous ajoutez des _modèles de sources de données_ au manifeste. Ceux-c dataSources: - kind: ethereum/contract name: Factory - # ... other source fields for the main contract ... + # ... d'autres champs sources pour le contrat principal ... templates: - name: Exchange kind: ethereum/contract @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ Il existe des setters et getters comme `setString` et `getString` pour tous les ## Blocs de démarrage -Le `startBlock` est un paramètre optionnel qui vous permet de définir à partir de quel bloc de la chaîne la source de données commencera l'indexation. Définir le bloc de départ permet à la source de données de sauter potentiellement des millions de blocs qui ne sont pas pertinents. En règle générale, un développeur de subgraphs définira `startBlock` au bloc dans lequel le contrat intelligent de la source de données a été créé. +Le `startBlock` est un paramètre optionnel qui vous permet de définir à partir de quel bloc de la chaîne la source de données commencera l'indexation. La définition du bloc de départ permet à la source de données de sauter des millions de blocs potentiellement non pertinents. Typiquement, un développeur de Subgraph définira `startBlock` au bloc dans lequel le contrat intelligent de la source de données a été créé. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Conseils pour l'indexeur -Le paramètre `indexerHints` dans le manifeste d'un subgraph fournit des directives aux Indexeurs sur le traitement et la gestion d'un subgraph. Il influence les décisions opérationnelles concernant le traitement des données, les stratégies d'indexation et les optimisations. Actuellement, il propose l'option `prune` pour gérer la rétention ou suppression des données historiques. +Le paramètre `indexerHints` dans le manifeste d'un Subgraph fournit des directives aux Indexeurs sur le traitement et la gestion d'un Subgraph. Il influence les décisions opérationnelles concernant le traitement des données, les stratégies d'indexation et les optimisations. Actuellement, il comporte l'option `prune` pour gérer la rétention ou l'élagage des données historiques. > Cette fonctionnalité est disponible à partir de `specVersion : 1.0.0` ### Prune -`indexerHints.prune` : Définit la rétention des données de blocs historiques pour un subgraph. Les options sont les suivantes : +`indexerHints.prune` : Définit la conservation des données de blocs historiques pour un Subgraph. Les options comprennent : 1. `"never"`: Aucune suppression des données historiques ; conserve l'ensemble de l'historique. 2. `"auto"`: Conserve l'historique minimum nécessaire tel que défini par l'Indexeur, optimisant ainsi les performances de la requête. @@ -505,19 +505,19 @@ Le paramètre `indexerHints` dans le manifeste d'un subgraph fournit des directi prune: auto ``` -> Le terme "historique" dans ce contexte des subgraphs concerne le stockage des données qui reflètent les anciens états des entités mutables. 
+> Dans le contexte des Subgraphs, le terme "historique" désigne le stockage de données reflétant les anciens états d'entités mutables. L'historique à partir d'un bloc donné est requis pour : -- Les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), qui permettent d'interroger les états passés de ces entités à des moments précis de l'histoire du subgraph -- Utilisation du subgraph comme [base de greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) dans un autre subgraph, à ce bloc -- Rembobiner le subgraph jusqu'à ce bloc +- Les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), qui permettent d'interroger les états passés de ces entités à des moments précis de l'histoire du Subgraph +- Utiliser le Subgraph comme [base de greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) dans un autre Subgraph, au niveau de ce bloc +- Remonter le Subgraph jusqu'à ce bloc Si les données historiques à partir du bloc ont été purgées, les capacités ci-dessus ne seront pas disponibles. > L'utilisation de `"auto"` est généralement recommandée car elle maximise les performances des requêtes et est suffisante pour la plupart des utilisateurs qui n'ont pas besoin d'accéder à des données historiques étendues. -Pour les subgraphs exploitant les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), il est conseillé de définir un nombre spécifique de blocs pour la conservation des données historiques ou d'utiliser `prune: never` pour conserver tous les états d'entité historiques.
Vous trouverez ci-dessous des exemples de configuration des deux options dans les paramètres de votre subgraphs : +Pour les Subgraphs utilisant les [requêtes chronologiques](/subgraphs/querying/graphql-api/#time-travel-queries), il est conseillé de définir un nombre spécifique de blocs pour la conservation des données historiques ou d'utiliser `prune: never` pour conserver tous les états historiques de l'entité. Vous trouverez ci-dessous des exemples de configuration de ces deux options dans les paramètres de votre Subgraph : Pour conserver une quantité spécifique de données historiques : @@ -532,3 +532,18 @@ Préserver l'histoire complète des États de l'entité : indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Notes de version | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Ajout de la prise en charge de la [Composition de Subgraphs](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Ajout de la prise en charge pour le [Filtrage des arguments indexés](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & les `eth_call` déclarés | +| 1.1.0 | Prend en charge les [Séries Chronologiques & Agrégations](/developing/creating/advanced/#timeseries-and-aggregations). Ajout de la prise en charge du type `Int8` pour `id`. | +| 1.0.0 | Supporte la fonctionnalité [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) pour élaguer les Subgraphs | +| 0.0.9 | Prise en charge de la fonctionnalité `endBlock` | +| 0.0.8 | Ajout de la prise en charge de l'interrogation des [Gestionnaires de blocs](/developing/creating/subgraph-manifest/#polling-filter) et des [Gestionnaires d'initialisation](/developing/creating/subgraph-manifest/#once-filter).
| +| 0.0.7 | Ajout de la prise en charge des [fichiers sources de données](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Prend en charge la variante de calcul rapide de la [Preuve d'indexation](/indexing/overview/#what-is-a-proof-of-indexing-poi). | +| 0.0.5 | Ajout de la prise en charge des gestionnaires d'événements ayant accès aux reçus de transaction. | +| 0.0.4 | Ajout de la prise en charge de la gestion des fonctionnalités de subgraph. | diff --git a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx index 4ba4ab8d4111..44f1d8adb180 100644 --- a/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/fr/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Cadre pour les tests unitaires --- -Apprenez à utiliser Matchstick, un framework de test unitaire développé par [LimeChain](https://limechain.tech/). Matchstick permet aux développeurs de subgraphs de tester leur logique de mappages dans un environnement sandbox et de déployer avec succès leurs subgraphs. +Apprenez à utiliser Matchstick, un cadre de test unitaire développé par [LimeChain](https://limechain.tech/). Matchstick permet aux développeurs de subgraphs de tester leur logique de mappages dans un environnement sandbox et de déployer avec succès leurs subgraphs. ## Avantages de l'utilisation de Matchstick - Il est écrit en Rust et optimisé pour des hautes performances. -- Il vous donne accès à des fonctionnalités pour développeurs, y compris la possibilité de simuler des appels de contrat, de faire des assertions sur l'état du store, de surveiller les échecs de subgraph, de vérifier les performances des tests, et bien plus encore.
+- Il vous donne accès à des fonctions de développement, notamment la possibilité de simuler des appels de contrat, de faire des assertions sur l'état du store, de surveiller les échecs du subgraph, de vérifier les performances des tests, et bien d'autres choses encore. ## Introduction @@ -87,7 +87,7 @@ Et enfin, n'utilisez pas `graph test` (qui utilise votre installation globale de ### En utilisant Matchstick -Pour utiliser **Matchstick** dans votre projet de ssubgraph, ouvrez un terminal, naviguez jusqu'au dossier racine de votre projet et exécutez simplement `graph test [options] ` - il télécharge le dernier binaire **Matchstick** et exécute le test spécifié ou tous les tests dans un dossier de test (ou tous les tests existants si aucun flag de source de données n'est spécifié). +Pour utiliser **Matchstick** dans votre projet Subgraph, ouvrez un terminal, naviguez jusqu'au dossier racine de votre projet et lancez simplement `graph test [options] ` - il télécharge le dernier binaire **Matchstick** et exécute le test spécifié ou tous les tests dans un dossier de test (ou tous les tests existants si aucun flag de source de données n'est spécifié). ### CLI options @@ -112,13 +112,13 @@ graph test path/to/file.test.ts **Options:** ```sh --c, --coverage Exécuter les tests en mode couverture --d, --docker Exécuter les tests dans un conteneur docker (Note : Veuillez exécuter à partir du dossier racine du subgraph) --f, --force Binaire : Retélécharge le binaire. Docker : Retélécharge le fichier Docker et reconstruit l'image Docker. --h, --help Affiche les informations d'utilisation --l, --logs Enregistre dans la console des informations sur le système d'exploitation, le modèle de processeur et l'URL de téléchargement (à des fins de débogage).
--r, --recompile Force les tests à être recompilés --v, --version Choisissez la version du binaire rust que vous souhaitez télécharger/utiliser +-c, --coverage Exécute les tests en mode couverture +-d, --docker Exécute les tests dans un conteneur Docker (Note : Exécute à partir du dossier racine du subgraph). +-f, --force Binaire : Retélécharge le binaire. Docker : Retélécharge le fichier Docker et reconstruit l'image Docker. +-h, --help Affiche les informations sur l'utilisation +-l, --logs Enregistre dans la console des informations sur le système d'exploitation, le modèle de processeur et l'adresse de téléchargement (à des fins de débogage). +-r, --recompile Oblige à recompiler les tests +-v, --version Choisit la version du binaire rust que vous souhaitez télécharger/utiliser ``` ### Docker @@ -145,13 +145,13 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Subgraph démonstration +### Démo de Subgraph Vous pouvez essayer et jouer avec les exemples de ce guide en clonant le [dépôt du Demo Subgraph.](https://github.com/LimeChain/demo-subgraph) ### Tutoriels vidéos -Vous pouvez également consulter la série de vidéos sur [" Comment utiliser Matchstick pour écrire des tests unitaires pour vos subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Vous pouvez également consulter la série de vidéos sur ["Comment utiliser Matchstick pour écrire des tests unitaires pour vos subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Structure des tests @@ -662,7 +662,7 @@ Cela fait beaucoup à décortiquer ! Tout d'abord, une chose importante à noter Et voilà, nous avons formulé notre premier test !
👏 -Maintenant, afin d'exécuter nos tests, il suffit d'exécuter ce qui suit dans le dossier racine de votre subgraph : +Maintenant, pour exécuter nos tests, il vous suffit d'exécuter ce qui suit dans le dossier racine de votre Subgraph : `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Les utilisateurs peuvent simuler des fichiers IPFS en utilisant la fonction `mockIpfsFile(hash, filePath)`. La fonction accepte deux arguments, le premier étant le hash/chemin du fichier IPFS et le second le chemin d'un fichier local. -NOTE : Lorsque l'on teste `ipfs.map/ipfs.mapJSON`, la fonction callback doit être exportée depuis le fichier de test afin que matchstck la détecte, comme la fonction `processGravatar()` dans l'exemple de test ci-dessous : +NOTE : Lorsque l'on teste `ipfs.map/ipfs.mapJSON`, la fonction callback doit être exportée depuis le fichier de test afin que Matchstick la détecte, comme la fonction `processGravatar()` dans l'exemple de test ci-dessous : Fichier `.test.ts` : @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Exporter le callback ipfs.map() pour que matchstck le détecte +// Exporter le callback ipfs.map() pour qu'il soit détecté par matchstick export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1164,14 +1164,14 @@ De même que pour les sources de données dynamiques de contrat, les utilisateurs ##### Exemple `subgraph.yaml` ```yaml ---- +...
templates: - - kind: file/ipfs + - kind: file/ipfs name: GraphTokenLockMetadata network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1216,8 +1216,8 @@ type TokenLockMetadata @entity { ##### Exemple de gestionnaire ```typescript -export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() renvoie le CID du fichier de la source de données +export function handleMetadata(content: Bytes): void { + // dataSource.stringParams() renvoie le CID du fichier de la source de données // stringParam() sera simulé dans le test du gestionnaire // pour plus d'informations https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) @@ -1289,11 +1289,11 @@ test('exemple de création d'une dataSource file/ipfs', () => { ## Couverture de test -En utilisant **Matchstick**, les développeurs de subgraphs peuvent exécuter un script qui calculera la couverture des tests unitaires écrits. +En utilisant **Matchstick**, les développeurs de Subgraph peuvent exécuter un script qui calculera la couverture des tests unitaires écrits. L'outil de couverture des tests prend les binaires de test compilés `wasm` et les convertit en fichiers `wat`, qui peuvent alors être facilement inspectés pour voir si les gestionnaires définis dans `subgraph.yaml` ont été appelés ou non. Comme la couverture du code (et les tests dans leur ensemble) n'en est qu'à ses débuts en AssemblyScript et WebAssembly, **Matchstick** ne peut pas vérifier la couverture des branches. Au lieu de cela, nous nous appuyons sur l'affirmation que si un gestionnaire donné a été appelé, l'événement/la fonction correspondant(e) a été correctement simulé(e).
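Le principe du rapport de couverture peut se résumer ainsi (esquisse en TypeScript ordinaire ; les noms de gestionnaires sont purement illustratifs, il ne s'agit pas du code de Matchstick) :

```typescript
// Esquisse : rapport entre gestionnaires déclarés dans subgraph.yaml et
// gestionnaires effectivement rencontrés dans les binaires de test.
function handlerCoverage(declared: string[], called: Set<string>): number {
  if (declared.length === 0) return 100
  const covered = declared.filter((h) => called.has(h)).length
  return Math.round((100 * covered) / declared.length)
}

// Exemple illustratif : 1 gestionnaire couvert sur 2 => 50 %
handlerCoverage(['handleNewGravatar', 'handleUpdatedGravatar'], new Set(['handleNewGravatar']))
```
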
-### Prerequisites +### Prérequis Pour utiliser la fonctionnalité de couverture des tests fournie dans **Matchstick**, il y a quelques éléments à préparer à l'avance : @@ -1395,7 +1395,7 @@ La non-concordance des arguments est causée par la non-concordance de `graph-ts ## Ressources supplémentaires -Pour toute aide supplémentaire, consultez cette [démo de subgraph utilisant Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +Pour toute aide supplémentaire, consultez ce [repo de démo Subgraph utilisant Matchstick](https://github.com/LimeChain/demo-subgraph#readme). ## Réaction diff --git a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx index a72771045069..2916c6fa07ad 100644 --- a/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/fr/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Déploiement d'un subgraph sur plusieurs réseaux +sidebarTitle: Déploiement sur plusieurs réseaux --- Cette page explique comment déployer un subgraph sur plusieurs réseaux. Pour déployer un subgraph, vous devez d'abord installer [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). Si vous n'avez pas encore créé de subgraph, consultez [Créer un subgraph](/developing/creating-a-subgraph/). -## Déploiement du subgraph sur plusieurs réseaux +## Déployer le Subgraph sur plusieurs réseaux -Dans certains cas, vous souhaiterez déployer le même subgraph sur plusieurs réseaux sans dupliquer tout son code. Le principal défi qui en découle est que les adresses contractuelles sur ces réseaux sont différentes. +Dans certains cas, vous souhaiterez déployer le même Subgraph sur plusieurs réseaux sans dupliquer l'ensemble de son code. La principale difficulté réside dans le fait que les adresses contractuelles de ces réseaux sont différentes.
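L'idée générale peut s'esquisser ainsi (TypeScript ordinaire, noms et adresses purement hypothétiques — il ne s'agit pas du code de `graph-cli`) : une table de correspondance réseau → source de données → adresse évite de dupliquer le Subgraph :

```typescript
// Esquisse : résoudre l'adresse d'un contrat selon le réseau cible.
interface SourceOverride { address: string; startBlock?: number }
type NetworksConfig = Record<string, Record<string, SourceOverride>>

function resolveAddress(config: NetworksConfig, network: string, dataSource: string): string {
  const override = config[network]?.[dataSource]
  if (override === undefined) {
    throw new Error(`Aucune adresse pour ${dataSource} sur ${network}`)
  }
  return override.address
}

// Configuration d'exemple (adresses fictives)
const networks: NetworksConfig = {
  mainnet: { Gravity: { address: '0x1111111111111111111111111111111111111111' } },
  sepolia: { Gravity: { address: '0x2222222222222222222222222222222222222222', startBlock: 123 } },
}
```
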
### En utilisant `graph-cli` @@ -19,7 +20,7 @@ Options: --network-file Chemin du fichier de configuration des réseaux (par défaut : "./networks.json") ``` -Vous pouvez utiliser l'option `--network` pour spécifier une configuration de réseau à partir d'un fichier standard `json` (par défaut networks.json) pour facilement mettre à jour votre subgraph pendant le développement. +Vous pouvez utiliser l'option `--network` pour spécifier une configuration réseau à partir d'un fichier standard `json` (par défaut `networks.json`) pour mettre à jour facilement votre Subgraph pendant le développement. > Note : La commande `init` générera désormais automatiquement un fichier networks.json en se basant sur les informations fournies. Vous pourrez ensuite mettre à jour les réseaux existants ou en ajouter de nouveaux. @@ -53,7 +54,7 @@ Si vous n'avez pas de fichier `networks.json`, vous devrez en créer un manuellement > Note : Vous n'avez besoin de spécifier aucun des `templates` (si vous en avez) dans le fichier de configuration, uniquement les `dataSources`. Si des `templates` sont déclarés dans le fichier `subgraph.yaml`, leur réseau sera automatiquement mis à jour vers celui spécifié avec l'option `--network`. -Supposons maintenant que vous souhaitiez déployer votre subgraph sur les réseaux `mainnet` et `sepolia`, et que ceci est votre fichier subgraph.yaml : +Supposons maintenant que vous souhaitiez déployer votre Subgraph sur les réseaux `mainnet` et `sepolia` ; voici votre `subgraph.yaml` : ```yaml # ... @@ -95,7 +96,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file chemin/à/configurer ``` -La commande `build` mettra à jour votre fichier `subgraph.yaml` avec la configuration `sepolia` puis recompilera le subgraph. Votre fichier `subgraph.yaml` devrait maintenant ressembler à ceci: +La commande `build` va mettre à jour votre `subgraph.yaml` avec la configuration `sepolia` et ensuite recompiler le Subgraph.
Votre fichier `subgraph.yaml` devrait maintenant ressembler à ceci : ```yaml # ... @@ -126,7 +127,7 @@ yarn deploy --network sepolia --network-file chemin/à/configurer Une façon de paramétrer des aspects tels que les adresses de contrat en utilisant des versions plus anciennes de `graph-cli` est de générer des parties de celui-ci avec un système de creation de modèle comme [Mustache](https://mustache.github.io/) ou [Handlebars](https://handlebarsjs.com/). -Pour illustrer cette approche, supposons qu'un subgraph doive être déployé sur le réseau principal (mainnet) et sur Sepolia en utilisant des adresses de contrat différentes. Vous pourriez alors définir deux fichiers de configuration fournissant les adresses pour chaque réseau : +Pour illustrer cette approche, supposons qu'un Subgraph doive être déployé sur le réseau principal et sur Sepolia en utilisant des adresses contractuelles différentes. Vous pourriez alors définir deux fichiers de configuration fournissant les adresses pour chaque réseau : ```json { @@ -178,7 +179,7 @@ Pour générer un manifeste pour l'un ou l'autre réseau, vous pourriez ajouter } ``` -Pour déployer ce subgraph pour mainnet ou Sepolia, vous devez simplement exécuter l'une des deux commandes suivantes : +Pour déployer ce Subgraph sur le Mainnet ou Sepolia, il vous suffit de lancer l'une des deux commandes suivantes : ```sh # Mainnet: @@ -192,25 +193,25 @@ Un exemple fonctionnel de ceci peut être trouvé [ici](https://github.com/graph Note : Cette approche peut également être appliquée à des situations plus complexes, dans lesquelles il est nécessaire de remplacer plus que les adresses des contrats et les noms de réseau ou où il est nécessaire de générer des mappages ou alors des ABI à partir de modèles également. -Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. `synced` vous informe si le subgraph a déjà rattrapé la chaîne. 
`health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. +Cela vous donnera le `chainHeadBlock`, que vous pouvez comparer au `latestBlock` de votre Subgraph pour vérifier s'il est en retard. `synced` indique si le Subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre la valeur `healthy` si aucune erreur ne s'est produite, ou `failed` si une erreur a stoppé la progression du Subgraph. Dans ce cas, vous pouvez consulter le champ `fatalError` pour plus de détails sur cette erreur. -## Politique d'archivage des subgraphs de Subgraph Studio +## Politique d'archivage des Subgraphs dans Subgraph Studio -Une version de subgraph dans Studio est archivée si et seulement si elle répond aux critères suivants : +Une version de Subgraph dans Studio est archivée si et seulement si elle répond aux critères suivants : - La version n'est pas publiée sur le réseau (ou en attente de publication) - La version a été créée il y a 45 jours ou plus -- Le subgraph n'a pas été interrogé depuis 30 jours +- Le Subgraph n'a pas été interrogé depuis 30 jours -De plus, lorsqu'une nouvelle version est déployée, si le subgraph n'a pas été publié, la version N-2 du subgraph est archivée. +En outre, lorsqu'une nouvelle version est déployée, si le Subgraph n'a pas été publié, la version N-2 du Subgraph est archivée. -Chaque subgraph concerné par cette politique dispose d'une option de restauration de la version en question. +Chaque Subgraph concerné par cette politique a la possibilité de rétablir la version en question. -## Vérification de l'état des subgraphs +## Vérification de la santé des Subgraphs -Si un subgraph se synchronise avec succès, c'est un bon signe qu'il continuera à bien fonctionner pour toujours.
Cependant, de nouveaux déclencheurs sur le réseau peuvent amener votre subgraph à rencontrer une condition d'erreur non testée ou il peut commencer à prendre du retard en raison de problèmes de performances ou de problèmes avec les opérateurs de nœuds. +Si un Subgraph se synchronise avec succès, c'est le signe qu'il continuera à fonctionner correctement pour toujours. Toutefois, de nouveaux déclencheurs sur le réseau peuvent entraîner une condition d'erreur non testée dans votre Subgraph ou un retard dû à des problèmes de performance ou à des problèmes avec les opérateurs de nœuds. -Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre subgraph. Sur le service hébergé, il est disponible à l'adresse `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de cet endpoint peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie l'état de la version actuelle d'un subgraph: +Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier l'état de votre subgraph. Sur le service hébergé, il est disponible à `https://api.thegraph.com/index-node/graphql`. Sur un nœud local, il est disponible sur le port `8030/graphql` par défaut. Le schéma complet de ce point d'accès peut être trouvé [ici](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Voici un exemple de requête qui vérifie le statut de la version actuelle d'un subgraph : ```graphql { @@ -237,4 +238,4 @@ Graph Node expose un endpoint GraphQL que vous pouvez interroger pour vérifier } ``` -Cela vous donnera le `chainHeadBlock` que vous pouvez comparer avec le `latestBlock` sur votre subgraph pour vérifier s'il est en retard. `synced` vous informe si le subgraph a déjà rattrapé la chaîne. 
`health` peut actuellement prendre les valeurs de `healthy` si aucune erreur ne s'est produite, ou `failed` s'il y a eu une erreur qui a stoppé la progression du subgraph. Dans ce cas, vous pouvez vérifier le champ `fatalError` pour les détails sur cette erreur. +Cela vous donnera le `chainHeadBlock`, que vous pouvez comparer au `latestBlock` de votre Subgraph pour vérifier s'il est en retard. `synced` indique si le Subgraph a déjà rattrapé la chaîne. `health` peut actuellement prendre la valeur `healthy` si aucune erreur ne s'est produite, ou `failed` si une erreur a stoppé la progression du Subgraph. Dans ce cas, vous pouvez consulter le champ `fatalError` pour plus de détails sur cette erreur. diff --git a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx index f4e354e2bb21..4582f8643eb7 100644 --- a/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/fr/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Déploiement en utilisant Subgraph Studio --- -Apprenez à déployer votre subgraph sur Subgraph Studio. +Apprenez à déployer votre Subgraph dans Subgraph Studio. -> Remarque : lorsque vous déployez un subgraph, vous le transférez vers Subgraph Studio, où vous pourrez le tester. Il est important de se rappeler que le déploiement n'est pas la même chose que la publication. Lorsque vous publiez un subgraph, vous le publiez onchain. +> Note : lorsque vous déployez un Subgraph, vous l'envoyez au Subgraph Studio, où vous pourrez le tester. Il est important de se rappeler que le déploiement n'est pas la même chose que la publication. Lorsque vous publiez un Subgraph, vous le publiez onchain.
## Présentation de Subgraph Studio Dans [Subgraph Studio](https://thegraph.com/studio/), vous pouvez faire ce qui suit: -- Voir une liste des subgraphs que vous avez créés -- Gérer, voir les détails et visualiser l'état d'un subgraph spécifique -- Créez et gérez vos clés API pour des subgraphs spécifiques +- Afficher la liste des Subgraphs que vous avez créés +- Gérer, afficher les détails et visualiser l'état d'un Subgraph spécifique +- Créez et gérez vos clés API pour des Subgraphs spécifiques - Limitez vos clés API à des domaines spécifiques et autorisez uniquement certains Indexers à les utiliser pour effectuer des requêtes -- Créer votre subgraph -- Déployer votre subgraph en utilisant The Graph CLI -- Tester votre subgraph dans l'environnement de test -- Intégrer votre subgraph en staging en utilisant l'URL de requête du développement -- Publier votre subgraph sur The Graph Network +- Créez votre Subgraph +- Déployez votre Subgraph à l'aide de Graph CLI +- Testez votre Subgraph dans l'environnement du terrain de jeu +- Intégrez votre Subgraph dans staging à l'aide de l'URL de requête de développement +- Publier votre Subgraph sur The Graph Network - Gérer votre facturation ## Installer The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Ouvrez [Subgraph Studio](https://thegraph.com/studio/). 2. Connectez votre portefeuille pour vous connecter. - Vous pouvez le faire via MetaMask, Coinbase Wallet, WalletConnect ou Safe. -3. Après vous être connecté, votre clé de déploiement unique sera affichée sur la page des détails de votre subgraph. - - La clé de déploiement vous permet de publier vos subgraphs ou de gérer vos clés d'API et votre facturation. Elle est unique mais peut être régénérée si vous pensez qu'elle a été compromise. +3. Après vous être connecté, votre clé de déploiement unique sera affichée sur la page de détails de votre Subgraph.
+ - La clé de déploiement vous permet de publier vos Subgraphs ou de gérer vos clés API et la facturation. Elle est unique mais peut être régénérée si vous pensez qu'elle a été compromise. -> Important : Vous avez besoin d'une clé API pour interroger les subgraphs +> Important : Vous avez besoin d'une clé API pour interroger les Subgraphs ### Comment créer un subgraph dans Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Compatibilité des subgraphs avec le réseau de The Graph -Pour être pris en charge par les Indexeurs sur The Graph Network, les subgraphs doivent : - -- Indexer un [réseau pris en charge](/supported-networks/) -- Ne doit utiliser aucune des fonctionnalités suivantes : - - ipfs.cat & ipfs.map - - Erreurs non fatales - - La greffe +Pour être pris en charge par les Indexeurs sur The Graph Network, les Subgraphs doivent indexer un [réseau pris en charge](/supported-networks/). Pour une liste complète des fonctionnalités supportées et non supportées, consultez la [Matrice de prise en charge des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Initialisez votre Subgraph -Une fois que votre subgraph a été créé dans Subgraph Studio, vous pouvez initialiser son code via la CLI en utilisant cette commande : +Une fois que votre Subgraph a été créé dans Subgraph Studio, vous pouvez initialiser son code via la CLI à l'aide de cette commande :
Cela générera un nouveau dossier sur votre machine locale avec quelques codes de base pour commencer à travailler sur votre subgraph. Vous pouvez ensuite finaliser votre subgraph pour vous assurer qu'il fonctionne comme prévu. +Après avoir lancé `graph init`, il vous sera demandé d'entrer l'adresse du contrat, le réseau, et un ABI que vous souhaitez interroger. Cela générera un nouveau dossier sur votre machine locale avec du code de base pour commencer à travailler sur votre Subgraph. Vous pouvez ensuite finaliser votre Subgraph pour vous assurer qu'il fonctionne comme prévu. ## Authentification The Graph -Avant de pouvoir déployer votre subgraph sur Subgraph Studio, vous devez vous connecter à votre compte via la CLI. Pour le faire, vous aurez besoin de votre clé de déploiement, que vous pouvez trouver sur la page des détails de votre subgraph. +Avant de pouvoir déployer votre Subgraph dans le Subgraph Studio, vous devez vous connecter à votre compte dans la CLI. Pour ce faire, vous aurez besoin de votre clé de déploiement, que vous trouverez sur la page des détails de votre Subgraph. Ensuite, utilisez la commande suivante pour vous authentifier depuis la CLI : @@ -91,11 +85,11 @@ graph auth ## Déploiement d'un Subgraph -Une fois prêt, vous pouvez déployer votre subgraph sur Subgraph Studio. +Une fois que vous êtes prêt, vous pouvez déployer votre Subgraph dans Subgraph Studio. -> Déployer un subgraph avec la CLI le pousse vers le Studio, où vous pouvez le tester et mettre à jour les métadonnées. Cette action ne publiera pas votre subgraph sur le réseau décentralisé. +> Le déploiement d'un Subgraph à l'aide de la CLI le transfère dans le Studio, où vous pouvez le tester et mettre à jour les métadonnées. Cette action ne publie pas votre Subgraph sur le réseau décentralisé. 
-Utilisez la commande CLI suivante pour déployer votre subgraph : +Utilisez la commande CLI suivante pour déployer votre Subgraph : ```bash graph deploy @@ -108,30 +102,30 @@ Après avoir exécuté cette commande, la CLI demandera une étiquette de versio ## Tester votre Subgraph -Après le déploiement, vous pouvez tester votre subgraph (soit dans Subgraph Studio, soit dans votre propre application, avec l'URL de requête du déploiement), déployer une autre version, mettre à jour les métadonnées, et publier sur [Graph Explorer](https://thegraph.com/explorer) lorsque vous êtes prêt. +Après le déploiement, vous pouvez tester votre Subgraph (soit dans Subgraph Studio, soit dans votre propre application, avec l'URL de requête de déploiement), déployer une autre version, mettre à jour les métadonnées et publier sur [Graph Explorer](https://thegraph.com/explorer) lorsque vous êtes prêt. -Utilisez Subgraph Studio pour vérifier les journaux (logs) sur le tableau de bord et rechercher les erreurs éventuelles de votre subgraph. +Utilisez Subgraph Studio pour vérifier les journaux du tableau de bord et rechercher les erreurs éventuelles de votre Subgraph. ## Publiez votre subgraph -Afin de publier votre subgraph avec succès, consultez [publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +Pour publier votre Subgraph avec succès, consultez [publier un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versionning de votre subgraph avec le CLI -Si vous souhaitez mettre à jour votre subgraph, vous pouvez faire ce qui suit : +Si vous souhaitez mettre à jour votre Subgraph, vous pouvez procéder comme suit : - Vous pouvez déployer une nouvelle version dans Studio en utilisant la CLI (cette version sera privée à ce stade). - Une fois que vous en êtes satisfait, vous pouvez publier votre nouveau déploiement sur [Graph Explorer](https://thegraph.com/explorer). 
-- Cette action créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler et que les Indexeurs pourront indexer. +- Cette action créera une nouvelle version de votre Subgraph que les Curateurs pourront commencer à signaler et que les Indexeurs pourront indexer. -Vous pouvez également mettre à jour les métadonnées de votre subgraph sans publier de nouvelle version. Vous pouvez mettre à jour les détails de votre subgraph dans Studio (sous la photo de profil, le nom, la description, etc.) en cochant une option appelée **Mettre à jour les détails** dans [Graph Explorer](https://thegraph.com/explorer). Si cette option est cochée, une transaction onchain sera générée qui mettra à jour les détails du subgraph dans Explorer sans avoir à publier une nouvelle version avec un nouveau déploiement. +Vous pouvez également mettre à jour les métadonnées de votre Subgraph sans en publier une nouvelle version. Vous pouvez mettre à jour les détails de votre Subgraph dans Studio (sous l'image de profil, le nom, la description, etc.) en cochant une option appelée **Mettre à jour les détails** dans [Graph Explorer](https://thegraph.com/explorer). Si cette option est cochée, une transaction onchain sera générée pour mettre à jour les détails du Subgraph dans l'explorateur sans avoir à publier une nouvelle version avec un nouveau déploiement. -> Remarque : la publication d'une nouvelle version d'un subgraph sur le réseau entraîne des coûts. En plus des frais de transaction, vous devez également financer une partie de la taxe de curation sur le signal de migration automatique. Vous ne pouvez pas publier une nouvelle version de votre subgraph si les Curateurs ne l'ont pas signalé. Pour plus d'informations, veuillez lire la suite [ici](/resources/roles/curating/). +> Remarque : la publication d'une nouvelle version d'un Subgraph sur le réseau entraîne des coûts.
Outre les frais de transaction, vous devez également financer une partie de la taxe de curation sur le signal de migration automatique. Vous ne pouvez pas publier une nouvelle version de votre Subgraph si les Curateurs ne l'ont pas signalé. Pour plus d'informations, veuillez lire [ici](/resources/roles/curating/). ## Archivage automatique des versions de subgraphs -Chaque fois que vous déployez une nouvelle version de subgraph dans Subgraph Studio, la version précédente sera archivée. Les versions archivées ne seront pas indexées/synchronisées et ne pourront donc pas être interrogées. Vous pouvez désarchiver une version de votre subgraph dans Subgraph Studio. +Chaque fois que vous déployez une nouvelle version de Subgraph dans Subgraph Studio, la version précédente est archivée. Les versions archivées ne seront pas indexées/synchronisées et ne pourront donc pas être interrogées. Vous pouvez désarchiver une version archivée de votre Subgraph dans Subgraph Studio. -> Remarque : les versions précédentes des subgraphs non publiés mais déployés dans Studio seront automatiquement archivées. +> Remarque : les versions précédentes des Subgraphs non publiés mais déployés dans Studio seront automatiquement archivées. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/fr/subgraphs/developing/developer-faq.mdx b/website/src/pages/fr/subgraphs/developing/developer-faq.mdx index e2bb16ce90af..bb34b94566de 100644 --- a/website/src/pages/fr/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/fr/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ Cette page résume certaines des questions les plus courantes pour les développ ## Relatif aux Subgraphs -### 1. Qu'est-ce qu'un subgraph ? +### 1. Qu'est-ce qu'un Subgraph ? -Un subgraph est une API personnalisée construite sur des données blockchain. Les subgraphs sont interrogés en utilisant le langage de requête GraphQL et sont déployés sur Graph Node en utilisant Graph CLI.
Une fois déployés et publiés sur le réseau décentralisé de The Graph, les Indexeurs traitent les subgraphs et les rendent disponibles pour que les consommateurs de subgraphs puissent les interroger. +Un Subgraph est une API personnalisée construite sur les données de la blockchain. Les Subgraphs sont interrogés à l'aide du langage de requête GraphQL et sont déployés dans un Graph Node à l'aide de l'interface CLI de The Graph. Une fois déployés et publiés sur le réseau décentralisé de The Graph, les Indexeurs traitent les Subgraphs et les mettent à la disposition des consommateurs de Subgraphs pour qu'ils les interrogent. -### 2. Quelle est la première étape pour créer un subgraph ? +### 2. Quelle est la première étape pour créer un Subgraph ? -Pour créer un subgraph avec succès, vous devez installer Graph CLI. Consultez le [Démarrage rapide](/subgraphs/quick-start/) pour commencer. Pour des informations détaillées, consultez [Création d'un subgraph](/developing/creating-a-subgraph/). +Pour créer un Subgraph avec succès, vous devez installer Graph CLI. Consultez le [Démarrage rapide](/subgraphs/quick-start/) pour commencer. Pour des informations plus détaillées, voir [Créer un Subgraph](/developing/creating-a-subgraph/). -### 3. Suis-je toujours en mesure de créer un subgraph si mes smart contracts n'ont pas d'événements ? +### 3. Puis-je créer un Subgraph si mes contrats intelligents n'ont pas d'événements ? -Il est fortement recommandé de structurer vos smart contracts pour avoir des événements associés aux données que vous souhaitez interroger. Les gestionnaires d'événements du subgraph sont déclenchés par des événements de contrat et constituent le moyen le plus rapide de récupérer des données utiles. +Il est fortement recommandé de structurer vos contrats intelligents pour avoir des événements associés aux données que vous souhaitez interroger.
Les gestionnaires d'événements du Subgraph sont déclenchés par les événements du contrat et constituent le moyen le plus rapide de récupérer des données utiles. -Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Cependant, ceci n'est pas recommandé, car les performances seront nettement plus lentes. +Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre Subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Cette méthode n'est toutefois pas recommandée, car elle ralentit considérablement les performances. -### 4. Puis-je modifier le compte GitHub associé à mon subgraph ? +### 4. Puis-je changer le compte GitHub associé à mon Subgraph ? -Non. Une fois un subgraph créé, le compte GitHub associé ne peut pas être modifié. Veuillez vous assurer de bien prendre en compte ce détail avant de créer votre subgraph. +Non. Une fois qu'un Subgraph est créé, le compte GitHub associé ne peut pas être modifié. Veillez à bien prendre en compte ce point avant de créer votre Subgraph. -### 5. Comment mettre à jour un subgraph sur le mainnet ? +### 5. Comment mettre à jour un Subgraph sur le réseau principal ? -Vous pouvez déployer une nouvelle version de votre subgraph sur Subgraph Studio en utilisant la CLI. Cette action maintient votre subgraph privé, mais une fois que vous en êtes satisfait, vous pouvez le publier sur Graph Explorer. Cela créera une nouvelle version de votre subgraph sur laquelle les Curateurs pourront commencer à signaler. +Vous pouvez déployer une nouvelle version de votre Subgraph dans Subgraph Studio à l'aide de la CLI. Cette action maintient votre Subgraph privé, mais une fois que vous en êtes satisfait, vous pouvez le publier dans Graph Explorer.
Cela créera une nouvelle version de votre Subgraph sur laquelle les Curateurs pourront commencer à émettre des signaux. -### 6. Est-il possible de dupliquer un subgraph vers un autre compte ou endpoint sans le redéployer ? +### 6. Est-il possible de dupliquer un Subgraph vers un autre compte ou un autre endpoint sans le redéployer ? -Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. +Vous devez redéployer le Subgraph, mais si l'ID du Subgraph (hash IPFS) ne change pas, il ne sera pas nécessaire de le synchroniser depuis le début. -### 7. Comment puis-je appeler une fonction d'un contrat ou accéder à une variable d'état publique depuis mes mappages de subgraph ? +### 7. Comment appeler une fonction du contrat ou accéder à une variable d'état publique à partir de mes mappages de Subgraphs ? Jetez un œil à l’état `Accès au contrat intelligent` dans la section [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Puis-je importer `ethers.js` ou d'autres bibliothèques JS dans mes mappages de subgraphs ? +### 8. Puis-je importer `ethers.js` ou d'autres bibliothèques JS dans mes mappages de Subgraphs ? Actuellement non, car les mappages sont écrits en AssemblyScript. @@ -45,15 +45,15 @@ Une solution alternative possible serait de stocker des données brutes dans des ### 9. Lorsqu'on écoute plusieurs contrats, est-il possible de sélectionner l'ordre des contrats pour écouter les événements ? -Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. +Dans un Subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, qu'il s'agisse ou non de contrats multiples. ### 10. En quoi les modèles sont-ils différents des sources de données ? 
-Les modèles vous permettent de créer rapidement des sources de données , pendant que votre subgraph est en cours d'indexation. Votre contrat peut générer de nouveaux contrats à mesure que les gens interagissent avec lui. Étant donné que vous connaissez la structure de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous souhaitez les indexer dans un modèle. Lorsqu'ils sont générés, votre subgraph créera une source de données dynamique en fournissant l'adresse du contrat. +Les modèles vous permettent de créer rapidement des sources de données pendant que votre Subgraph est indexé. Votre contrat peut engendrer de nouveaux contrats au fur et à mesure que les gens interagissent avec lui. Puisque vous connaissez la forme de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous voulez les indexer dans un modèle. Lorsqu'ils sont créés, votre Subgraph crée une source de données dynamique en fournissant l'adresse du contrat. Consultez la section "Instanciation d'un modèle de source de données" sur : [Modèles de sources de données](/developing/creating-a-subgraph/#data-source-templates). -### 11. Est-il possible de configurer un subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir lancé `graph init` ? +### 11. Est-il possible de configurer un Subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir lancé `graph init` ? Oui. Dans la commande `graph init` elle-même, vous pouvez ajouter plusieurs sources de données en entrant des contrats l'un après l'autre. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:dernier Si une seule entité est créée pendant l'événement et s'il n'y a rien de mieux disponible, alors le hash de la transaction + l'index du journal seront uniques.
Vous pouvez les obscurcir en les convertissant en Bytes et en les faisant passer par `crypto.keccak256`, mais cela ne les rendra pas plus uniques. -### 15. Puis-je supprimer mon subgraph ? +### 15. Puis-je supprimer mon Subgraph ? -Oui, vous pouvez [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) et [transférer](/subgraphs/developing/managing/transferring-a-subgraph/) votre subgraph. +Oui, vous pouvez [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) et [transférer](/subgraphs/developing/managing/transferring-a-subgraph/) votre Subgraph. ## Relatif au Réseau @@ -110,11 +110,11 @@ Oui. Sepolia prend en charge les gestionnaires de blocs, les gestionnaires d'app Oui. `dataSources.source.startBlock` dans le fichier `subgraph.yaml` spécifie le numéro du bloc à partir duquel la source de données commence l'indexation. Dans la plupart des cas, nous suggérons d'utiliser le bloc où le contrat a été créé : [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. Quels sont quelques conseils pour augmenter les performances d'indexation? Mon subgraph prend beaucoup de temps à se synchroniser +### 20. Quelles sont les astuces pour améliorer la performance de l'indexation ? La synchronisation de mon Subgraph prend beaucoup de temps Oui, vous devriez jeter un coup d'œil à la fonctionnalité optionnelle de bloc de démarrage pour commencer l'indexation à partir du bloc où le contrat a été déployé : [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Existe-t-il un moyen d'interroger directement le subgraph pour déterminer le dernier numéro de bloc qu'il a indexé? +### 21. Existe-t-il un moyen d'interroger directement le Subgraph pour connaître le dernier numéro de bloc qu'il a indexé ? Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" par l'organisation sous laquelle elle est publiée et le nom de votre subgraphe : @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. 
Si mon application décentralisée (dapp) utilise The Graph pour effectuer des requêtes, dois-je écrire ma clé API directement dans le code du frontend ? Et si nous payons les frais de requête pour les utilisateurs – des utilisateurs malveillants pourraient-ils faire augmenter considérablement nos frais de requête ? -Actuellement, l'approche recommandée pour une dapp est d'ajouter la clé au frontend et de l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. +Actuellement, l'approche recommandée pour une dapp est d'ajouter la clé au frontend et de l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_, et à un Subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. ## Divers diff --git a/website/src/pages/fr/subgraphs/developing/introduction.mdx b/website/src/pages/fr/subgraphs/developing/introduction.mdx index 7956855d9d83..5ee0f03573ff 100644 --- a/website/src/pages/fr/subgraphs/developing/introduction.mdx +++ b/website/src/pages/fr/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ En tant que développeur, vous avez besoin de données pour construire et alimen Sur The Graph, vous pouvez : -1. Créer, déployer et publier des subgraphs sur The Graph à l'aide de Graph CLI et de [Subgraph Studio](https://thegraph.com/studio/). -2. Utiliser GraphQL pour interroger des subgraphs existants. +1. Créer, déployer et publier des Subgraphs sur The Graph à l'aide de Graph CLI et de [Subgraph Studio](https://thegraph.com/studio/). +2. Utiliser GraphQL pour interroger les Subgraphs existants.
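La pagination `first`/`skip` évoquée dans la FAQ ci-dessus peut s'esquisser ainsi. Exemple hypothétique : le nom d'entité `someCollection` et la taille de page de 1000 reprennent l'extrait de la FAQ ; la construction des requêtes successives est une simple manipulation de chaînes.

```python
# Esquisse hypothétique : génération des requêtes GraphQL paginées
# first/skip décrites dans la FAQ ci-dessus. L'entité `someCollection`
# est reprise de l'exemple de la FAQ ; la taille de page (1000) aussi.

PAGE_SIZE = 1000

def page_query(skip: int, page_size: int = PAGE_SIZE) -> str:
    """Construit la requête GraphQL d'une page de résultats."""
    return "{ someCollection(first: %d, skip: %d) { id } }" % (page_size, skip)

# Les trois premières pages : skip = 0, 1000, 2000.
queries = [page_query(i * PAGE_SIZE) for i in range(3)]
for q in queries:
    print(q)
```

En pratique, on incrémente `skip` jusqu'à recevoir une page contenant moins de `first` résultats.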
### Qu'est-ce que GraphQL ? -- [GraphQL](https://graphql.org/learn/) est un langage de requête pour les API et un moteur d'exécution permettant d'exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs. +- [GraphQL](https://graphql.org/learn/) est un langage de requête pour les API et un moteur d'exécution permettant d'exécuter ces requêtes avec vos données existantes. The Graph utilise GraphQL pour interroger les Subgraphs. ### Actions des Développeurs -- Interrogez les subgraphs construits par d'autres développeurs dans [The Graph Network](https://thegraph.com/explorer) et intégrez-les dans vos propres dapps. -- Créer des subgraphs personnalisés pour répondre à des besoins de données spécifiques, permettant une meilleure évolutivité et flexibilité pour les autres développeurs. -- Déployer, publier et signaler vos subgraphs au sein de The Graph Network. +- Interrogez les Subgraphs construits par d'autres développeurs dans [The Graph Network](https://thegraph.com/explorer) et intégrez-les dans vos propres dapps. +- Créer des Subgraphs personnalisés pour répondre à des besoins de données spécifiques, ce qui permet d'améliorer l'évolutivité et la flexibilité pour d'autres développeurs. +- Déployez, publiez et signalez vos Subgraphs au sein de The Graph Network. -### Que sont les subgraphs ? +### Qu'est-ce qu'un Subgraph ? -Un subgraph est une API personnalisée construite sur des données blockchain. Il extrait des données d'une blockchain, les traite et les stocke afin qu'elles puissent être facilement interrogées via GraphQL. +Un Subgraph est une API personnalisée construite sur les données de la blockchain. Il extrait les données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. -Consultez la documentation sur les [subgraphs](/subgraphs/developing/subgraphs/) pour en savoir plus.
+Consultez la documentation sur les [Subgraphs](/subgraphs/developing/subgraphs/) pour en savoir plus. diff --git a/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx index c74be2b234dd..480046bd10c8 100644 --- a/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Suppression d'un Subgraph --- -Supprimez votre subgraph en utilisant [Subgraph Studio](https://thegraph.com/studio/). +Supprimez votre Subgraph en utilisant [Subgraph Studio](https://thegraph.com/studio/). -> En supprimant votre subgraph, vous supprimez toutes les versions publiées de The Graph Network, mais il restera visible sur Graph Explorer et Subgraph Studio pour les utilisateurs qui l'ont signalé. +> En supprimant votre Subgraph, vous supprimez toutes les versions publiées de The Graph Network, mais il restera visible sur Graph Explorer et Subgraph Studio pour les utilisateurs qui l'ont signalé. ## Étape par Étape -1. Visitez la page du subgraph sur [Subgraph Studio](https://thegraph.com/studio/). +1. Visitez la page du Subgraph sur [Subgraph Studio](https://thegraph.com/studio/). 2. Cliquez sur les trois points à droite du bouton "publier". -3. Cliquez sur l'option "delete this subgraph": +3. Cliquez sur l'option "supprimer ce subgraph" : ![Delete-subgraph](/img/Delete-subgraph.png) -4. En fonction de l'état du subgraph, différentes options vous seront proposées. +4. En fonction de l'état du Subgraph, différentes options vous seront proposées. - - Si le subgraph n'est pas publié, il suffit de cliquer sur “delete“ et de confirmer. - - Si le subgraph est publié, vous devrez le confirmer sur votre portefeuille avant de pouvoir le supprimer de Studio. Si un subgraph est publié sur plusieurs réseaux, tels que testnet et mainnet, des étapes supplémentaires peuvent être nécessaires.
+ - Si le Subgraph n'est pas publié, il suffit de cliquer sur "supprimer" et de confirmer. + - Si le Subgraph est publié, vous devrez le confirmer dans votre portefeuille avant de pouvoir le supprimer de Studio. Si un Subgraph est publié sur plusieurs réseaux, tels que testnet et mainnet, des étapes supplémentaires peuvent être nécessaires. -> Si le propriétaire du subgraph l'a signalé, les GRT signalés seront renvoyés au propriétaire. +> Si le propriétaire du Subgraph l'a signalé, les GRT signalés seront renvoyés au propriétaire. ### Rappels importants -- Une fois que vous avez supprimé un subgraph, il **n'apparaîtra plus** sur la page d'accueil de Graph Explorer. Toutefois, les utilisateurs qui ont signalé sur ce subgraph pourront toujours le voir sur leurs pages de profil et supprimer leur signal. -- Les curateurs ne seront plus en mesure de signaler le subgraph. -- Les Curateurs qui ont déjà signalé sur le subgraph peuvent retirer leur signal à un prix moyen par action. -- Les subgraphs supprimés afficheront un message d'erreur. +- Une fois que vous avez supprimé un Subgraph, il n'apparaîtra **plus** sur la page d'accueil de Graph Explorer. Cependant, les utilisateurs qui ont émis un signal sur ce Subgraph pourront toujours le voir sur leurs pages de profil et supprimer leur signal. +- Les Curateurs ne pourront plus signaler le Subgraph. +- Les Curateurs ayant déjà signalé le Subgraph pourront retirer leur signal à un prix moyen par part. +- Les Subgraphs supprimés afficheront un message d'erreur.
diff --git a/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx index fe386614b198..197bb29de363 100644 --- a/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transfer d'un Subgraph --- -Les subgraphs publiés sur le réseau décentralisé possèdent un NFT minté à l'adresse qui a publié le subgraph. Le NFT est basé sur la norme ERC721, ce qui facilite les transferts entre comptes sur The Graph Network. +Les Subgraphs publiés sur le réseau décentralisé ont un NFT minté à l'adresse qui a publié le Subgraph. Le NFT est basé sur le standard ERC721, ce qui facilite les transferts entre comptes sur The Graph Network. ## Rappels -- Quiconque possède le NFT contrôle le subgraph. -- Si le propriétaire décide de vendre ou de transférer le NFT, il ne pourra plus éditer ou mettre à jour ce subgraph sur le réseau. -- Vous pouvez facilement déplacer le contrôle d'un subgraph vers un multi-sig. -- Un membre de la communauté peut créer un subgraph au nom d'une DAO. +- Celui qui possède le NFT contrôle le Subgraph. +- Si le propriétaire décide de vendre ou de transférer le NFT, il ne pourra plus modifier ou mettre à jour ce Subgraph sur le réseau. +- Vous pouvez facilement transférer le contrôle d'un Subgraph à un multi-sig. +- Un membre de la communauté peut créer un Subgraph pour le compte d'une DAO.
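Le principe « qui détient le NFT contrôle le Subgraph » peut se schématiser ainsi. Esquisse purement illustrative du modèle de propriété ERC721 décrit ci-dessus : les adresses et identifiants sont fictifs et le code n'a aucun rapport avec les contrats réels du protocole.

```python
# Esquisse illustrative (et uniquement illustrative) du modèle de propriété
# ERC721 décrit ci-dessus : la propriété d'un Subgraph suit le détenteur
# du NFT. Adresses et identifiants fictifs.

class SubgraphNFT:
    """Registre minimal tokenId -> propriétaire, à la manière d'ERC721."""

    def __init__(self) -> None:
        self._owners: dict[int, str] = {}

    def mint(self, token_id: int, publisher: str) -> None:
        # Le NFT est minté à l'adresse qui publie le Subgraph.
        self._owners[token_id] = publisher

    def transfer(self, token_id: int, sender: str, to: str) -> None:
        # Seul le propriétaire courant peut transférer (p. ex. vers un multi-sig).
        if self._owners.get(token_id) != sender:
            raise PermissionError("seul le propriétaire peut transférer")
        self._owners[token_id] = to

    def owner_of(self, token_id: int) -> str:
        return self._owners[token_id]

registry = SubgraphNFT()
registry.mint(1, "0xPublisher")
registry.transfer(1, "0xPublisher", "0xMultisig")
print(registry.owner_of(1))  # 0xMultisig
```

Après le transfert, l'ancien éditeur ne peut plus ni transférer le NFT ni, par analogie, mettre à jour le Subgraph.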
## Voir votre Subgraph en tant que NFT -Pour voir votre subgraph en tant que NFT, vous pouvez visiter une marketplace NFT telle que **OpenSea**: +Pour visualiser votre Subgraph en tant que NFT, vous pouvez visiter une marketplace NFT comme **OpenSea** : ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/adresse-de-votre-portefeuille ## Étape par Étape -Pour transférer la propriété d'un subgraph, procédez comme suit : +Pour transférer la propriété d'un Subgraph, procédez comme suit : 1. Utilisez l'interface utilisateur intégrée dans Subgraph Studio : ![Transfert de propriété de subgraph](/img/subgraph-ownership-transfer-1.png) -2. Choisissez l'adresse vers laquelle vous souhaitez transférer le subgraph : +2. Choisissez l'adresse à laquelle vous souhaitez transférer le Subgraph : ![Transfert de propriété d'un subgraph](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 19a14a1b0eb2..88b91fcd179c 100644 --- a/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/fr/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publication d'un subgraph sur le réseau décentralisé +sidebarTitle: Publier sur le réseau décentralisé --- -Une fois que vous avez [déployé votre sous-graphe dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) et qu'il est prêt à être mis en production, vous pouvez le publier sur le réseau décentralisé. +Une fois que vous avez [déployé votre Subgraph dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) et qu'il est prêt à être mis en production, vous pouvez le publier sur le réseau décentralisé.
-Lorsque vous publiez un subgraph sur le réseau décentralisé, vous le rendez disponible pour : +Lorsque vous publiez un Subgraph sur le réseau décentralisé, vous le rendez disponible pour : - [Curateurs](/resources/roles/curating/) pour commencer la curation. - [Indexeurs](/indexing/overview/) pour commencer à l'indexer. @@ -17,33 +18,33 @@ Consultez la liste des [réseaux pris en charge](/supported-networks/). 1. Accédez au tableau de bord de [Subgraph Studio](https://thegraph.com/studio/) 2. Cliquez sur le bouton **Publish** -3. Votre subgraph est désormais visible dans [Graph Explorer](https://thegraph.com/explorer/). +3. Votre Subgraph sera désormais visible dans [Graph Explorer](https://thegraph.com/explorer/). -Toutes les versions publiées d'un subgraph existant peuvent : +Toutes les versions publiées d'un Subgraph existant peuvent : - Être publié sur Arbitrum One. [En savoir plus sur The Graph Network sur Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Indexer les données sur n'importe lequel des [réseaux pris en charge](/supported-networks/), quel que soit le réseau sur lequel le subgraph a été publié. +- Indexer des données sur n'importe lequel des [réseaux pris en charge](/supported-networks/), quel que soit le réseau sur lequel le Subgraph a été publié. -### Mise à jour des métadonnées d'un subgraph publié +### Mise à jour des métadonnées d'un Subgraph publié -- Après avoir publié votre subgraph sur le réseau décentralisé, vous pouvez mettre à jour les métadonnées à tout moment dans Subgraph Studio. +- Après avoir publié votre Subgraph sur le réseau décentralisé, vous pouvez mettre à jour les métadonnées à tout moment dans Subgraph Studio. - Une fois que vous avez enregistré vos modifications et publié les mises à jour, elles apparaîtront dans Graph Explorer. - Il est important de noter que ce processus ne créera pas une nouvelle version puisque votre déploiement n'a pas changé. 
## Publication à partir de la CLI -Depuis la version 0.73.0, vous pouvez également publier votre subgraph avec [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +Depuis la version 0.73.0, vous pouvez également publier votre Subgraph avec [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Ouvrez le `graph-cli`. 2. Utilisez les commandes suivantes : `graph codegen && graph build` puis `graph publish`. -3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix. +3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre Subgraph finalisé sur le réseau de votre choix. ![cli-ui](/img/cli-ui.png) ### Personnalisation de votre déploiement -Vous pouvez uploader votre build de subgraph sur un nœud IPFS spécifique et personnaliser davantage votre déploiement avec les options suivantes : +Vous pouvez téléverser le build de votre Subgraph sur un nœud IPFS spécifique et personnaliser davantage votre déploiement à l'aide des flags suivants : ``` UTILISATION @@ -61,33 +62,33 @@ FLAGS ``` -## Ajout de signal à votre subgraph +## Ajout de signal à votre Subgraph -Les développeurs peuvent ajouter des signaux GRT à leurs subgraphs pour inciter les Indexeurs à interroger le subgraph. +Les développeurs peuvent ajouter un signal GRT à leurs Subgraphs pour inciter les Indexeurs à interroger le Subgraph. -- Si un subgraph est éligible aux récompenses d'indexation, les Indexeurs qui fournissent une "preuve d'indexation" recevront une récompense en GRT, basée sur la quantité de GRT signalée. +- Si un Subgraph est éligible pour des récompenses d'indexation, les Indexeurs qui fournissent une "preuve d'indexation" recevront une récompense GRT, basée sur la quantité de GRT signalée.
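Les étapes CLI ci-dessus se résument ainsi (commandes reprises du texte, à exécuter à la racine du projet du Subgraph, avec `graph-cli` 0.73.0 ou ultérieur) :

```
# Générer les types à partir du schéma et du manifeste, puis compiler le Subgraph
graph codegen && graph build

# Publier : une fenêtre s'ouvre pour connecter le portefeuille,
# ajouter les métadonnées et choisir le réseau de publication
graph publish
```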
-- Vous pouvez vérifier l'éligibilité de la récompense d'indexation en fonction de l'utilisation des caractéristiques du subgraph [ici](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Vous pouvez vérifier l'éligibilité aux récompenses d'indexation en fonction des fonctionnalités utilisées par le Subgraph [ici](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Les réseaux spécifiques pris en charge peuvent être vérifiés [ici](/supported-networks/). -> Ajouter un signal à un subgraph non éligible aux récompenses n'attirera pas d'Indexeurs supplémentaires. +> L'ajout d'un signal à un Subgraph non éligible aux récompenses n'attirera pas d'Indexeurs supplémentaires. > -> Si votre subgraph est éligible aux récompenses, il est recommandé de curer votre propre subgraph avec au moins 3 000 GRT afin d'attirer des indexeurs supplémentaires pour indexer votre subgraph. +> Si votre Subgraph est éligible aux récompenses, il est recommandé de curer votre propre Subgraph avec au moins 3 000 GRT afin d'attirer des Indexeurs supplémentaires pour indexer votre Subgraph. -Le [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les subgraphs. Cependant, le fait de signaler un GRT sur un subgraph particulier attirera plus d'Indexeurs vers celui-ci. Cette incitation à la création d'Indexeurs supplémentaires par le biais de la curation vise à améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. +Le [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) assure l'indexation de tous les Subgraphs. Cependant, le fait de signaler des GRT sur un Subgraph particulier attirera plus d'Indexeurs vers celui-ci.
Cette incitation à la création d'Indexeurs supplémentaires par le biais de la curation vise à améliorer la qualité de service pour les requêtes en réduisant la latence et en améliorant la disponibilité du réseau. -Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. +Lors du signalement, les Curateurs peuvent décider de signaler une version spécifique du Subgraph ou de signaler en utilisant l'auto-migration. S'ils signalent en utilisant l'auto-migration, les parts d'un Curateur seront toujours mises à jour vers la dernière version publiée par le développeur. S'ils décident de signaler une version spécifique, les parts resteront toujours sur cette version spécifique. -Les Indexeurs peuvent trouver des subgraphs à indexer en fonction des signaux de curation qu'ils voient dans Graph Explorer. +Les Indexeurs peuvent trouver des Subgraphs à indexer sur la base des signaux de curation qu'ils voient dans Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio vous permet d'ajouter des signaux à votre subgraph en ajoutant des GRT au pool de curation de votre subgraph dans la même transaction où il est publié. +Subgraph Studio vous permet d'ajouter un signal à votre Subgraph en ajoutant des GRT au pool de curation de votre Subgraph lors de la même transaction que celle de sa publication. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternativement, vous pouvez ajouter des signaux GRT à un subgraph publié à partir de Graph Explorer. +Vous pouvez également ajouter un signal GRT à un Subgraph publié à partir de Graph Explorer.
![Signal provenant de l'Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/fr/subgraphs/developing/subgraphs.mdx b/website/src/pages/fr/subgraphs/developing/subgraphs.mdx index d042af3b7930..8addd4e2ebda 100644 --- a/website/src/pages/fr/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/fr/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## Qu'est-ce qu'un subgraph ? -Un subgraph est une API ouverte et personnalisée qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. +Un Subgraph est une API ouverte et personnalisée qui extrait des données d'une blockchain, les traite et les stocke de manière à ce qu'elles puissent être facilement interrogées via GraphQL. ### Capacités des subgraphs - **Accès aux données:** Les subgraphs permettent d'interroger et d'indexer les données de la blockchain pour le web3. -- \*\*Les développeurs peuvent créer, déployer et publier des subgraphs sur The Graph Network. Pour commencer, consultez le [Démarrage Rapide](quick-start/) du développeur de subgraphs. -- **Indexation et interrogation:** Une fois qu'un subgraph est indexé, tout le monde peut l'interroger. Explorez et interrogez tous les subgraphs publiés sur le réseau dans [Graph Explorer](https://thegraph.com/explorer). +- \*\*Les développeurs peuvent créer, déployer et publier des Subgraphs sur The Graph Network. Pour commencer, consultez le [Démarrage Rapide](quick-start/) du développeur de Subgraphs. +- **Indexation et interrogation:** Une fois qu'un Subgraph est indexé, tout le monde peut l'interroger. Explorez et interrogez tous les Subgraphs publiés sur le réseau dans [Graph Explorer](https://thegraph.com/explorer).
## À l'intérieur d'un subgraph -Le manifeste du subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. +Le manifeste du Subgraph, `subgraph.yaml`, définit les contrats intelligents et le réseau que votre Subgraph va indexer, les événements de ces contrats auxquels il faut prêter attention, et comment faire correspondre les données d'événements aux entités que Graph Node stocke et permet d'interroger. -La **définition du subgraph** se compose des fichiers suivants : +La **définition du Subgraph** se compose des fichiers suivants : -- `subgraph.yaml` : Contient le manifeste du subgraph +- `subgraph.yaml` : Contient le manifeste du Subgraph -- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre subgraph et comment les interroger via GraphQL +- `schema.graphql` : Un schéma GraphQL définissant les données stockées pour votre Subgraph et comment les interroger via GraphQL - `mapping.ts` : [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code qui traduit les données d'événements en entités définies dans votre schéma -Pour en savoir plus sur chaque composant d'un subgraph, consultez [créer un subgraph](/developing/creating-a-subgraph/). +Pour en savoir plus sur chaque composant du Subgraph, consultez [créer un Subgraph](/developing/creating-a-subgraph/). ## Flux du cycle de vie des subgraphes -Voici un aperçu général du cycle de vie d'un subgraph : +Voici un aperçu général du cycle de vie d'un Subgraph : ![Cycle de vie d'un Subgraph](/img/subgraph-lifecycle.png) ## Développement de subgraphs -1. [Créer un subgraph](/developing/creating-a-subgraph/) -2. [Déployer un subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Tester un subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4.
[Publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signaler sur un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Créer un Subgraph](/developing/creating-a-subgraph/) +2. [Déployer un Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Tester un Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publier un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signaler sur un Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Développement en local -Les meilleurs subgraphs commencent par un environnement de développement local et des tests unitaires. Les développeurs utilisent [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), un outil d'interface de ligne de commande pour construire et déployer des subgraphs sur The Graph. Ils peuvent également utiliser [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) et [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) pour créer des subgraphs robustes. +Les meilleurs Subgraphs commencent par un environnement de développement local et des tests unitaires. Les développeurs utilisent [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), un outil d'interface de ligne de commande pour construire et déployer des Subgraphs sur The Graph. Ils peuvent également utiliser [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) et [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) pour créer des Subgraphs robustes. ### Déployer sur Subgraph Studio -Une fois défini, un subgraph peut être [déployé dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/).
Dans Subgraph Studio, vous pouvez effectuer les opérations suivantes : +Une fois défini, un Subgraph peut être [déployé dans Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). Dans Subgraph Studio, vous pouvez effectuer les opérations suivantes : -- Utiliser son environnement de test pour indexer le subgraph déployé et le mettre à disposition pour évaluation. -- Vérifiez que votre subgraph ne présente aucune erreur d'indexation et qu'il fonctionne comme prévu. +- Utiliser l'environnement d'essai pour indexer le Subgraph déployé et le mettre à disposition pour examen. +- Vérifier que votre Subgraph ne présente aucune erreur d'indexation et qu'il fonctionne comme prévu. ### Publier sur le réseau -Lorsque vous êtes satisfait de votre subgraph, vous pouvez le [publier](/subgraphs/developing/publishing/publishing-a-subgraph/) sur The Graph Network. +Lorsque vous êtes satisfait de votre Subgraph, vous pouvez le [publier](/subgraphs/developing/publishing/publishing-a-subgraph/) sur The Graph Network. -- Il s'agit d'une action onchain, qui enregistre le subgraph et le rend accessible aux Indexeurs. -- Les subgraphs publiés ont un NFT correspondant, qui définit la propriété du subgraph. Vous pouvez [transférer la propriété du subgraph](/subgraphes/developing/managing/transferring-a-subgraph/) en envoyant le NFT. -- Les subgraphs publiés sont associés à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles. +- Il s'agit d'une action onchain, qui enregistre le Subgraph et le rend accessible aux Indexeurs. +- Les Subgraphs publiés ont un NFT correspondant, qui définit la propriété du Subgraph. Vous pouvez [transférer la propriété du Subgraph](/subgraphs/developing/managing/transferring-a-subgraph/) en envoyant le NFT. +- Les Subgraphs publiés sont associés à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles.
### Ajouter un signal de curation pour l'indexation -Les subgraphs publiés ont peu de chances d'être repérés par les Indexeurs s'ils ne sont pas accompagnés d'un signal de curation. Pour encourager l'indexation, vous devez ajouter un signal à votre subgraph. Consultez la signalisation et la [curation](/resources/roles/curating/) sur The Graph. +Les Subgraphs publiés ont peu de chances d'être repérés par les Indexeurs s'ils ne sont pas accompagnés d'un signal de curation. Pour encourager l'indexation, vous devez ajouter un signal à votre Subgraph. En savoir plus sur la signalisation et la [curation](/resources/roles/curating/) sur The Graph. #### Qu'est-ce qu'un signal ? -- Le signal correspond aux GRT verrouillés associé à un subgraph donné. Il indique aux Indexeurs qu'un subgraph donné recevra un volume de requêtes et contribue aux récompenses d'indexation disponibles pour le traiter. -- Les Curateurs tiers peuvent également signaler un subgraph donné s'ils estiment que ce subgraph est susceptible de générer un volume de requêtes. +- Le signal est constitué de GRT verrouillés associés à un Subgraph donné. Il indique aux Indexeurs qu'un Subgraph donné recevra un volume de requêtes et contribue aux récompenses d'indexation disponibles pour le traiter. +- Les Curateurs tiers peuvent également signaler un Subgraph donné s'ils estiment que ce Subgraph est susceptible de générer un volume de requêtes. ### Intérrogation & Développement d'applications Les subgraphs sur The Graph Network reçoivent 100 000 requêtes gratuites par mois, après quoi les développeurs peuvent soit [payer les requêtes avec GRT ou une carte de crédit](/subgraphs/billing/). -En savoir plus sur [l'interrogation des subgraphs](/subgraphs/querying/introduction/). +En savoir plus sur [l'interrogation des Subgraphs](/subgraphs/querying/introduction/).
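À titre d'illustration, une requête GraphQL adressée à un subgraph pourrait ressembler à ceci (le nom d'entité `transfers` et ses champs sont hypothétiques : ils dépendent du `schema.graphql` du subgraph interrogé) :

```graphql
{
  # Les cinq derniers transferts, du plus récent au plus ancien
  # (entité "Transfer" hypothétique, définie dans le schéma du subgraph)
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    value
  }
}
```

La même requête peut être testée dans le playground de Subgraph Studio ou de Graph Explorer avant d'être intégrée à une application.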
### Mise à jour des subgraphs -Pour mettre à jour votre subgraph avec des corrections de bug ou de nouvelles fonctionnalités, lancez une transaction pour le faire pointer vers la nouvelle version. Vous pouvez déployer les nouvelles versions de vos subgraphs dans le [Subgraph Studio](https://thegraph.com/studio/) à des fins de développement et de test. +Pour mettre à jour votre Subgraph avec des corrections de bogues ou de nouvelles fonctionnalités, lancez une transaction pour le faire pointer vers la nouvelle version. Vous pouvez déployer les nouvelles versions de vos Subgraphs dans le [Subgraph Studio](https://thegraph.com/studio/) à des fins de développement et de test. -- Si vous avez sélectionné "migration automatique" lorsque vous avez appliqué le signal, la mise à jour du subgraph migrera tout signal vers la nouvelle version et entraînera une taxe de migration. -- Ce signal de migration devrait inciter les Indexeurs à commencer à indexer la nouvelle version du subgraph, qui devrait donc bientôt pouvoir être consultée. +- Si vous avez sélectionné "migration automatique" lorsque vous avez appliqué le signal, la mise à jour du Subgraph migrera tout signal vers la nouvelle version et entraînera une taxe de migration. +- Ce signal de migration devrait inciter les Indexeurs à commencer à indexer la nouvelle version du Subgraph, qui devrait donc bientôt pouvoir être consultée. ### Suppression et Transfert de Subgraphs -Si vous n'avez plus besoin d'un subgraph publié, vous pouvez le [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) ou le [transférer](/subgraphs/developing/managing/transferring-a-subgraph/). La suppression d'un subgraph renvoie tout les GRT signalés aux [Curateurs](/resources/roles/curating/). +Si vous n'avez plus besoin d'un Subgraph publié, vous pouvez le [supprimer](/subgraphs/developing/managing/deleting-a-subgraph/) ou le [transférer](/subgraphs/developing/managing/transferring-a-subgraph/).
La suppression d'un Subgraph renvoie tous les GRT signalés aux [Curateurs](/resources/roles/curating/). diff --git a/website/src/pages/fr/subgraphs/explorer.mdx b/website/src/pages/fr/subgraphs/explorer.mdx index 324c6b5602b3..7a7cf7e972db 100644 --- a/website/src/pages/fr/subgraphs/explorer.mdx +++ b/website/src/pages/fr/subgraphs/explorer.mdx @@ -2,70 +2,70 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Découvrez le monde des subgraphs et des données de réseau avec [Graph Explorer](https://thegraph.com/explorer). ## Aperçu -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer se compose de plusieurs parties où vous pouvez interagir avec les [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [déléguer](https://thegraph.com/explorer/delegate?chain=arbitrum-one), échanger avec les [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), voir les [informations sur le réseau](https://thegraph.com/explorer/network?chain=arbitrum-one) et accéder à votre profil d'utilisateur. -## Inside Explorer +## À l'intérieur de l'Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). +Vous trouverez ci-dessous une liste de toutes les fonctionnalités clés de Graph Explorer. Pour obtenir une assistance supplémentaire, vous pouvez regarder le [guide vidéo de Graph Explorer](/subgraphs/explorer/#video-guide).
-### Subgraphs Page +### Page des subgraphs -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +Après avoir déployé et publié votre subgraph dans Subgraph Studio, allez sur [Graph Explorer](https://thegraph.com/explorer) et cliquez sur le lien "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" dans la barre de navigation pour accéder à ce qui suit : -- Vos propres subgraphs terminés +- Vos propres subgraphs terminés - Les subgraphs publiés par d'autres -- Le subgraph exact que vous voulez (basé sur la date de création, le montant du signal ou le nom). +- Le subgraph exact que vous souhaitez (sur la base de la date de création, de la quantité de signal ou du nom). -![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Image 1 de l'Explorer](/img/Subgraphs-Explorer-Landing.png) -Lorsque vous cliquez sur un subgraph, vous pourrez faire ce qui suit : +Lorsque vous cliquez sur un subgraph, vous pouvez effectuer les opérations suivantes : - Tester des requêtes dans le l'environnement de test et utiliser les détails du réseau pour prendre des décisions éclairées. -- Signaler des GRT sur votre propre subgraph ou sur les subgraphs des autres pour informer les Indexeurs de son importance et de sa qualité. +- Signalez des GRT sur votre propre subgraph ou sur les subgraphs d'autres personnes afin de sensibiliser les Indexeurs à son importance et à sa qualité. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - Ce point est essentiel, car le fait de signaler un subgraph incite à son indexation, ce qui signifie qu'il finira par apparaître sur le réseau pour répondre aux requêtes.
-![Explorer Image 2](/img/Subgraph-Details.png) +![Image 2 de l'Explorer](/img/Subgraph-Details.png) -Sur la page dédiée de chaque subgraph, vous pouvez faire ce qui suit : +Sur la page dédiée à chaque subgraph, vous pouvez effectuer les opérations suivantes : -- Signal/Un-signal sur les subgraphs +- Signaler/Dé-signaler sur les subgraphs - Afficher plus de détails tels que des graphs, l'ID de déploiement actuel et d'autres métadonnées - Passer d'une version à l'autre pour explorer les itérations passées du subgraph - Interroger les subgraphs via GraphQL - Tester les subgraphs dans le playground -- Afficher les indexeurs qui indexent sur un certain subgraph +- Voir les Indexeurs qui indexent un certain subgraph - Statistiques du subgraph (allocations, conservateurs, etc.) - Afficher l'entité qui a publié le subgraph -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Image 3 de l'Explorer](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Page de Délégué -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +Sur la [page de Délégué](https://thegraph.com/explorer/delegate?chain=arbitrum-one), vous trouverez des informations sur la délégation, l'acquisition de GRT et le choix d'un Indexeur. -On this page, you can see the following: +Sur cette page, vous pouvez voir les éléments suivants : -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- Indexeurs ayant perçu le plus de frais de requête +- Indexeurs avec l'APR estimé le plus élevé -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +En outre, vous pouvez calculer votre retour sur investissement et rechercher les meilleurs Indexeurs par nom, adresse ou subgraph. 
-### Participants Page +### Page des participants -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +Cette page offre une vue d'ensemble de tous les "participants", c'est-à-dire de toutes les personnes qui participent au réseau, telles que les Indexeurs, les Délégateurs et les Curateurs. #### 1. Indexeurs -![Explorer Image 4](/img/Indexer-Pane.png) +![Image 4 de l'Explorer](/img/Indexer-Pane.png) -Les Indexeurs sont la colonne vertébrale du protocole. Ils stakent sur les subgraphs, les indexent et servent les requêtes à quiconque consomme les subgraphs. +Les Indexeurs constituent la colonne vertébrale du protocole. Ils stakent sur les subgraphs, les indexent et servent des requêtes à tous ceux qui consomment des subgraphs. -Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation des Indexeurs, leur staking, combien ils ont staké sur chaque subgraph et combien de revenus ils ont généré à partir des frais de requête et des récompenses d'indexation. +Dans le tableau des Indexeurs, vous pouvez voir les paramètres de délégation d'un Indexeur, son staking, le montant qu'il a staké sur chaque subgraph et le revenu qu'il a tiré des frais de requête et des récompenses d'indexation. Spécificités @@ -74,7 +74,7 @@ Spécificités - Cooldown Remaining - le temps restant avant que l'Indexeur puisse modifier les paramètres de délégation ci-dessus. Les périodes de cooldown sont définies par les Indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation. - Owned - Il s'agit du staking de l'Indexeur, qui peut être partiellement confisquée en cas de comportement malveillant ou incorrect. - Delegated - Le staking des Délégateurs qui peut être allouée par l'Indexeur, mais ne peut pas être confisquée. -- Allocated - Le staking les Indexeurs allouent activement aux subgraphs qu'ils indexent.
+- Alloué - Le staking que les Indexeurs allouent activement aux subgraphs qu'ils indexent. - Available Delegation Capacity - le staking délégué que les Indexeurs peuvent encore recevoir avant d'être sur-délégués. - Capacité de délégation maximale : montant maximum de participation déléguée que l'indexeur peut accepter de manière productive. Une mise déléguée excédentaire ne peut pas être utilisée pour le calcul des allocations ou des récompenses. - Query Fees - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un Indexeur au fil du temps. @@ -84,16 +84,16 @@ Les Indexeurs peuvent gagner à la fois des frais de requête et des récompense - Les paramètres d'indexation peuvent être définis en cliquant sur le côté droit du tableau ou en accédant au profil d'un Indexeur et en cliquant sur le bouton "Delegate ". -To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Pour en savoir plus sur la façon de devenir Indexeur, vous pouvez consulter la [documentation officielle](/indexing/overview/) ou les [guides de l'Indexeur de The Graph Academy](https://thegraph.academy/delegators/choosing-indexers/) -![Indexing details pane](/img/Indexing-Details-Pane.png) +![Volet Détails de l'indexation](/img/Indexing-Details-Pane.png) #### 2. Curateurs -Les Curateurs analysent les subgraphs pour identifier ceux de la plus haute qualité. Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en le signalant sur sa courbe de liaison. Ce faisant, les Curateurs informent les Indexeurs des subgraphs de haute qualité qui doivent être indexés. +Les Curateurs analysent les subgraphs afin d'identifier ceux qui sont de la plus haute qualité.
Une fois qu'un Curateur a trouvé un subgraph potentiellement de haute qualité, il peut le curer en le signalant sur sa courbe de liaison. Ce faisant, les Curateurs indiquent aux Indexeurs quels subgraphs sont de haute qualité et devraient être indexés. - Les Curateurs peuvent être des membres de la communauté, des consommateurs de données ou même des développeurs de subgraphs qui signalent leurs propres subgraphs en déposant des jetons GRT dans une courbe de liaison. - - En déposant des GRT, les Curateurs mintent des actions de curation d'un subgraph. En conséquence, ils peuvent gagner une partie des frais de requête générés par le subgraph sur lequel ils ont signalé. + - En déposant des GRT, les Curateurs acquièrent des parts de curation d'un subgraph. Ils peuvent ainsi gagner une partie des frais de requête générés par le subgraph qu'ils ont signalé. - La courbe de liaison incite les Curateurs à curer les sources de données de la plus haute qualité. Dans le tableau des Curateurs ci-dessous, vous pouvez voir : @@ -102,9 +102,9 @@ Dans le tableau des Curateurs ci-dessous, vous pouvez voir : - Le nombre de GRT déposés - Nombre d'actions détenues par un curateur -![Explorer Image 6](/img/Curation-Overview.png) +![Image 6 de l'Explorer](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/). +Si vous souhaitez en savoir plus sur le rôle de Curateur, vous pouvez consulter la [documentation officielle](/resources/roles/curating/) ou [The Graph Academy](https://thegraph.academy/curators/). #### 3. Délégués @@ -112,24 +112,24 @@ Les Délégateurs jouent un rôle clé dans le maintien de la sécurité et de l - Sans Délégateurs, les Indexeurs sont moins susceptibles de gagner des récompenses et des frais importants.
Par conséquent, les Indexeurs attirent les Délégateurs en leur offrant une partie de leurs récompenses d'indexation et de leurs frais de requête. - Les Délégateurs sélectionnent leurs Indexeurs selon divers critères, telles que les performances passées, les taux de récompense d'indexation et le partage des frais. -- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- La réputation au sein de la communauté peut également jouer un rôle dans le processus de sélection. Il est recommandé d'entrer en contact avec les Indexeurs sélectionnés via le [Discord de The Graph](https://discord.gg/graphprotocol) ou le [Forum de The Graph](https://forum.thegraph.com/). -![Explorer Image 7](/img/Delegation-Overview.png) +![Image 7 de l'Explorer](/img/Delegation-Overview.png) Dans le tableau des Délégateurs, vous pouvez voir les Délégateurs actifs dans la communauté et les métriques importantes : - Le nombre d’indexeurs auxquels un délégant délègue -- A Delegator's original delegation +- La délégation initiale d'un Délégateur - Les récompenses qu'ils ont accumulées mais qu'ils n'ont pas retirées du protocole - Les récompenses obtenues qu'ils ont retirées du protocole - Quantité totale de GRT qu'ils ont actuellement dans le protocole - La date de leur dernière délégation -If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +Si vous souhaitez en savoir plus sur la façon de devenir Délégateur, consultez la [documentation officielle](/resources/roles/delegating/delegating/) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
-### Network Page +### Page de réseau -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +Sur cette page, vous pouvez voir les KPIs globaux et avoir la possibilité de passer à une base par époque et d'analyser les métriques du réseau plus en détail. Ces détails vous donneront une idée des performances du réseau au fil du temps. #### Aperçu @@ -144,10 +144,10 @@ La section d'aperçu présente à la fois toutes les métriques actuelles du ré Quelques détails clés à noter : -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Les frais de requête représentent les frais générés par les consommateurs**. Ils peuvent être réclamés (ou non) par les Indexeurs après une période d'au moins 7 époques (voir ci-dessous) après que leurs allocations vers les subgraphs ont été clôturées et que les données qu'ils ont servies ont été validées par les consommateurs. 
+- **Les récompenses d'indexation représentent le montant des récompenses que les Indexeurs ont réclamé de l'émission du réseau au cours de l'époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont mintées qu'une fois que les Indexeurs ont fermé leurs allocations vers les subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (par exemple, au cours de certaines époques, les Indexeurs peuvent avoir fermé collectivement des allocations qui étaient ouvertes depuis plusieurs jours). -![Explorer Image 8](/img/Network-Stats.png) +![Image 8 de l'Explorer](/img/Network-Stats.png) #### Époques @@ -161,7 +161,7 @@ Dans la section Époques, vous pouvez analyser, époque par époque, des métriq - Les époques de distribution sont les époques au cours desquelles les canaux d'État pour les époques sont réglés et les indexeurs peuvent réclamer leurs remises sur les frais de requête. - Les époques finalisées sont les époques qui n'ont plus de remboursements de frais de requête à réclamer par les Indexeurs. -![Explorer Image 9](/img/Epoch-Stats.png) +![Image 9 de l'Explorer](/img/Epoch-Stats.png) ## Votre profil d'utilisateur @@ -174,19 +174,19 @@ Dans cette section, vous pouvez voir ce qui suit : - Toutes les actions en cours que vous avez effectuées. - Les informations de votre profil, description et site web (si vous en avez ajouté un). -![Explorer Image 10](/img/Profile-Overview.png) +![Image 10 de l'Explorer](/img/Profile-Overview.png) ### Onglet Subgraphs -Dans l'onglet Subgraphs, vous verrez vos subgraphs publiés. +Dans l'onglet Subgraphs, vous verrez vos subgraphs publiés. -> Ceci n'inclura pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils sont publiés sur le réseau décentralisé. +> Cela n'inclut pas les subgraphs déployés avec la CLI à des fins de test. Les subgraphs n'apparaîtront que lorsqu'ils seront publiés sur le réseau décentralisé.
-![Explorer Image 11](/img/Subgraphs-Overview.png) +![Image 11 de l'Explorer](/img/Subgraphs-Overview.png) ### Onglet Indexation -Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques via-à-vis des subgraphs. Vous trouverez également des graphiques où vous pourrez voir et analyser vos performances passées en tant qu'Indexeur. +Dans l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vers les subgraphs. Vous trouverez également des graphiques qui vous permettront de voir et d'analyser vos performances passées en tant qu'Indexeur. Cette section comprendra également des détails sur vos récompenses nettes d'indexeur et vos frais de requête nets. Vous verrez les métriques suivantes : @@ -197,7 +197,7 @@ Cette section comprendra également des détails sur vos récompenses nettes d'i - Récompenses de l'indexeur - le montant total des récompenses de l'indexeur que vous avez reçues, en GRT - Possédé : votre mise déposée, qui pourrait être réduite en cas de comportement malveillant ou incorrect -![Explorer Image 12](/img/Indexer-Stats.png) +![Image 12 de l'Explorer](/img/Indexer-Stats.png) ### Onglet Délégation @@ -219,20 +219,20 @@ Les boutons situés à droite du tableau vous permettent de gérer votre délég Gardez à l'esprit que ce graph peut être parcouru horizontalement, donc si vous le faites défiler jusqu'à la droite, vous pouvez également voir le statut de votre délégation (en cours de délégation, non-déléguée, en cours de retrait). -![Explorer Image 13](/img/Delegation-Stats.png) +![Image 13 de l'Explorer](/img/Delegation-Stats.png) ### Onglet Conservation -Dans l'onglet Curation, vous trouverez tous les subgraphs vous signalez (vous permettant ainsi de recevoir des frais de requête). La signalisation permet aux conservateurs de mettre en évidence aux indexeurs quels subgraphs sont précieux et dignes de confiance, signalant ainsi qu'ils doivent être indexés. 
+Dans l'onglet Curation, vous trouverez tous les subgraphs que vous signalez (ce qui vous permet de recevoir des frais de requête). La signalisation permet aux Curateurs d'indiquer aux Indexeurs les subgraphs qui ont de la valeur et qui sont dignes de confiance, signalant ainsi qu'ils doivent être indexés. Dans cet onglet, vous trouverez un aperçu de : -- Tous les subgraphs sur lesquels vous êtes en train de curer avec les détails du signal -- Partager les totaux par subgraph -- Récompenses de requête par subraph +- Tous les subgraphs sur lesquels vous êtes Curateur avec les détails des signaux +- Total des parts par Subgraph +- Récompenses pour les requêtes par subgraph - Détails mis à jour -![Explorer Image 14](/img/Curation-Stats.png) +![Image 14 de l'Explorer](/img/Curation-Stats.png) ### Paramètres de votre profil @@ -241,11 +241,11 @@ Dans votre profil utilisateur, vous pourrez gérer les détails de votre profil - Les opérateurs effectuent des actions limitées dans le protocole au nom de l'indexeur, telles que l'ouverture et la clôture des allocations. Les opérateurs sont généralement d'autres adresses Ethereum, distinctes de leur portefeuille de jalonnement, avec un accès sécurisé au réseau que les indexeurs peuvent définir personnellement - Les paramètres de délégation vous permettent de contrôler la répartition des GRT entre vous et vos délégués. -![Explorer Image 15](/img/Profile-Settings.png) +![Image 15 de l'Explorer](/img/Profile-Settings.png) En tant que portail officiel dans le monde des données décentralisées, Graph Explorer vous permet de prendre diverses actions, quel que soit votre rôle dans le réseau. Vous pouvez accéder aux paramètres de votre profil en ouvrant le menu déroulant à côté de votre adresse, puis en cliquant sur le bouton Paramètres. 
-![Wallet details](/img/Wallet-Details.png) +![détails du portefeuille](/img/Wallet-Details.png) ## Ressources supplémentaires diff --git a/website/src/pages/fr/subgraphs/guides/arweave.mdx b/website/src/pages/fr/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..f888e87bd16e --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: Construction de subgraphs pour Arweave +--- + +> La prise en charge d'Arweave dans Graph Node et dans Subgraph Studio est en beta : n'hésitez pas à nous contacter sur [Discord](https://discord.gg/graphprotocol) pour toute question concernant la construction de subgraphs Arweave ! + +Dans ce guide, vous apprendrez comment créer et déployer des subgraphs pour indexer la blockchain Arweave. + +## Qu’est-ce qu’Arweave ? + +Arweave est un protocole qui permet aux développeurs de stocker des données de façon permanente. C'est cette caractéristique qui constitue la principale différence entre Arweave et IPFS. En effet, IPFS n'a pas la caractéristique de permanence, et les fichiers stockés sur Arweave ne peuvent pas être modifiés ou supprimés. + +Arweave a déjà construit de nombreuses bibliothèques pour intégrer le protocole dans plusieurs langages de programmation différents. Pour plus d'informations, vous pouvez consulter : + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Ressources Arweave](https://www.arweave.org/build) + +## À quoi servent les subgraphs d'Arweave ? + +The Graph vous permet de créer des API ouvertes personnalisées appelées "Subgraphs". Les subgraphs sont utilisés pour indiquer aux Indexeurs (opérateurs de serveur) quelles données indexer sur une blockchain et enregistrer sur leurs serveurs afin que vous puissiez les interroger à tout moment à l'aide de [GraphQL](https://graphql.org/). + +[Graph Node](https://github.com/graphprotocol/graph-node) est désormais capable d'indexer les données sur le protocole Arweave. 
L'intégration actuelle indexe uniquement Arweave en tant que blockchain (blocs et transactions), elle n'indexe pas encore les fichiers stockés.
+
+## Construire un subgraph Arweave
+
+Pour pouvoir créer et déployer des subgraphs Arweave, vous avez besoin de deux packages :
+
+1. `@graphprotocol/graph-cli` à partir de la version 0.30.2 - C'est un outil en ligne de commande pour construire et déployer des subgraphs. [Cliquez ici](https://www.npmjs.com/package/@graphprotocol/graph-cli) pour le télécharger en utilisant `npm`.
+2. `@graphprotocol/graph-ts` à partir de la version 0.27.0 - Il s'agit d'une bibliothèque de types spécifiques aux subgraphs. [Cliquez ici](https://www.npmjs.com/package/@graphprotocol/graph-ts) pour le télécharger en utilisant `npm`.
+
+## Caractéristiques des subgraphs
+
+Un subgraph se compose de trois éléments :
+
+### 1. Le Manifest - `subgraph.yaml`
+
+Définit les sources de données intéressantes et la manière dont elles doivent être traitées. Arweave est un nouveau type de source de données.
+
+### 2. Schéma - `schema.graphql`
+
+Vous définissez ici les données que vous souhaitez pouvoir interroger après avoir indexé votre subgraph à l'aide de GraphQL. Ceci est en fait similaire à un modèle pour une API, où le modèle définit la structure d'un corps de requête.
+
+Les exigences relatives aux subgraphs Arweave sont couvertes par la [documentation existante](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. Mappages en AssemblyScript - `mapping.ts`
+
+Il s'agit de la logique qui détermine comment les données doivent être récupérées et stockées lorsqu'une personne interagit avec les sources de données que vous interrogez. Les données sont traduites et stockées sur la base du schéma que vous avez défini.
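À titre d'esquisse (noms de champs hypothétiques), un schéma minimal pour les entités `Block` et `Transaction` listées dans le manifeste pourrait ressembler à ceci :

```graphql
# Esquisse : les champs exacts dépendent de votre cas d'usage
type Block @entity(immutable: true) {
  id: ID! # par exemple l'indepHash du bloc
  height: BigInt!
  timestamp: BigInt!
}

type Transaction @entity(immutable: true) {
  id: ID! # l'identifiant de la transaction Arweave
  block: Block!
  owner: Bytes!
}
```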
+
+Durant le développement d'un subgraph, deux commandes sont essentielles :
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Définition du manifeste du subgraph
+
+Le manifeste du subgraph `subgraph.yaml` identifie les sources de données pour le subgraph, les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Voir ci-dessous un exemple de manifeste de subgraph pour un subgraph Arweave :
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Les subgraphs Arweave introduisent un nouveau type de source de données (`arweave`)
+- Le réseau doit correspondre à un réseau sur le Graph Node hôte. Dans Subgraph Studio, le réseau principal d'Arweave est `arweave-mainnet`
+- Les sources de données Arweave introduisent un champ source.owner facultatif, qui est la clé publique d'un portefeuille Arweave
+
+Les sources de données Arweave prennent en charge deux types de gestionnaires :
+
+- `blockHandlers` - Exécuté sur chaque nouveau bloc Arweave. Aucun source.owner n'est requis.
+- `transactionHandlers` - Exécuté sur chaque transaction dont le propriétaire est le `source.owner` de la source de données. Actuellement, un propriétaire est requis pour `transactionHandlers` ; si les utilisateurs veulent traiter toutes les transactions, ils doivent fournir "" comme `source.owner`
+
+> Source.owner peut être l'adresse du propriétaire ou sa clé publique.
+>
+> Les transactions sont les éléments constitutifs du permaweb Arweave et ce sont des objets créés par les utilisateurs finaux.
+>
+> Note : Les transactions [Irys (anciennement Bundlr)](https://irys.xyz/) ne sont pas encore prises en charge.
+
+## Définition de schéma
+
+La définition du schéma décrit la structure de la base de données du subgraph résultant et les relations entre les entités. Elle est indépendante de la source de données d'origine. Vous trouverez plus de détails sur la définition du schéma du subgraph [ici](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## Mappages AssemblyScript
+
+Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/).
+
+L'indexation Arweave introduit des types de données spécifiques à Arweave dans l'[API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Les gestionnaires de blocs reçoivent un `Block`, tandis que les gestionnaires de transactions reçoivent une `Transaction`.
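Pour fixer les idées, voici une esquisse en TypeScript « pur » de la logique d'un gestionnaire de blocs — les types réels viennent de `graph-ts`, et la `Map` ci-dessous tient lieu, de façon hypothétique, du store de Graph Node :

```typescript
// Esquisse illustrative : les vrais types (Block, Bytes, u64…) viennent de
// graph-ts ; ici « store » est une simple Map qui remplace la base de données.
interface ArweaveBlock {
  indepHash: string // Bytes dans le mapping réel
  height: number // u64 dans le mapping réel
  timestamp: number // u64 dans le mapping réel
}

interface BlockEntity {
  id: string
  height: number
  timestamp: number
}

const store = new Map<string, BlockEntity>()

// Équivalent simplifié d'un handleBlock : une entité par bloc,
// identifiée par le hash indépendant du bloc.
function handleBlock(block: ArweaveBlock): void {
  store.set(block.indepHash, {
    id: block.indepHash,
    height: block.height,
    timestamp: block.timestamp,
  })
}

handleBlock({ indepHash: 'abc123', height: 1000000, timestamp: 1716394824 })
console.log(store.get('abc123')!.height) // affiche 1000000
```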
+
+L'écriture des mappages d'un subgraph Arweave est très similaire à l'écriture des mappages d'un subgraph Ethereum. Pour plus d'informations, cliquez [ici](/developing/creating-a-subgraph/#writing-mappings).
+
+## Déploiement d'un subgraph Arweave dans Subgraph Studio
+
+Une fois que votre subgraph a été créé sur le tableau de bord de Subgraph Studio, vous pouvez le déployer en utilisant la commande CLI `graph deploy`.
+
+```bash
+graph deploy --access-token
+```
+
+## Interroger un subgraph d'Arweave
+
+L'Endpoint GraphQL pour les subgraphs Arweave est déterminé par la définition du schéma, avec l'interface API existante. Veuillez consulter la [documentation API GraphQL](/subgraphs/querying/graphql-api/) pour plus d'informations.
+
+## Exemples de subgraphs
+
+Voici un exemple de subgraph à titre de référence :
+
+- [Exemple de subgraph pour Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Un subgraph peut-il indexer Arweave et d'autres blockchains ?
+
+Non, un subgraph ne peut prendre en charge que les sources de données d'une seule chaîne/d'un seul réseau.
+
+### Puis-je indexer les fichiers enregistrés sur Arweave ?
+
+Actuellement, The Graph n'indexe Arweave qu'en tant que blockchain (ses blocs et ses transactions).
+
+### Puis-je identifier les packages de Bundlr dans mon subgraph ?
+
+Cette fonction n'est pas prise en charge actuellement.
+
+### Comment puis-je filtrer les transactions sur un compte spécifique ?
+
+La source.owner peut être la clé publique de l'utilisateur ou l'adresse de son compte.
+
+### Quel est le format d'encodage actuel ?
+
+Les données sont généralement passées dans les mappages sous forme de Bytes, qui, s'ils sont stockés directement, sont renvoyés dans le subgraph dans un format `hex` (par exemple, les hash de blocs et de transactions).
Vous pouvez souhaiter les convertir en un format `base64` ou `base64 URL` dans vos mappages, afin de correspondre à ce qui est affiché dans les explorateurs de blocs comme [Arweave Explorer](https://viewblock.io/arweave/).
+
+La fonction utilitaire `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` suivante peut être utilisée, et sera ajoutée à `graph-ts` :
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ?
base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet à écrire
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets à écrire (y compris en mode urlSafe, sans remplissage)
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..f6ba3015de68
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Aperçu
+
+**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prérequis
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1.
Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of chains added run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +ou bien + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? 
Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/fr/subgraphs/guides/enums.mdx b/website/src/pages/fr/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..53daa9ce4993 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Catégoriser les marketplaces NFT à l’aide d’Enums +--- + +Utilisez des Enums pour rendre votre code plus propre et moins sujet aux erreurs. Voici un exemple complet d'utilisation des Enums sur les marketplaces NFT. + +## Que sont les Enums ? + +Les Enums, ou types d'énumération, sont un type de données spécifique qui vous permet de définir un ensemble de valeurs spécifiques et autorisées. + +### Exemple d'Enums dans Votre Schéma + +Si vous construisez un subgraph pour suivre l'historique de la propriété des jetons sur une marketplace, chaque jeton peut passer par différents propriétaires, tels que `OriginalOwner`, `SecondOwner`, et `ThirdOwner`. En utilisant des enums, vous pouvez définir ces propriétaires spécifiques, en vous assurant que seules des valeurs prédéfinies sont assignées. + +Vous pouvez définir des enums dans votre schéma et, une fois définis, vous pouvez utiliser la représentation en chaîne de caractères des valeurs enum pour définir un champ enum sur une entité. + +Voici à quoi pourrait ressembler une définition d'enum dans votre schéma, basée sur l'exemple ci-dessus : + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +Ceci signifie que lorsque vous utilisez le type `TokenStatus` dans votre schéma, vous attendez qu'il soit exactement l'une des valeurs prédéfinies : `OriginalOwner`, `SecondOwner`, ou `ThirdOwner`, garantissant la cohérence et la validité des données. 
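À titre d'illustration (entité hypothétique), un champ d'entité peut alors utiliser cet enum directement, par contraste avec la version en chaîne de caractères présentée plus bas :

```graphql
type Token @entity {
  id: ID!
  tokenId: BigInt!
  owner: Bytes! # Propriétaire du jeton
  tokenStatus: TokenStatus! # Seules les valeurs de l'enum sont acceptées
  timestamp: BigInt!
}
```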
+
+Pour en savoir plus sur les enums, consultez [Création d'un Subgraph](/developing/creating-a-subgraph/#enums) et la [documentation GraphQL](https://graphql.org/learn/schema/#enumeration-types).
+
+## Avantages de l'Utilisation des Enums
+
+- **Clarté** : Les enums fournissent des noms significatifs pour les valeurs, rendant les données plus faciles à comprendre.
+- **Validation** : Les enums imposent des définitions de valeurs strictes, empêchant les entrées de données invalides.
+- **Maintenabilité** : Lorsque vous avez besoin de changer ou d'ajouter de nouvelles catégories, les enums vous permettent de le faire de manière ciblée.
+
+### Sans Enums
+
+Si vous choisissez de définir le type comme une chaîne de caractères au lieu d'utiliser un Enum, votre code pourrait ressembler à ceci :
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+  owner: Bytes! # Propriétaire du jeton
+  tokenStatus: String! # Champ de type chaîne pour suivre l'état du jeton
+  timestamp: BigInt!
+}
+```
+
+Dans ce schéma, `TokenStatus` est une simple chaîne de caractères sans valeurs spécifiques autorisées.
+
+#### Pourquoi est-ce un problème ?
+
+- Il n'y a aucune restriction sur les valeurs de `TokenStatus` : n'importe quelle chaîne de caractères peut être affectée par inadvertance. Difficile donc de s'assurer que seules des valeurs valides comme `OriginalOwner`, `SecondOwner`, ou `ThirdOwner` soient utilisées.
+- Il est facile de faire des fautes de frappe comme `Orgnalowner` au lieu de `OriginalOwner`, rendant les données et les requêtes potentiellement peu fiables.
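Le problème de la faute de frappe peut s'illustrer avec une petite esquisse en TypeScript (indépendante de GraphQL, noms hypothétiques) : une liste de valeurs autorisées, déclarée une seule fois, rejette les chaînes invalides.

```typescript
// Les valeurs autorisées, déclarées une seule fois
const ALLOWED_STATUSES = ['OriginalOwner', 'SecondOwner', 'ThirdOwner'] as const
type TokenStatus = (typeof ALLOWED_STATUSES)[number]

// Garde de type : n'accepte que les valeurs prédéfinies
function isTokenStatus(value: string): value is TokenStatus {
  return (ALLOWED_STATUSES as readonly string[]).includes(value)
}

console.log(isTokenStatus('OriginalOwner')) // true
console.log(isTokenStatus('Orgnalowner')) // false : la faute de frappe est rejetée
```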
+
+Les Enums assurent la sécurité des types, minimisent les risques de fautes de frappe et garantissent des résultats cohérents et fiables.
+
+## Définition des Enums pour les Marketplaces NFT
+
+> Note : Le guide suivant utilise le smart contract CryptoCoven NFT.
+
+Pour définir des énumérations pour les différentes marketplaces où les NFT sont échangés, utilisez ce qui suit dans votre schéma de subgraph :
+
+```gql
+# Enum pour les Marketplaces avec lesquelles le contrat CryptoCoven a interagi (probablement une vente ou un mint)
+enum Marketplace {
+  OpenSeaV1 # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace OpenSeaV1
+  OpenSeaV2 # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace OpenSeaV2
+  SeaPort # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace SeaPort
+  LooksRare # Représente lorsqu'un NFT CryptoCoven est échangé sur la marketplace LooksRare
+  # ...et d'autres marketplaces
+}
+```
+
+## Utilisation des Enums pour les Marketplaces NFT
+
+Une fois définis, les enums peuvent être utilisés dans l'ensemble du subgraph pour classer les transactions ou les événements.
+
+Par exemple, lors de la journalisation des ventes de NFT, vous pouvez spécifier la marketplace impliquée dans la transaction en utilisant l'enum.
+ +### Implémenter une Fonction pour les Marketplaces NFT + +Voici comment vous pouvez implémenter une fonction pour récupérer le nom de la marketplace à partir de l'enum sous forme de chaîne de caractères : + +```ts +export function getMarketplaceName(marketplace: Marketplace): string { + // Utilisation des instructions if-else pour mapper la valeur de l'enum à une chaîne de caractères + if (marketplace === Marketplace.OpenSeaV1) { + return 'OpenSeaV1' // Si le marketplace est OpenSea, renvoie sa représentation en chaîne de caractères + } else if (marketplace === Marketplace.OpenSeaV2) { + return 'OpenSeaV2' + } else if (marketplace === Marketplace.SeaPort) { + return 'SeaPort' // Si le marketplace est SeaPort, renvoie sa représentation en chaîne de caractères + } else if (marketplace === Marketplace.LooksRare) { + return 'LooksRare' // Si le marketplace est LooksRare, renvoie sa représentation en chaîne de caractères + // ... et d'autres marketplaces + } +} +``` + +## Bonnes Pratiques pour l'Utilisation des Enums + +- **Nommer avec cohérence** : Utilisez des noms clairs et descriptifs pour les valeurs d'enum pour améliorer la lisibilité. +- **Gestion Centralisée** : Gardez les enums dans un fichier unique pour plus de cohérence. Ainsi, il est plus simple de les mettre à jour et de garantir qu’ils sont votre unique source de vérité. +- **Documentation** : Ajoutez des commentaires aux enums pour clarifier leur objectif et leur utilisation. + +## Utilisation des Enums dans les Requêtes + +Les enums dans les requêtes aident à améliorer la qualité des données et à rendre les résultats plus faciles à interpréter. Ils fonctionnent comme des filtres et des éléments de réponse, assurant la cohérence et réduisant les erreurs dans les valeurs des marketplaces. + +Spécificités + +- **Filtrer avec des Enums**: Les Enums offrent des filtres clairs, vous permettant d’inclure ou d’exclure facilement des marketplaces spécifiques. 
+- **Enums dans les Réponses**: Les Enums garantissent que seules des valeurs de marketplace reconnues sont renvoyées, ce qui rend les résultats standardisés et précis.
+
+### Exemples de requêtes
+
+#### Requête 1 : Compte avec le Plus d'Interactions sur les Marketplaces NFT
+
+Cette requête fait ce qui suit :
+
+- Elle trouve le compte avec le plus grand nombre unique d'interactions sur les marketplaces NFT, ce qui est excellent pour analyser l'activité inter-marketplaces.
+- Le champ marketplaces utilise l'enum marketplace, garantissant des valeurs de marketplace cohérentes et validées dans la réponse.
+
+```gql
+{
+  accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) {
+    id
+    sendCount
+    receiveCount
+    totalSpent
+    uniqueMarketplacesCount
+    marketplaces {
+      marketplace # Ce champ retourne la valeur enum représentant la marketplace
+    }
+  }
+}
+```
+
+#### Résultats
+
+Cette réponse fournit les détails du compte et une liste des interactions uniques sur les marketplaces avec des valeurs enum pour une clarté standardisée :
+
+```gql
+{
+  "data": {
+    "accounts": [
+      {
+        "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0",
+        "sendCount": "44",
+        "receiveCount": "44",
+        "totalSpent": "1197500000000000000",
+        "uniqueMarketplacesCount": "7",
+        "marketplaces": [
+          {
+            "marketplace": "OpenSeaV1"
+          },
+          {
+            "marketplace": "OpenSeaV2"
+          },
+          {
+            "marketplace": "GenieSwap"
+          },
+          {
+            "marketplace": "CryptoCoven"
+          },
+          {
+            "marketplace": "Unknown"
+          },
+          {
+            "marketplace": "LooksRare"
+          },
+          {
+            "marketplace": "NFTX"
+          }
+        ]
+      }
+    ]
+  }
+}
+```
+
+#### Requête 2 : Marketplace la Plus Active pour les Transactions CryptoCoven
+
+Cette requête fait ce qui suit :
+
+- Elle identifie la marketplace avec le plus grand volume de transactions CryptoCoven.
+- Elle utilise l'enum marketplace pour s'assurer que seuls les types de marketplace valides apparaissent dans la réponse, ajoutant fiabilité et cohérence à vos données.
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Résultat 2 + +La réponse attendue inclut la marketplace et le nombre de transactions correspondant, en utilisant l'enum pour indiquer le type de marketplace : + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Requête 3: Interactions sur les marketplaces avec un haut volume de transactions + +Cette requête fait ce qui suit : + +- Elle récupère les quatre principales marketplaces avec plus de 100 transactions, en excluant les marketplaces "Unknown". +- Elle utilise des enums comme filtres pour s'assurer que seuls les types de marketplace valides sont inclus, augmentant ainsi la précision. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Résultat 3 + +La sortie attendue inclut les marketplaces qui répondent aux critères, chacune représentée par une valeur enum : + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Ressources supplémentaires + +Pour des informations supplémentaires, consultez le [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) de ce guide. 
diff --git a/website/src/pages/fr/subgraphs/guides/grafting.mdx b/website/src/pages/fr/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..9a0dd2d5ca80
--- /dev/null
+++ b/website/src/pages/fr/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Remplacer un contrat et conserver son historique grâce au « greffage »
+---
+
+Dans ce guide, vous apprendrez à construire et à déployer de nouveaux subgraphs en greffant des subgraphs existants.
+
+## Qu'est-ce qu'une greffe ?
+
+Le greffage permet de réutiliser les données d'un subgraph existant et de commencer à les indexer dans un bloc ultérieur. Cette méthode est utile au cours du développement pour surmonter rapidement de simples erreurs dans les mappages ou pour rétablir temporairement le fonctionnement d'un subgraph existant après une défaillance. Elle peut également être utilisée lors de l'ajout d'une fonctionnalité à un subgraph dont l'indexation à partir de zéro prend beaucoup de temps.
+
+Le subgraph greffé peut utiliser un schéma GraphQL qui n'est pas identique à celui du subgraph de base, mais simplement compatible avec lui. Il doit s'agir d'un schéma de subgraph valide en soi, mais il peut s'écarter du schéma du subgraph de base des manières suivantes :
+
+- Il ajoute ou supprime des types d'entité
+- Il supprime les attributs des types d'entité
+- Il ajoute des attributs nullables aux types d'entités
+- Il transforme les attributs non nullables en attributs nullables
+- Il ajoute des valeurs aux énumérations
+- Il ajoute ou supprime des interfaces
+- Il change les types d'entités pour lesquels une interface est implémentée
+
+Pour plus d'informations, vous pouvez consulter :
+
+- [Greffage](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+Dans ce tutoriel, nous allons couvrir un cas d'utilisation de base. Nous remplacerons un contrat existant par un contrat identique (avec une nouvelle adresse, mais le même code).
Ensuite, nous grefferons le subgraph existant sur le subgraph "de base" qui suit le nouveau contrat.
+
+## Remarque importante sur le greffage lors de la mise à niveau vers le réseau
+
+> **Attention** : Il est recommandé de ne pas utiliser le greffage pour les subgraphs publiés sur The Graph Network
+
+### Pourquoi est-ce important ?
+
+Le greffage est une fonction puissante qui vous permet de "greffer" un subgraph sur un autre, en transférant efficacement les données historiques du subgraph existant vers une nouvelle version. Il n'est pas possible de greffer un subgraph provenant de The Graph Network vers Subgraph Studio.
+
+### Les meilleures pratiques
+
+**Migration initiale** : lorsque vous déployez pour la première fois votre subgraph sur le réseau décentralisé, faites-le sans greffage. Assurez-vous que le subgraph est stable et fonctionne comme prévu.
+
+**Mises à jour ultérieures** : une fois que votre subgraph est en ligne et stable sur le réseau décentralisé, vous pouvez utiliser le greffage pour les versions ultérieures afin de faciliter la transition et de préserver les données historiques.
+
+En respectant ces lignes directrices, vous minimisez les risques et vous vous assurez que le processus de migration se déroule sans heurts.
+
+## Création d'un subgraph existant
+
+La construction de subgraphs est une partie essentielle de The Graph, décrite plus en profondeur [ici](/subgraphs/quick-start/). Pour pouvoir construire et déployer le subgraph existant utilisé dans ce tutoriel, le dépôt suivant est fourni :
+
+- [Dépôt d'exemples de subgraphs](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Remarque : le contrat utilisé dans le subgraph est tiré du [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+
+## Définition du manifeste du subgraph
+
+Le manifeste du subgraph `subgraph.yaml` identifie les sources de données pour le subgraph, les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Vous trouverez ci-dessous un exemple de manifeste de subgraph que vous utiliserez :
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- La source de données `Lock` correspond à l'ABI et à l'adresse du contrat que nous obtiendrons lorsque nous compilerons et déploierons le contrat
+- Le réseau doit correspondre à un réseau indexé qui est interrogé. Comme nous fonctionnons sur le réseau de test Sepolia, le réseau est `sepolia`
+- La section `mapping` définit les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Dans ce cas, nous écoutons l'événement `Withdrawal` et appelons la fonction `handleWithdrawal` lorsqu'il est émis.
+
+## Définition de manifeste de greffage
+
+Le greffage consiste à ajouter deux nouveaux éléments au manifeste original du subgraph :
+
+```yaml
+---
+features:
+  - grafting # nom de la caractéristique
+graft:
+  base: Qm... # Subgraph ID of base Subgraph
+  block: 5956000 # block number
+```
+
+- `features:` est une liste de tous les [noms de fonctionnalités](/developing/creating-a-subgraph/#experimental-features) utilisées.
+- `graft:` est une carte du subgraph `base` et du bloc sur lequel se greffer. Le `block` est le numéro du bloc à partir duquel l'indexation doit commencer.
The Graph copiera les données du subgraph de base jusqu'au bloc donné inclus, puis continuera à indexer le nouveau subgraph à partir de ce bloc.
+
+Les valeurs `base` et `block` peuvent être trouvées en déployant deux subgraphs : l'un pour l'indexation de base et l'autre avec le greffage.
+
+## Déploiement du subgraph de base
+
+1. Allez sur [Subgraph Studio](https://thegraph.com/studio/) et créez un subgraph sur le réseau de test Sepolia appelé `graft-example`
+2. Suivez les instructions dans la section `AUTH & DEPLOY` sur votre page Subgraph dans le dossier `graft-example` du dépôt
+3. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Cela renvoie quelque chose comme ceci :
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Une fois que vous avez vérifié que le subgraph est correctement indexé, vous pouvez rapidement le mettre à jour par greffage.
+
+## Déploiement du subgraph greffé
+
+Le `subgraph.yaml` de remplacement du greffon aura une nouvelle adresse de contrat. Cela peut arriver lorsque vous mettez à jour votre dapp, redéployez un contrat, etc.
+
+1. Allez sur [Subgraph Studio](https://thegraph.com/studio/) et créez un subgraph sur le réseau test de Sepolia appelé `graft-replacement`
+2. Créer un nouveau manifeste. Le `subgraph.yaml` de `graft-replacement` contient une adresse de contrat différente et de nouvelles informations sur la façon dont il devrait se greffer.
Il s'agit du `block` du [dernier événement émis](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) qui vous intéresse dans l'ancien contrat et de la `base` de l'ancien subgraph. L'ID du subgraph `base` est l'ID de déploiement de votre subgraph original `graft-example`. Vous pouvez le trouver dans Subgraph Studio.
+3. Suivez les instructions de la section `AUTH & DEPLOY` sur votre page Subgraph dans le dossier `graft-replacement` du dépôt
+4. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Le résultat devrait être le suivant :
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+Vous pouvez voir que le subgraph `graft-replacement` indexe les anciennes données du `graft-example` et les nouvelles données de la nouvelle adresse du contrat. Le contrat original a émis deux événements `Withdrawal`, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) et [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Le nouveau contrat a émis un seul événement de retrait, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Les deux transactions précédemment indexées (événements 1 et 2) et la nouvelle transaction (événement 3) ont été combinées dans le subgraph de remplacement de greffe.
+
+Félicitations !
Vous avez réussi à greffer un subgraph sur un autre subgraph. + +## Ressources supplémentaires + +Si vous souhaitez acquérir plus d'expérience avec le greffage, voici quelques exemples pour des contrats populaires : + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), + +Pour devenir encore plus expert sur The Graph, vous pouvez vous familiariser avec d'autres méthodes de gestion des modifications apportées aux sources de données sous-jacentes. Des alternatives comme des [Modèles de sources de données](/developing/creating-a-subgraph/#data-source-templates) permettent d'obtenir des résultats similaires + +> Note : De nombreux éléments de cet article ont été repris de l'article [Arweave](/subgraphs/cookbook/arweave/) publié précédemment diff --git a/website/src/pages/fr/subgraphs/guides/near.mdx b/website/src/pages/fr/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..46465dd3f16c --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Construction de subgraphs sur NEAR +--- + +Ce guide est une introduction à la construction de subgraphs indexant des contrats intelligents sur la [blockchain NEAR](https://docs.near.org/). + +## Que signifie NEAR ? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## Que sont les subgraphs NEAR ? 
+
+The Graph fournit aux développeurs des outils pour traiter les événements de la blockchain et rendre les données résultantes facilement accessibles via une API GraphQL, chacune étant appelée subgraph. Le [Graph Node](https://github.com/graphprotocol/graph-node) est désormais capable de traiter les événements NEAR, ce qui signifie que les développeurs NEAR peuvent désormais créer des subgraphs pour indexer leurs contrats intelligents.
+
+Les subgraphs sont basés sur les événements, ce qui signifie qu'ils écoutent et traitent les événements de la blockchain. Il existe actuellement deux types de gestionnaires pour les subgraphs NEAR :
+
+- Gestionnaires de blocs : ceux-ci sont exécutés à chaque nouveau bloc
+- Gestionnaires de reçus : exécutés à chaque fois qu'un message est exécuté sur un compte spécifié
+
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+
+> Un reçu est le seul objet actionnable dans le système. Lorsque nous parlons de "traitement d'une transaction" sur la plateforme NEAR, cela signifie en fin de compte "appliquer des reçus" à un moment ou à un autre.
+
+## Construction d'un subgraph NEAR
+
+`@graphprotocol/graph-cli` est un outil en ligne de commande pour construire et déployer des subgraphs.
+
+`@graphprotocol/graph-ts` est une bibliothèque de types spécifiques aux subgraphs.
+
+Le développement de subgraphs NEAR nécessite `graph-cli` en version supérieure à `0.23.0`, ainsi que `graph-ts` en version supérieure à `0.23.0`.
+
+> La construction d'un subgraph NEAR est très similaire à la construction d'un subgraph qui indexe Ethereum.
+
+La définition d'un subgraph comporte trois aspects :
+
+**subgraph.yaml:** le manifeste du subgraph, définissant les sources de données intéressantes et la manière dont elles doivent être traitées. NEAR est un nouveau type de source de données.
+ +**schema.graphql:** un fichier de schéma qui définit les données stockées dans votre subgraph et la manière de les interroger via GraphQL. Les exigences pour les subgraphs NEAR sont couvertes par [la documentation existante](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. + +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### Définition du manifeste du subgraph + +Le manifeste du subgraph (`subgraph.yaml`) identifie les sources de données pour le subgraph, les déclencheurs intéressants et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Voir ci-dessous un exemple de manifeste de subgraph pour un subgraph NEAR : + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # lien vers le fichier de schéma +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # Cette source de données surveillera ce compte + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings +``` + +- Les subgraphs NEAR introduisent un nouveau `type` de source de données (`near`) +- The `network` should correspond to a network on the hosting Graph Node. 
On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+Les sources de données NEAR prennent en charge deux types de gestionnaires :
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Définition de schéma
+
+La définition du schéma décrit la structure de la base de données Subgraph résultante et les relations entre les entités. Elle est agnostique de la source de données d'origine. Vous trouverez plus de détails sur la définition du schéma du subgraph [ici](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### Mappages AssemblyScript
+
+Les gestionnaires d'événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Sinon, le reste de l'[AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) est à la disposition des développeurs de subgraphs NEAR pendant l'exécution du mappage. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Déploiement d'un subgraph NEAR + +Une fois que vous avez construit un subgraph, il est temps de le déployer sur Graph Node pour l'indexation. Les subgraphs NEAR peuvent être déployés sur n'importe quel Graph Node `>=v0.26.x` (cette version n'a pas encore été étiquetée et publiée). 
+
+Subgraph Studio et l'Indexeur de mise à niveau sur The Graph Network prennent en charge actuellement l'indexation du mainnet et du testnet NEAR en bêta, avec les noms de réseau suivants :
+
+- `near-mainnet`
+- `near-testnet`
+
+De plus amples informations sur la création et le déploiement de subgraphs sur Subgraph Studio sont disponibles [ici](/deploying/deploying-a-subgraph-to-studio/).
+
+Pour commencer, la première étape consiste à "créer" votre subgraph, ce qui ne doit être fait qu'une seule fois. Sur Subgraph Studio, vous pouvez le faire à partir de [votre tableau de bord](https://thegraph.com/studio/) : "Créer un Subgraph".
+
+Une fois votre subgraph créé, vous pouvez le déployer en utilisant la commande CLI `graph deploy` :
+
+```sh
+$ graph create --node # crée un subgraph sur un Graph Node local (sur Subgraph Studio, cela se fait via l'interface utilisateur)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # upload les fichiers de build vers un endpoint IPFS spécifié, puis déploie le subgraph vers un Graph Node spécifié sur la base du hash IPFS du manifeste
+```
+
+La configuration des nœuds dépend de l'endroit où le subgraph est déployé.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy
+```
+
+### Nœud Graph local (en fonction de la configuration par défaut)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
+```
+
+Une fois votre subgraph déployé, il sera indexé par Graph Node. Vous pouvez vérifier sa progression en interrogeant le subgraph lui-même :
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexation de NEAR avec un Graph Node local
+
+L'exécution d'un Graph Node qui indexe NEAR présente les exigences opérationnelles suivantes :
+
+- Cadre d'indexation NEAR avec instrumentation Firehose
+- Composant(s) du NEAR Firehose
+- Graph Node avec endpoint Firehose configuré
+
+Nous fournirons bientôt plus d'informations sur l'utilisation des composants ci-dessus.
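Pour suivre cette progression depuis un script, on peut extraire le numéro de bloc de la réponse `_meta` ci-dessus. L'esquisse suivante est hypothétique (l'URL du Graph Node local et le nom du subgraph sont des exemples, pas des valeurs officielles) :

```typescript
// Esquisse : extraire le numéro de bloc d'une réponse `_meta` pour
// suivre la progression de l'indexation.
type ReponseMeta = { data?: { _meta?: { block?: { number?: number } } } }

function numeroDeBloc(reponse: ReponseMeta): number | null {
  return reponse.data?._meta?.block?.number ?? null
}

// Exemple d'utilisation (non exécuté ici, endpoint hypothétique) :
// const res = await fetch('http://localhost:8000/subgraphs/name/mon-subgraph', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ query: '{ _meta { block { number } } }' }),
// })
// console.log(numeroDeBloc(await res.json()))
```

En interrogeant périodiquement cet endpoint, on peut comparer le numéro retourné à la tête de chaîne pour estimer l'avancement de la synchronisation.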
+ +## Interrogation d'un subgraph NEAR + +L'endpoint GraphQL pour les subgraphs NEAR est déterminé par la définition du schéma, avec l'interface API existante. Veuillez consulter la [documentation API GraphQL](/subgraphs/querying/graphql-api/) pour plus d'informations. + +## Exemples de subgraphs + +Voici quelques exemples de Subgraphs pour référence : + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### Comment fonctionne la bêta ? + +La prise en charge de NEAR est en version bêta, ce qui signifie qu'il peut y avoir des changements dans l'API alors que nous continuons à travailler sur l'amélioration de l'intégration. Veuillez envoyer un email à near@thegraph.com afin que nous puissions vous aider à construire des subgraphs NEAR et vous tenir au courant des derniers développements ! + +### Un subgraph peut-il indexer simultanément les blockchains NEAR et EVM ? + +No, a Subgraph can only support data sources from one chain/network. + +### Les subgraphs peuvent-ils réagir à des déclencheurs plus spécifiques ? + +Actuellement, seuls les déclencheurs de blocage et de réception sont pris en charge. Nous étudions les déclencheurs pour les appels de fonction à un compte spécifique. Nous souhaitons également prendre en charge les déclencheurs d'événements, une fois que NEAR disposera d'un support natif pour les événements. + +### Les gestionnaires de reçus se déclencheront-ils pour les comptes et leurs sous-comptes ? + +If an `account` is specified, that will only match the exact account name. 
It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts. For example, the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Les subgraphs NEAR peuvent-ils faire des appels de vue aux comptes NEAR pendant les mappages ?
+
+Cette fonction n'est pas prise en charge. Nous sommes en train d'évaluer si cette fonctionnalité est nécessaire pour l'indexation.
+
+### Puis-je utiliser des modèles de sources de données dans mon subgraph NEAR ?
+
+Ceci n’est actuellement pas pris en charge. Nous évaluons si cette fonctionnalité est requise pour l'indexation.
+
+### Les subgraphs Ethereum prennent en charge les versions "en attente" (pending) et "actuelles" (current), comment puis-je déployer une version "en attente" d'un subgraph NEAR ?
+
+La fonctionnalité d'attente n'est pas encore prise en charge pour les subgraphs NEAR. Dans l'intervalle, vous pouvez déployer une nouvelle version dans un subgraph "nommé" différemment, puis, lorsque celui-ci est synchronisé avec la tête de chaîne, vous pouvez le redéployer dans votre subgraph principal "nommé", qui utilisera le même ID de déploiement sous-jacent, de sorte que le subgraph principal sera instantanément synchronisé.
+
+### Ma question n'a pas reçu de réponse, où puis-je obtenir plus d'aide pour construire des subgraphs NEAR ?
+
+S'il s'agit d'une question générale sur le développement de Subgraph, il y a beaucoup plus d'informations dans le reste de la [Documentation du développeur](/subgraphs/quick-start/). Sinon, rejoignez [Le Discord de The Graph Protocol](https://discord.gg/graphprotocol) et posez votre question dans le canal #near ou envoyez un email à near@thegraph.com.
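Pour revenir sur la correspondance de comptes par `prefixes`/`suffixes` décrite plus haut dans cette FAQ, voici une esquisse illustrative de cette logique. La sémantique exacte appartient à graph-node ; cette version suppose, d'après l'exemple `[app|good].*[morning.near|morning.testnet]`, que chaque liste fournie impose sa contrainte :

```typescript
// Esquisse hypothétique : un compte correspond s'il commence par l'un des
// préfixes ET se termine par l'un des suffixes, chaque liste étant
// facultative (au moins une des deux doit être fournie dans le manifeste).
function compteCorrespond(compte: string, prefixes?: string[], suffixes?: string[]): boolean {
  const okPrefixe = !prefixes || prefixes.some((p) => compte.startsWith(p))
  const okSuffixe = !suffixes || suffixes.some((s) => compte.endsWith(s))
  return okPrefixe && okSuffixe
}
```

Par exemple, `app.x.morning.near` correspondrait aux listes de l'exemple précédent, tandis que `evening.near` serait rejeté.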
+ +## Les Références + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/fr/subgraphs/guides/polymarket.mdx b/website/src/pages/fr/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..f19f6c7aef53 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Interroger les données de la blockchain à partir de Polymarket avec des subgraphs sur The Graph +sidebarTitle: Interroger les données Polymarket +--- + +Interroger les données onchain de Polymarket en utilisant GraphQL via Subgraphs sur The Graph Network. Les subgraphs sont des API décentralisées alimentées par The Graph, un protocole d'indexation et d'interrogation des données des blockchains. + +## Subgraph Polymarket sur Graph Explorer + +Vous pouvez voir un terrain de jeu (playground) interactif pour les requêtes sur la [page du subgraph Polymarket sur The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), où vous pouvez tester n'importe quelle requête. + +![Terrain de jeux Polymarket](/img/Polymarket-playground.png) + +## Comment utiliser l'éditeur visuel de requêtes + +L'éditeur visuel de requêtes vous aide à tester des exemples de requêtes à partir de votre subgraph. + +Vous pouvez utiliser l'explorateur GraphiQL pour composer vos requêtes GraphQL en cliquant sur les champs souhaités. 
+ +### Exemple de requête : Obtenir les 5 paiements les plus élevés de Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Exemple de sortie + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Schéma GraphQL de Polymarket + +Le schéma de ce subgraph est défini [ici dans le GitHub de Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Endpoint du Subgraph Polymarket + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +Le subgraph Polymarket est disponible sur [Graph Explorer](https://thegraph.com/explorer). + +![Endpoint Polymarket](/img/Polymarket-endpoint.png) + +## Comment obtenir votre propre clé API + +1. Aller à [https://thegraph.com/studio](http://thegraph.com/studio) et connectez votre portefeuille +2. 
Rendez-vous sur https://thegraph.com/studio/apikeys/ pour créer une clé API
+
+Vous pouvez utiliser cette clé API sur n'importe quel subgraph dans [Graph Explorer](https://thegraph.com/explorer), et ce n'est pas limité à Polymarket.
+
+100k requêtes par mois sont gratuites, ce qui est parfait pour votre projet secondaire !
+
+## Subgraphs Additionnels Polymarket
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Activité Polymarket de Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Profit & Pertes Polymarket](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Intérêts Ouverts Polymarket](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## Comment interroger l'API
+
+Vous pouvez passer n'importe quelle requête GraphQL à l'endpoint Polymarket et recevoir des données au format JSON.
+
+L'exemple de code suivant renvoie exactement le même résultat que ci-dessus.
+
+### Exemple de code Node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+    payout
+    redeemer
+    id
+    timestamp
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Envoi de la requête GraphQL
+axios(graphQLRequest)
+  .then((response) => {
+    // Traitez la réponse ici
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Traiter les erreurs éventuelles
+    console.error(error);
+  });
+```
+
+### Ressources complémentaires
+
+Pour plus d'informations sur l'interrogation des données de votre subgraph, lisez [ici](/subgraphs/querying/introduction/).
+ +Pour découvrir toutes les façons d'optimiser et de personnaliser votre subgraph pour obtenir de meilleures performances, lisez davantage sur [la création d'un subgraph ici](/developing/creating-a-subgraph/). diff --git a/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx new file mode 100644 index 000000000000..965146218bef --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -0,0 +1,123 @@ +--- +title: Comment sécuriser les clés d'API en utilisant les composants serveur de Next.js +--- + +## Aperçu + +Nous pouvons utiliser les [Composants du serveur Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components) pour sécuriser correctement notre clé API et éviter qu'elle ne soit exposée dans le frontend de notre dapp. Pour renforcer la sécurité de notre clé API, nous pouvons également [restreindre notre clé API à certains subgraphs ou domaines dans Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). + +Dans ce cookbook, nous allons voir comment créer un composant serveur Next.js qui interroge un subgraph tout en cachant la clé API du frontend. + +### Mise en garde + +- Les composants serveur de Next.js ne protègent pas les clés API contre les attaques de déni de service. +- Les passerelles de The Graph Network disposent de stratégies de détection et d'atténuation des attaques de déni de service, cependant, l'utilisation des composants serveur peut affaiblir ces protections. +- Les composants serveur de Next.js introduisent des risques de centralisation car le serveur peut tomber en panne. + +### Pourquoi est-ce nécessaire + +Dans une application React standard, les clés API incluses dans le code frontend peuvent être exposées du côté client, posant un risque de sécurité. 
Bien que les fichiers `.env` soient couramment utilisés, ils ne protègent pas complètement les clés car le code de React est exécuté côté client, exposant ainsi la clé API dans les headers. Les composants serveur Next.js résolvent ce problème en gérant les opérations sensibles côté serveur.
+
+### Utiliser le rendu côté client pour interroger un subgraph
+
+![rendu côté client](/img/api-key-client-side-rendering.png)
+
+### Prérequis
+
+- Une clé API provenant de [Subgraph Studio](https://thegraph.com/studio)
+- Une connaissance de base de Next.js et React.
+- Un projet Next.js existant qui utilise l'[App Router](https://nextjs.org/docs/app).
+
+## Guide étape par étape
+
+### Étape 1 : Configurer les Variables d'Environnement
+
+1. À la racine de notre projet Next.js, créer un fichier `.env.local`.
+2. Ajouter notre clé API : `API_KEY=`.
+
+### Étape 2 : Créer un Composant Serveur
+
+1. Dans notre répertoire `components`, créer un nouveau fichier, `ServerComponent.js`.
+2. Utiliser le code exemple fourni pour configurer le composant serveur.
+
+### Étape 3 : Implémenter la Requête API Côté Serveur
+
+Dans `ServerComponent.js`, ajouter le code suivant :
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+ ) +} +``` + +### Étape 4 : Utiliser le Composant Serveur + +1. Dans notre fichier de page (par exemple, `pages/index.js`), importer `ServerComponent`. +2. Rendu du composant: + +```javascript +import ServerComponent from './components/ServerComponent' + +export default function Home() { + return ( +
+    <div>
+      <ServerComponent />
+    </div>
+ ) +} +``` + +### Étape 5 : Lancer et tester notre Dapp + +Démarrez notre application Next.js en utilisant `npm run dev`. Vérifiez que le composant serveur récupère les données sans exposer la clé API. + +![Rendu côté serveur](/img/api-key-server-side-rendering.png) + +### Conclusion + +En utilisant les composants serveur de Next.js, nous avons effectivement caché la clé API du côté client, améliorant ainsi la sécurité de notre application. Cette méthode garantit que les opérations sensibles sont traitées côté serveur, à l'abri des vulnérabilités potentielles côté client. Enfin, n'oubliez pas d'explorer [d'autres mesures de sécurité des clés d'API](/subgraphs/querying/managing-api-keys/) pour renforcer encore davantage la sécurité de vos clés d'API. diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..ccf1f043fcb1 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Agrégation de données par composition de subgraphs +sidebarTitle: Construire un subgraph composable avec plusieurs subgraphs +--- + +Tirez parti de la composition de subgraphs pour accélérer le temps de développement. Créez un subgraph de base avec les données essentielles, puis construisez d'autres subgraphs par-dessus. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Présentation + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Avantages de la composition + +La composition de subgraphs est une fonctionnalité puissante pour la mise à l'échelle, qui vous permet de.. 
: + +- Réutiliser, mélanger et combiner les données existantes +- Rationaliser le développement et les requêtes +- Utiliser plusieurs sources de données (jusqu'à cinq subgraphs sources) +- Accélérer la vitesse de synchronisation de votre subgraph +- Gérer les erreurs et optimiser la resynchronisation + +## Aperçu de l'architecture + +La configuration de cet exemple implique deux subgraphs : + +1. **Subgraph source** : Suit les données d'événements en tant qu'entités. +2. **Subgraph dépendant** : Utilise le subgraph source comme source de données. + +Vous pouvez les trouver dans les répertoires `source` et `dependent`. + +- Le **Subgraph Source** est un subgraph de base de suivi des événements qui enregistre les événements émis par les contrats concernés. +- Le **subgraph dépendant** fait référence au subgraph source en tant que source de données, en utilisant les entités de la source comme déclencheurs. + +Alors que le subgraph source est un subgraph standard, le subgraph dépendant utilise la fonction de composition de subgraphs.
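À titre d'illustration, voici à quoi pourrait ressembler la déclaration du subgraph source dans le manifeste du subgraph dépendant. Esquisse indicative uniquement : les noms, l'id de déploiement et les numéros de version sont hypothétiques ; reportez-vous au dépôt d'exemple officiel pour la syntaxe de référence.

```yaml
# Esquisse hypothétique d'un manifeste de subgraph dépendant (subgraph.yaml).
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # le subgraph source sert ici de source de données
    name: SourceSubgraph # nom hypothétique
    network: mainnet
    source:
      address: 'QmIdDeDeploiementDuSubgraphSource' # id de déploiement du subgraph source (hypothétique)
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Block
      handlers:
        - handler: handleBlock # déclenché par les entités `Block` du subgraph source
          entity: Block
      file: ./src/mapping.ts
```

Notez que le gestionnaire est déclenché par les entités émises par le subgraph source, et non par des événements onchain.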
+ +## Prérequis + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs. + +## Commencer + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Spécificités + +- Pour que cet exemple reste simple, tous les subgraphs sources n'utilisent que des gestionnaires de blocs. Cependant, dans un environnement réel, chaque subgraph source utilisera des données provenant de différents contrats intelligents.
+- Les exemples ci-dessous montrent comment importer et étendre le schéma d'un autre subgraph afin d'en améliorer les fonctionnalités. +- Chaque subgraphe source est optimisé avec une entité spécifique. +- Toutes les commandes listées installent les dépendances nécessaires, génèrent du code basé sur le schéma GraphQL, construisent le subgraph et le déploient sur votre instance locale de Graph Node. + +### Étape 1. Déployer le subgraph source de temps de bloc + +Ce premier subgraph source calcule le temps de bloc pour chaque bloc. + +- Il importe des schémas d'autres subgraphs et ajoute une entité `block` avec un champ `timestamp`, représentant l'heure à laquelle chaque bloc a été extrait. +- Il écoute les événements de la blockchain liés au temps (par exemple, les horodatages des blocs) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +Pour déployer ce subgraph localement, exécutez les commandes suivantes : + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Étape 2. Déployer le subgraph de la source de coût du bloc + +Ce deuxième subgraph source indexe le coût de chaque bloc. + +#### Principales fonctions + +- Il importe des schémas d'autres subgraphs et ajoute une entité `block` avec des champs liés aux coûts. +- Il écoute les événements de la blockchain liés aux coûts (par exemple, les frais de gaz, les coûts de transaction) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +Pour déployer ce subgraph localement, exécutez les mêmes commandes que ci-dessus. + +### Étape 3. Définition de la taille des blocs dans le subgraph source + +Ce troisième subgraph source indexe la taille de chaque bloc. Pour déployer ce subgraph localement, exécutez les mêmes commandes que ci-dessus. 
+ +#### Principales fonctions + +- Il importe les schémas existants des autres subgraphs et ajoute une entité `block` avec un champ `size` représentant la taille de chaque bloc. +- Il écoute les événements de la blockchain liés à la taille des blocs (par exemple, le stockage ou le volume) et traite ces données pour mettre à jour les entités du subgraph en conséquence. + +### Étape 4. Combinaison en Subgraph Block Stats + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Toute modification apportée à un subgraph source est susceptible de générer un nouvel ID de déploiement. +> - Veillez à mettre à jour l'ID de déploiement dans l'adresse de la source de données du manifeste Subgraph pour bénéficier des dernières modifications. +> - Tous les subgraphs sources doivent être déployés avant le déploiement du subgraph composé. + +#### Principales fonctions + +- Il fournit un modèle de données consolidé qui englobe toutes les mesures de bloc pertinentes. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Principaux points à retenir + +- Cet outil puissant vous permettra de développer vos subgraphs et de combiner plusieurs subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- Cette caractéristique permet de débloquer l'évolutivité, simplifiant ainsi l'efficacité du développement et de la maintenance. + +## Ressources supplémentaires + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- Pour ajouter des fonctionnalités avancées à votre subgraph, consultez [Fonctionnalités avancées du subgraph](/developing/creating/advanced/). 
+- Pour en savoir plus sur les agrégations, consultez [Séries chronologiques et agrégations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..37a9815532d3 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Débogage rapide et facile des subgraph à l'aide de Forks +--- + +Comme pour de nombreux systèmes traitant de grandes quantités de données, les Indexeurs de The Graph (Graph Nodes) peuvent prendre un certain temps pour synchroniser votre subgraph avec la blockchain cible. L'écart entre les changements rapides dans le but de déboguer et les longs temps d'attente nécessaires à l'indexation est extrêmement contre-productif et nous en sommes bien conscients. C'est pourquoi nous introduisons **Subgraph forking**, développé par [LimeChain](https://limechain.tech/), et dans cet article je vous montrerai comment cette fonctionnalité peut être utilisée pour accélérer considérablement le débogage du subgraph ! + +## D'accord, qu'est-ce que c'est ? + +**Le Subgraph forking** est le processus de récupération paresseuse d'entités à partir du store d'un autre subgraph (généralement un store distant). + +Dans le contexte du débogage, **Subgraph forking** vous permet de déboguer votre subgraph défaillant au bloc _X_ sans avoir besoin d'attendre la synchronisation au bloc _X_. + +## Quoi ? Comment ? + +Lorsque vous déployez un subgraph vers un Graph Node distant pour l'indexation et qu'il échoue au bloc _X_, la bonne nouvelle est que le Graph Node servira toujours les requêtes GraphQL à l'aide de son store, qui est synchronisé avec le bloc _X_. C'est formidable ! Cela signifie que nous pouvons tirer parti de ce store "à jour" pour corriger les bugs survenant lors de l'indexation du bloc _X_. 
+ +En bref, nous allons _forker le subgraph défaillant_ à partir d'un Graph Node distant qui est garanti d'avoir le subgraph indexé jusqu'au bloc _X_ afin de fournir au subgraph déployé localement et débogué au bloc _X_ une vue à jour de l'état de l'indexation. + +## S'il vous plaît, montrez-moi du code ! + +Pour rester concentré sur le débogage des subgraphs, gardons les choses simples et exécutons le [Subgraph d'exemple](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexant le contrat intelligent Ethereum Gravity. + +Voici les gestionnaires définis pour indexer `Gravatar`s, sans aucun bug : + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +Oups, comme c'est malheureux, quand je déploie mon parfait subgraph dans [Subgraph Studio](https://thegraph.com/studio/), il échoue avec l'erreur _"Gravatar not found!"_. + +La méthode habituelle pour tenter de résoudre le problème est la suivante : + +1. Apportez une modification à la source des mappings, ce qui, selon vous, résoudra le problème (même si je sais que ce ne sera pas le cas). +2. Redéployez le subgraph vers [Subgraph Studio](https://thegraph.com/studio/) (ou un autre Graph Node distant). +3. Attendez qu’il soit synchronisé. +4. S'il se casse à nouveau, revenez au point 1, sinon : Hourra !
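Pour anticiper sur la suite : lors d'un fork, le Graph Node local va chercher les entités dans le store distant via un endpoint GraphQL obtenu en ajoutant l'id du subgraph à une URL de base. Petite esquisse TypeScript, purement illustrative (les valeurs sont hypothétiques) :

```typescript
// Esquisse illustrative : l'endpoint du store distant utilisé pour le fork
// est l'URL de base (la "fork-base") suivie de l'id du subgraph.
function buildForkEndpoint(forkBase: string, subgraphId: string): string {
  // Si l'URL de base se termine déjà par "/", on concatène simplement l'id.
  return forkBase.endsWith('/') ? forkBase + subgraphId : `${forkBase}/${subgraphId}`
}

// Valeurs hypothétiques, à titre d'exemple :
const endpoint = buildForkEndpoint('https://api.thegraph.com/subgraphs/id/', 'QmIdDuSubgraphBogue')
console.log(endpoint) // → https://api.thegraph.com/subgraphs/id/QmIdDuSubgraphBogue
```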
+ +Il s'agit en fait d'un processus très proche du débogage ordinaire, mais il y a une étape qui ralentit terriblement le processus : _3. Attendez qu'il se synchronise._ + +L'utilisation du **Subgraph forking** permet d'éliminer cette étape. Voici à quoi cela ressemble : + +0. Démarrez un Graph Node local avec la **_fork-base appropriée_**. +1. Apportez une modification à la source des mappings qui, selon vous, résoudra le problème. +2. Déployez sur le Graph Node local, en **_forkant le Subgraph défaillant_** et en **_partant du bloc problématique_**. +3. S'il casse à nouveau, revenez à 1, sinon : Hourra ! + +Maintenant, vous pouvez avoir 2 questions : + +1. `fork-base` quoi ??? +2. Forker qui ?! + +Je réponds : + +1. `fork-base` est l'URL "de base", de sorte que lorsque l'_id_ du subgraph est ajouté, l'URL résultante (`<fork-base>/<subgraph-id>`) est un endpoint GraphQL valide pour le store du subgraph. +2. Forker est facile, pas besoin de transpirer : + +```bash +$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +N'oubliez pas non plus de définir le champ `dataSources.source.startBlock` dans le manifeste Subgraph au numéro du bloc problématique, afin d'éviter d'indexer des blocs inutiles et de profiter du fork ! + +Voici donc ce que je fais : + +1. Je démarre un Graph Node local ([voici comment faire](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) avec l'option `fork-base` fixée à : `https://api.thegraph.com/subgraphs/id/`, puisque je vais forker un subgraph, le subgraph bogué que j'ai déployé plus tôt, depuis [Subgraph Studio](https://thegraph.com/studio/). + +``` +$ cargo run -p graph-node --release -- \ + --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \ + --ethereum-rpc NOM_RÉSEAU:[CAPABILITIES]:URL \ + --ipfs 127.0.0.1:5001 \ + --fork-base https://api.thegraph.com/subgraphs/id/ +``` + +2.
Après une inspection minutieuse, j'ai remarqué qu'il y avait un décalage dans les représentations `id` utilisées lors de l'indexation des `Gravatar`s dans mes deux handlers. Alors que `handleNewGravatar` le convertit en hexadécimal (`event.params.id.toHex()`), `handleUpdatedGravatar` utilise un int32 (`event.params.id.toI32()`) ce qui fait paniquer `handleUpdatedGravatar` avec "Gravatar not found!". Je fais en sorte qu'ils convertissent tous les deux l'`id` en hexadécimal. +3. Après avoir fait les changements, je déploie mon Subgraph sur le Graph Node local, en **_forkant le Subgraph défaillant_** et en configurant `dataSources.source.startBlock` à `6190343` dans `subgraph.yaml` : + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. J'inspecte les logs générés par le Graph Node local et, Hourra!, tout semble fonctionner. +5. Je déploie mon subgraph maintenant exempt de bugs vers un Graph Node distant et je vis heureux jusqu'à la fin des temps ! diff --git a/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..60cb89d52da1 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Générateur de code de subgraph sécurisé +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) est un outil de génération de code qui génère un ensemble de fonctions d'aide à partir du schéma graphql d'un projet. Il garantit que toutes les interactions avec les entités de votre subgraph sont totalement sûres et cohérentes. + +## Pourquoi intégrer Subgraph Uncrashable ? + +- **Temps de fonctionnement continu**. Les entités mal gérées peuvent entraîner le plantage des subgraphs, ce qui peut perturber les projets qui dépendent de The Graph.
Mettez en place des fonctions d'aide pour rendre vos subgraphs “incrashable” et assurer la continuité des activités. + +- **Tout à fait sûr**. Les problèmes courants rencontrés dans le développement de subgraphs sont le chargement d'entités non définies, l'absence de définition ou d'initialisation de toutes les valeurs des entités et les conditions de course lors du chargement et de l'enregistrement des entités. Assurez-vous que toutes les interactions avec les entités sont complètement atomiques. + +- **Configurable par l'utilisateur**. Définissez les valeurs par défaut et configurez le niveau des contrôles de sécurité en fonction des besoins de votre projet. Des logs d'avertissement sont enregistrés, indiquant les cas de violation de la logique du subgraph, afin d'aider à résoudre le problème et de garantir l'exactitude des données. + +**Caractéristiques principales** + +- L'outil de génération de code prend en charge **tous** les types de subgraphs et est configurable pour que les utilisateurs puissent définir des valeurs par défaut saines. La génération de code utilisera cette configuration pour générer des fonctions d'aide conformes aux spécifications de l'utilisateur. + +- Le cadre comprend également un moyen (via le fichier de configuration) de créer des fonctions de définition personnalisées, mais sûres, pour des groupes de variables d'entité. De cette façon, il est impossible pour l'utilisateur de charger/utiliser une entité de graph obsolète et il est également impossible d'oublier de sauvegarder ou de définir une variable requise par la fonction. + +- Les logs d'avertissement sont enregistrés en tant que logs indiquant une violation de la logique du subgraph afin d'aider à résoudre le problème et de garantir l'exactitude des données. + +Subgraph Uncrashable peut être exécuté via une option facultative de la commande `codegen` de Graph CLI.
+ +```sh +graph codegen -u [options] [<subgraph-manifest>] +``` + +Visitez la [documentation sur les subgraphs incrashable](https://float-capital.github.io/float-subgraph-uncrashable/docs/) ou regardez ce [tutoriel vidéo](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) pour en savoir plus et commencer à développer des subgraphs plus sûrs. diff --git a/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx new file mode 100644 index 000000000000..0d51588d5ad4 --- /dev/null +++ b/website/src/pages/fr/subgraphs/guides/transfer-to-the-graph.mdx @@ -0,0 +1,104 @@ +--- +title: Transfert à The Graph +--- + +Mettez rapidement à niveau vos subgraphs de n'importe quelle plate-forme vers [le réseau décentralisé de The Graph](https://thegraph.com/networks/). + +## Avantages du passage à The Graph + +- Utilisez le même subgraph que vos applications utilisent déjà avec une migration sans temps mort. +- Améliorez la fiabilité grâce à un réseau mondial pris en charge par plus de 100 Indexers. +- Bénéficiez d'une assistance rapide pour Subgraphs 24h/24, 7j/7, avec une équipe d'ingénieurs sur appel. + +## Mettez à jour votre Subgraph vers The Graph en 3 étapes simples + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Configurer votre environnement Studio + +### Créer un subgraph dans Subgraph Studio + +- Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+ +> Remarque : après la publication, le nom du subgraph sera modifiable, mais cela nécessitera à chaque fois une action onchain, c'est pourquoi il faut le nommer correctement. + +### Installer Graph CLI + +Vous devez avoir Node.js et un gestionnaire de paquets de votre choix (`npm` ou `pnpm`) installés pour utiliser Graph CLI. Vérifiez la version la [plus récente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) de CLI. + +Sur votre machine locale, exécutez la commande suivante : + +Utilisation de [npm](https://www.npmjs.com/) : + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Utilisez la commande suivante pour créer un subgraph dans Studio à l'aide de la CLI : + +```sh +graph init --product subgraph-studio +``` + +### Authentifiez votre subgraph + +Dans Graph CLI, utilisez la commande auth vue dans Subgraph Studio : + +```sh +graph auth +``` + +## 2. Déployez votre Subgraph sur Studio + +Si vous avez votre code source, vous pouvez facilement le déployer dans Studio. Si vous ne l'avez pas, voici un moyen rapide de déployer votre subgraph. + +Dans Graph CLI, exécutez la commande suivante : + +```sh +graph deploy --ipfs-hash <votre-hash-ipfs> +``` + +> **Note:** Chaque subgraph a un hash IPFS (Deployment ID), qui ressemble à ceci : "Qmasdfad...". Pour déployer, il suffit d'utiliser cet **IPFS hash**. Vous serez invité à entrer une version (par exemple, v0.0.1). + +## 3. Publier votre Subgraph sur The Graph Network + +![bouton de publication](/img/publish-sub-transfer.png) + +### Interroger votre Subgraph + +> Pour inciter environ 3 Indexeurs à interroger votre subgraph, il est recommandé de curer au moins 3 000 GRT. Pour en savoir plus sur la curation, consultez [Curation](/resources/roles/curating/) sur The Graph.
+ +Vous pouvez commencer à [interroger](/subgraphs/querying/introduction/) n'importe quel subgraph en envoyant une requête GraphQL dans l'endpoint URL de requête du subgraph, qui se trouve en haut de sa page d'exploration dans Subgraph Studio. + +#### Exemple + +[Subgraph Ethereum CryptoPunks](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) par Messari : + +![L'URL de requête](/img/cryptopunks-screenshot-transfer.png) + +L'URL de la requête pour ce subgraph est la suivante : + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**votre-propre-clé-Api**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK +``` + +Maintenant, il vous suffit de remplir **votre propre clé API** pour commencer à envoyer des requêtes GraphQL à ce point de terminaison. + +### Obtenir votre propre clé API + +Vous pouvez créer des clés API dans Subgraph Studio sous le menu "API Keys" en haut de la page : + +![clés API](/img/Api-keys-screenshot.png) + +### Surveiller l'état du Subgraph + +Une fois la mise à niveau effectuée, vous pouvez accéder à vos subgraphs et les gérer dans [Subgraph Studio](https://thegraph.com/studio/) et explorer tous les subgraphs dans [The Graph Explorer](https://thegraph.com/networks/). + +### Ressources supplémentaires + +- Pour créer et publier rapidement un nouveau subgraph, consultez le [Démarrage Rapide](/subgraphs/quick-start/). +- Pour découvrir toutes les façons d'optimiser et de personnaliser votre subgraph pour obtenir de meilleures performances, lisez davantage sur [la création d'un subgraph ici](/developing/creating-a-subgraph/).
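Pour illustrer l'étape « remplir votre propre clé API », voici une esquisse TypeScript qui assemble l'URL de la passerelle et le corps d'une requête GraphQL. La clé API et la requête sont des valeurs d'exemple hypothétiques :

```typescript
// Esquisse : préparation d'une requête GraphQL vers la passerelle de The Graph.
const GATEWAY = 'https://gateway-arbitrum.network.thegraph.com/api'

function buildGatewayRequest(apiKey: string, subgraphId: string, query: string) {
  return {
    // L'URL suit le schéma décrit plus haut : /api/<clé>/subgraphs/id/<id>
    url: `${GATEWAY}/${apiKey}/subgraphs/id/${subgraphId}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

const { url, options } = buildGatewayRequest(
  'VOTRE_PROPRE_CLE_API', // valeur d'exemple, à remplacer par votre clé
  'HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK', // Subgraph CryptoPunks (Messari)
  '{ _meta { block { number } } }', // requête d'exemple
)
// L'envoi se ferait ensuite, par exemple, avec : fetch(url, options)
```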
diff --git a/website/src/pages/fr/subgraphs/querying/best-practices.mdx b/website/src/pages/fr/subgraphs/querying/best-practices.mdx index 7840723ca03d..bbc8135430f6 100644 --- a/website/src/pages/fr/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/fr/subgraphs/querying/best-practices.mdx @@ -2,19 +2,19 @@ title: Bonnes pratiques d'interrogation --- -The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. +The Graph offre un moyen décentralisé d'interroger les données des blockchains. Ses données sont exposées par le biais d'une API GraphQL, ce qui facilite l'interrogation avec le langage GraphQL. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Apprenez les règles essentielles du langage GraphQL et les meilleures pratiques pour optimiser votre subgraph. --- ## Interroger une API GraphQL -### The Anatomy of a GraphQL Query +### Anatomie d'une requête GraphQL Contrairement à l'API REST, une API GraphQL repose sur un schéma qui définit les requêtes qui peuvent être effectuées. -For example, a query to get a token using the `token` query will look as follows: +Par exemple, une requête pour obtenir un jeton en utilisant la requête `token` ressemblera à ce qui suit : ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +qui retournera la réponse JSON prévisible suivante (_en passant la bonne valeur de la variable `$id`_): ```json { @@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). +Les requêtes GraphQL utilisent le langage GraphQL, qui est défini dans [une spécification](https://spec.graphql.org/).
-The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +La requête `GetToken` ci-dessus est composée de plusieurs parties de langage (remplacées ci-dessous par des espaces réservés `[...]`) : ```graphql query [operationName]([variableName]: [variableType]) { @@ -50,33 +50,33 @@ query [operationName]([variableName]: [variableType]) { } ``` -## Rules for Writing GraphQL Queries +## Règles d'écriture des requêtes GraphQL -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/). +- Chaque `queryName` ne doit être utilisé qu'une seule fois par opération. +- Chaque `champ` ne doit être utilisé qu'une seule fois dans une sélection (nous ne pouvons pas interroger `id` deux fois sous `token`) +- Certains `champs` ou certaines requêtes (comme `tokens`) renvoient des types complexes qui nécessitent une sélection de sous-champs. Ne pas fournir de sélection quand cela est attendu (ou en fournir une quand cela n'est pas attendu - par exemple, sur `id`) lèvera une erreur. Pour connaître un type de champ, veuillez vous référer à [Graph Explorer](/subgraphs/explorer/). - Toute variable affectée à un argument doit correspondre à son type. - Dans une liste de variables donnée, chacune d’elles doit être unique. - Toutes les variables définies doivent être utilisées. -> Note: Failing to follow these rules will result in an error from The Graph API. +> Remarque : le non-respect de ces règles entraînera une erreur de la part de The Graph API.
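Pour illustrer ces règles, voici une esquisse TypeScript qui sérialise une requête conforme (nom d'opération, variable déclarée, typée et utilisée) telle qu'elle serait envoyée sur HTTP ; la valeur de la variable est un exemple hypothétique :

```typescript
// Esquisse : sérialisation d'une requête GraphQL valide avant envoi.
const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// Le corps JSON standard attendu par un endpoint GraphQL :
// la requête statique d'un côté, les valeurs des variables de l'autre.
const body = JSON.stringify({
  operationName: 'GetToken',
  query,
  variables: { id: '0x123' }, // valeur d'exemple
})

console.log(JSON.parse(body).operationName) // → GetToken
```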
-For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/). +Pour une liste complète des règles avec des exemples de code, consultez le [Guide des validations GraphQL](/resources/migration-guides/graphql-validations-migration-guide/). ### Envoi d'une requête à une API GraphQL -GraphQL is a language and set of conventions that transport over HTTP. +GraphQL est un langage et un ensemble de conventions qui se transportent sur HTTP. -It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). +Cela signifie que vous pouvez interroger une API GraphQL en utilisant le standard `fetch` (nativement ou via `@whatwg-node/fetch` ou `isomorphic-fetch`). -However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +Cependant, comme mentionné dans ["Interrogation à partir d'une application"](/subgraphs/querying/from-an-application/), il est recommandé d'utiliser `graph-client`, qui supporte les caractéristiques uniques suivantes : -- Gestion des subgraphs inter-chaînes : interrogation à partir de plusieurs subgraphs en une seule requête -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Traitement des subgraphs multi-chaînes : Interrogation de plusieurs subgraphs en une seule requête +- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Pagination automatique](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Résultat entièrement typé -Here's how to query The Graph with `graph-client`: +Voici
comment interroger The Graph avec `graph-client` : ```tsx import { execute } from '../.graphclient' @@ -100,7 +100,7 @@ async function main() { main() ``` -More GraphQL client alternatives are covered in ["Querying from an Application"](/subgraphs/querying/from-an-application/). +D'autres alternatives au client GraphQL sont abordées dans ["Requête à partir d'une application"](/subgraphs/querying/from-an-application/). --- @@ -108,7 +108,7 @@ More GraphQL client alternatives are covered in ["Querying from an Application"] ### Écrivez toujours des requêtes statiques -A common (bad) practice is to dynamically build query strings as follows: +Une (mauvaise) pratique courante consiste à construire dynamiquement des chaînes de requête comme suit : ```tsx const id = params.id @@ -124,14 +124,14 @@ query GetToken { // Execute query... ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +Bien que l'extrait ci-dessus produise une requête GraphQL valide, **il présente de nombreux inconvénients** : -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- cela rend **plus difficile la compréhension** de la requête dans son ensemble +- les développeurs sont **responsables de l'assainissement de l'interpolation de la chaîne de caractères** +- ne pas envoyer les valeurs des variables dans le cadre des paramètres de la requête **empêche la mise en cache éventuelle côté serveur** +- il **empêche les outils d'analyser statiquement la requête** (ex : Linter, ou les outils de génération de types) -For this reason, it is recommended to always write queries as static strings: +C'est pourquoi il est recommandé de toujours écrire les requêtes sous
forme de chaînes de caractères statiques : ```tsx import { execute } from 'your-favorite-graphql-client' @@ -153,18 +153,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +Cela présente de **nombreux avantages** : -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- Des requêtes **faciles à lire et à maintenir** +- Le **serveur GraphQL s’occupe de la validation des variables** +- **Les variables peuvent être mises en cache** au niveau du serveur +- **Les requêtes peuvent être analysées statiquement par des outils** (plus d'informations à ce sujet dans les sections suivantes) -### How to include fields conditionally in static queries +### Comment inclure des champs de manière conditionnelle dans des requêtes statiques -You might want to include the `owner` field only on a particular condition. +Il se peut que vous souhaitiez inclure le champ `owner` uniquement pour une condition particulière. -For this, you can leverage the `@include(if:...)` directive as follows: +Pour cela, vous pouvez utiliser la directive `@include(if: ...)` comme suit : ```tsx import { execute } from 'your-favorite-graphql-client' @@ -187,18 +187,18 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> Note : La directive opposée est `@skip(if: ...)`. -### Ask for what you want +### Demandez ce que vous voulez -GraphQL became famous for its "Ask for what you want" tagline. +GraphQL est devenu célèbre grâce à son slogan "Ask for what you want" (demandez ce que vous voulez). -For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually.
+Pour cette raison, il n'existe aucun moyen, dans GraphQL, d'obtenir tous les champs disponibles sans avoir à les lister individuellement. - Lorsque vous interrogez les API GraphQL, pensez toujours à interroger uniquement les champs qui seront réellement utilisés. -- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- Assurez-vous que les requêtes ne récupèrent que le nombre d'entités dont vous avez réellement besoin. Par défaut, les requêtes récupèrent 100 entités dans une collection, ce qui est généralement beaucoup plus que ce qui sera réellement utilisé, par exemple pour l'affichage à l'utilisateur. Cela s'applique non seulement aux collections de premier niveau d'une requête, mais plus encore aux collections imbriquées d'entités. -For example, in the following query: +Par exemple, dans la requête suivante : ```graphql query listTokens { @@ -213,15 +213,15 @@ query listTokens { } ``` -The response could contain 100 transactions for each of the 100 tokens. +La réponse pourrait contenir 100 transactions pour chacun des 100 jetons. -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +Si l'application n'a besoin que de 10 transactions, la requête doit explicitement définir `first: 10` dans le champ transactions. -### Use a single query to request multiple records +### Utiliser une seule requête pour demander plusieurs enregistrements -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +Par défaut, les subgraphs ont une entité singulière pour un enregistrement. 
Pour plusieurs enregistrements, utilisez les entités plurielles et le filtre : `where: {id_in:[X,Y,Z]}` ou `where: {volume_gt:100000}` -Example of inefficient querying: +Exemple de requête inefficace : ```graphql query SingleRecord { @@ -238,7 +238,7 @@ query SingleRecord { } ``` -Example of optimized querying: +Exemple de requête optimisée : ```graphql query ManyRecords { @@ -249,9 +249,9 @@ query ManyRecords { } ``` -### Combine multiple queries in a single request +### Combiner plusieurs requêtes en une seule -Your application might require querying multiple types of data as follows: +Votre application peut nécessiter l'interrogation de plusieurs types de données, comme suit : ```graphql import { execute } from "your-favorite-graphql-client" @@ -281,9 +281,9 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +Bien que cette mise en œuvre soit tout à fait valable, elle nécessitera deux allers-retours avec l'API GraphQL. -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +Heureusement, il est également possible d'envoyer plusieurs requêtes dans la même requête GraphQL, comme suit : ```graphql import { execute } from "your-favorite-graphql-client" @@ -304,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. +Cette approche **améliore les performances globales** en réduisant le temps passé sur le réseau (vous évite un aller-retour vers l'API) et fournit une **mise en œuvre plus concise**. ### Tirer parti des fragments GraphQL -A helpful feature to write GraphQL queries is GraphQL Fragment. +Une fonctionnalité utile pour écrire des requêtes GraphQL est GraphQL Fragment. 
-Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`):
+En regardant la requête suivante, vous remarquerez que certains champs sont répétés dans plusieurs ensembles de sélection (`{ ... }`) :

```graphql
query {
@@ -330,12 +330,12 @@ query {
}
```

-Such repeated fields (`id`, `active`, `status`) bring many issues:
+Ces champs répétés (`id`, `active`, `status`) posent de nombreux problèmes :

-- More extensive queries become harder to read.
-- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.
+- Les requêtes plus longues deviennent plus difficiles à lire.
+- Lorsque l'on utilise des outils qui génèrent des types TypeScript basés sur des requêtes (_plus d'informations à ce sujet dans la dernière section_), `newDelegate` et `oldDelegate` donneront lieu à deux interfaces inline distinctes.

-A refactored version of the query would be the following:
+Une version remaniée de la requête serait la suivante :

```graphql
query {
@@ -359,15 +359,15 @@ fragment DelegateItem on Transcoder {
}
```

-Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation.
+L'utilisation de GraphQL `fragment` améliorera la lisibilité (en particulier à grande échelle) et permettra une meilleure génération de types TypeScript.

-When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_).
+Lorsque l'on utilise l'outil de génération de types, la requête ci-dessus génère un type `DelegateItemFragment` approprié (_voir la dernière section "Outils"_).
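À titre d'esquisse (la forme exacte et le nom du type dépendent de l'outil de génération utilisé, les noms ci-dessous sont donc indicatifs), voici comment un tel type de fragment peut être réutilisé, au lieu de deux interfaces inline distinctes :

```ts
// Esquisse : forme indicative du type que l'outil de génération pourrait
// produire pour le fragment DelegateItem (champs id, active, status).
type DelegateItemFragment = {
  id: string
  active: boolean
  status: string
}

// Le même type décrit `newDelegate` et `oldDelegate` : une seule définition
// réutilisable au lieu de deux interfaces inline distinctes.
function formatDelegate(delegate: DelegateItemFragment): string {
  return `${delegate.id} (${delegate.status}${delegate.active ? ', actif' : ''})`
}
```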
### Bonnes pratiques et erreurs à éviter avec les fragments GraphQL

### La base du fragment doit être un type

-A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**:
+Un fragment ne peut pas être basé sur un type non applicable, en bref, **sur un type n'ayant pas de champs** :

```graphql
fragment MyFragment on BigInt {
@@ -375,11 +375,11 @@ fragment MyFragment on BigInt {
}
```

-`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base.
+`BigInt` est un **scalaire** (type natif "plain") qui ne peut pas être utilisé comme base d'un fragment.

#### Comment diffuser un fragment

-Fragments are defined on specific types and should be used accordingly in queries.
+Les fragments sont définis pour des types spécifiques et doivent être utilisés en conséquence dans les requêtes.

L'exemple:

@@ -402,20 +402,20 @@ fragment VoteItem on Vote {
}
```

-`newDelegate` and `oldDelegate` are of type `Transcoder`.
+`newDelegate` et `oldDelegate` sont de type `Transcoder`.

-It is not possible to spread a fragment of type `Vote` here.
+Il n'est pas possible de diffuser un fragment de type `Vote` ici.

#### Définir Fragment comme une unité commerciale atomique de données

-GraphQL `Fragment`s must be defined based on their usage.
+Les `Fragment` GraphQL doivent être définis en fonction de leur utilisation.

-For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient.
+Pour la plupart des cas d'utilisation, la définition d'un fragment par type (dans le cas de l'utilisation répétée de champs ou de la génération de types) est suffisante.

-Here is a rule of thumb for using fragments:
+Voici une règle empirique pour l'utilisation des fragments :

-- When fields of the same type are repeated in a query, group them in a `Fragment`.
-- When similar but different fields are repeated, create multiple fragments, for instance:
+- Lorsque des champs de même type sont répétés dans une requête, les regrouper dans un `Fragment`.
+- Lorsque des champs similaires mais différents se répètent, créer plusieurs fragments, par exemple :

```graphql
# fragment de base (utilisé principalement pour les listes)
@@ -438,51 +438,51 @@ fragment VoteWithPoll on Vote {

---

-## The Essential Tools
+## Les outils essentiels

### Explorateurs Web GraphQL

-Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries.
+Itérer sur des requêtes en les exécutant dans votre application peut s'avérer fastidieux. Pour cette raison, n'hésitez pas à utiliser [Graph Explorer](https://thegraph.com/explorer) pour tester vos requêtes avant de les ajouter à votre application. Graph Explorer vous fournira un terrain de jeu GraphQL préconfiguré pour tester vos requêtes.

-If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql).
+Si vous recherchez un moyen plus souple de déboguer/tester vos requêtes, d'autres outils web similaires sont disponibles, tels que [Altair](https://altairgraphql.dev/) et [GraphiQL](https://graphiql-online.com/graphiql).

### Linting GraphQL

-In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools.
+Afin de respecter les meilleures pratiques et les règles syntaxiques mentionnées ci-dessus, il est fortement recommandé d'utiliser les outils de workflow et d'IDE suivants.
**GraphQL ESLint**

-[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort.
+[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) vous aidera à rester au fait des meilleures pratiques GraphQL sans effort.

-[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as:
+[La configuration "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) permet d'appliquer des règles essentielles telles que :

-- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type?
-- `@graphql-eslint/no-unused variables`: should a given variable stay unused?
+- `@graphql-eslint/fields-on-correct-type` : un champ est-il utilisé sur un type correct ?
+- `@graphql-eslint/no-unused-variables` : une variable donnée doit-elle rester inutilisée ?
- et plus !

-This will allow you to **catch errors without even testing queries** on the playground or running them in production!
+Cela vous permettra de **détecter les erreurs sans même tester les requêtes** sur le terrain de jeu ou les exécuter en production !
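À titre indicatif, une configuration ESLint minimale pour activer ces règles sur vos documents `.graphql` pourrait ressembler à ceci (esquisse inspirée du guide de démarrage de GraphQL ESLint ; adaptez les chemins et le schéma à votre projet) :

```javascript
// .eslintrc.js — esquisse de configuration, à adapter à votre projet
module.exports = {
  overrides: [
    {
      // N'applique graphql-eslint qu'aux documents GraphQL
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      // Active l'ensemble de règles "operations-recommended" mentionné ci-dessus
      extends: ['plugin:@graphql-eslint/operations-recommended'],
    },
  ],
}
```

Les requêtes inlinées dans des fichiers `.js`/`.ts` peuvent aussi être analysées via le processeur fourni par le plugin, selon la documentation de GraphQL ESLint.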
### Plugins IDE

-**VSCode and GraphQL**
+**VSCode et GraphQL**

-The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get:
+L'[extension GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) est un excellent complément à votre workflow de développement pour obtenir :

-- Syntax highlighting
-- Autocomplete suggestions
-- Validation against schema
+- Coloration syntaxique
+- Suggestions d'auto-complétion
+- Validation par rapport au schéma
- Snippets
-- Go to definition for fragments and input types
+- Aller à la définition des fragments et des types d'entrée

-If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly.
+Si vous utilisez `graphql-eslint`, l'[extension ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) est indispensable pour visualiser correctement les erreurs et les avertissements dans votre code.

-**WebStorm/Intellij and GraphQL**
+**WebStorm/Intellij et GraphQL**

-The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:
+Le [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) améliorera considérablement votre expérience lorsque vous travaillez avec GraphQL en fournissant :

-- Syntax highlighting
-- Autocomplete suggestions
-- Validation against schema
+- Coloration syntaxique
+- Suggestions d'auto-complétion
+- Validation par rapport au schéma
- Snippets

-For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
+Pour plus d'informations sur ce sujet, consultez l'[article WebStorm](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) qui présente toutes les principales fonctionnalités du plugin. diff --git a/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx b/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx index 7c1f4526f7dc..8691a3b6ba86 100644 --- a/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/fr/subgraphs/querying/distributed-systems.mdx @@ -29,22 +29,22 @@ Il est difficile de raisonner sur les implications des systèmes distribués, ma ## Demande de données actualisées -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph fournit l'API `block: { number_gte: $minBlock }` qui assure que la réponse est pour un seul bloc égal ou supérieur à `$minBlock`. Si la requête est faite à une instance de `graph-node` et que le bloc min n'est pas encore synchronisé, `graph-node` retournera une erreur. Si `graph-node` a synchronisé le bloc min, il exécutera la réponse pour le dernier bloc. Si la requête est faite à une passerelle Edge & Node, la passerelle filtrera tous les Indexeurs qui n'ont pas encore synchronisé le bloc min et fera la requête pour le dernier bloc que l'Indexeur a synchronisé. -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. 
Here is an example: +Nous pouvons utiliser `number_gte` pour nous assurer que le temps ne recule jamais lors de l'interrogation des données dans une boucle. Voici un exemple : ```javascript -/// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. +/// Met à jour la variable protocol.paused avec la dernière valeur +/// connue dans une boucle en la récupérant à l'aide de The Graph. async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. + // Il n'y a pas de problème à commencer avec minBlock à 0. La requête sera servie + // en utilisant le dernier bloc disponible. Définir minBlock à 0 + // revient à ne pas utiliser cet argument. let minBlock = 0 for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. + // Programmer une promesse qui sera prête une fois que + // le prochain bloc Ethereum sera probablement disponible. const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,10 +65,10 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO : Faire quelque chose avec les données de réponse ici au lieu de les enregistrer. console.log(response.protocol.paused) - // Sleep to wait for the next block + // Dormir pour attendre le bloc suivant await nextBlock } } @@ -78,17 +78,17 @@ async function updateProtocolPaused() { Un autre cas d'utilisation est la récupération d'un grand ensemble ou, plus généralement, la récupération d'éléments liés entre plusieurs requêtes. Contrairement au cas des sondages (où la cohérence souhaitée était d'avancer dans le temps), la cohérence souhaitée est pour un seul point dans le temps. 
-Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block.
+Ici, nous utiliserons l'argument `block: { hash: $blockHash }` afin de rattacher tous nos résultats au même bloc.

```javascript
-/// Gets a list of domain names from a single block using pagination
+/// Obtient une liste de noms de domaine à partir d'un seul bloc en utilisant la pagination
async function getDomainNames() {
-  // Set a cap on the maximum number of items to pull.
+  // Fixer un plafond pour le nombre maximum d'éléments à récupérer.
  let pages = 5
  const perPage = 1000

-  // The first query will get the first page of results and also get the block
-  // hash so that the remainder of the queries are consistent with the first.
+  // La première requête obtiendra la première page de résultats ainsi que le hash du bloc,
+  // afin que les autres requêtes soient cohérentes avec la première.
  const listDomainsQuery = `
    query ListDomains($perPage: Int!) {
      domains(first: $perPage) {
@@ -107,9 +107,9 @@ async function getDomainNames() {
  let blockHash = data._meta.block.hash
  let query

-  // Continue fetching additional pages until either we run into the limit of
-  // 5 pages total (specified above) or we know we have reached the last page
-  // because the page has fewer entities than a full page.
+  // Continuer à récupérer des pages supplémentaires jusqu'à ce que nous atteignions la limite de
+  // 5 pages au total (spécifiée ci-dessus) ou jusqu'à ce que nous sachions que nous avons atteint la dernière page
+  // parce que la page contient moins d'entités qu'une page complète.
while (data.domains.length == perPage && --pages) { let lastID = data.domains[data.domains.length - 1].id query = ` @@ -122,7 +122,7 @@ async function getDomainNames() { data = await graphql(query, { perPage, lastID, blockHash }) - // Accumulate domain names into the result + // Accumuler les noms de domaine dans le résultat for (domain of data.domains) { result.push(domain.name) } diff --git a/website/src/pages/fr/subgraphs/querying/from-an-application.mdx b/website/src/pages/fr/subgraphs/querying/from-an-application.mdx index d86768f27d33..d778cec92320 100644 --- a/website/src/pages/fr/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/fr/subgraphs/querying/from-an-application.mdx @@ -1,53 +1,54 @@ --- title: Interrogation à partir d'une application +sidebarTitle: Interroger à partir d'une application --- -Learn how to query The Graph from your application. +Apprenez à interroger The Graph à partir de votre application. -## Getting GraphQL Endpoints +## Obtenir des endpoints GraphQL -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. +Au cours du processus de développement, vous recevrez un Endpoint de l'API GraphQL à deux étapes différentes : l'une pour les tests dans Subgraph Studio, et l'autre pour effectuer des requêtes sur The Graph Network en production. 
-### Subgraph Studio Endpoint
+### Endpoint Subgraph Studio

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+Après avoir déployé votre subgraph dans [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), vous recevrez un endpoint qui ressemble à ceci :

```
https://api.studio.thegraph.com/query///
```

-> This endpoint is intended for testing purposes **only** and is rate-limited.
+> Cet endpoint est destiné à des fins de test **uniquement** et son débit est limité.

-### The Graph Network Endpoint
+### Endpoint de The Graph Network

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+Après avoir publié votre subgraph sur le réseau, vous recevrez un endpoint qui ressemble à ceci :

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> Cet endpoint est destiné à une utilisation active sur le réseau. Il vous permet d'utiliser diverses bibliothèques client GraphQL pour interroger le Subgraph et alimenter votre application en données indexées.
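Pour illustrer l'utilisation de ces endpoints, voici une esquisse d'interrogation en JavaScript pur avec `fetch`, sans bibliothèque cliente dédiée (l'URL et la requête sont des exemples fictifs, à remplacer par les vôtres) :

```javascript
// Construit les options d'une requête GraphQL standard : un POST JSON
// contenant la requête et ses variables.
function buildGraphqlRequest(query, variables = {}) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

// Interroge l'endpoint et renvoie le champ `data`, ou lève une erreur
// si le serveur renvoie des erreurs GraphQL.
async function querySubgraph(endpoint, query, variables) {
  const response = await fetch(endpoint, buildGraphqlRequest(query, variables))
  const { data, errors } = await response.json()
  if (errors) throw new Error(errors.map((e) => e.message).join('; '))
  return data
}
```

La même fonction s'utilise indifféremment avec l'endpoint de test de Subgraph Studio ou avec celui de The Graph Network.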
-## Using Popular GraphQL Clients +## Utilisation de clients GraphQL populaires ### Graph Client -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph fournit son propre client GraphQL, `graph-client`, qui prend en charge des fonctionnalités uniques telles que : -- Gestion des subgraphs inter-chaînes : interrogation à partir de plusieurs subgraphs en une seule requête -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Traitement des subgraphs multi-chaînes : Interrogation de plusieurs subgraphs en une seule requête +- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [Pagination automatique](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Résultat entièrement typé -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. +> Remarque : `graph-client` est intégré à d'autres clients GraphQL populaires tels qu'Apollo et URQL, qui sont compatibles avec des environnements tels que React, Angular, Node.js et React Native. Par conséquent, l'utilisation de `graph-client` vous fournira une expérience améliorée pour travailler avec The Graph. 
-### Fetch Data with Graph Client +### Récupérer des données avec Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Voyons comment récupérer les données d'un subgraph avec `graph-client` : #### Étape 1 -Install The Graph Client CLI in your project: +Installez The Graph Client CLI dans votre projet : ```sh yarn add -D @graphprotocol/client-cli @@ -57,7 +58,7 @@ npm install --save-dev @graphprotocol/client-cli #### Étape 2 -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +Définissez votre requête dans un fichier `.graphql` (ou dans votre fichier `.js` ou `.ts`) : ```graphql query ExampleQuery { @@ -86,7 +87,7 @@ query ExampleQuery { #### Étape 3 -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Créez un fichier de configuration (appelé `.graphclientrc.yml`) et pointez vers vos endpoints GraphQL fournis par The Graph, par exemple : ```yaml # .graphclientrc.yml @@ -104,22 +105,22 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### Étape 4 -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +Exécutez la commande CLI suivante de The Graph Client pour générer un code JavaScript typé et prêt à l'emploi : ```sh graphclient build ``` -#### Step 5 +#### Étape 5 -Update your `.ts` file to use the generated typed GraphQL documents: +Mettez à jour votre fichier `.ts` pour utiliser les documents GraphQL typés générés : ```tsx import React, { useEffect } from 'react' // ... -// we import types and typed-graphql document from the generated code (`..graphclient/`) +// nous importons les types et le document typed-graphql du code généré (`..graphclient/`) import { ExampleQueryDocument, ExampleQueryQuery, execute } from '../.graphclient' function App() { @@ -134,7 +135,7 @@ function App() {
logo -

Graph Client Example

+

Exemple de Graph Client

{data && (
@@ -152,27 +153,27 @@ function App() { export default App ``` -> **Important Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **Note importante:** `graph-client` est parfaitement intégré avec d'autres clients GraphQL tels que Apollo client, URQL, ou React Query ; vous pouvez [trouver des exemples dans le dépôt officiel](https://github.com/graphprotocol/graph-client/tree/main/examples). Cependant, si vous choisissez d'aller avec un autre client, gardez à l'esprit que **vous ne serez pas en mesure d'utiliser Cross-chain Subgraph Handling (La manipulation cross-chain des subgraphs) ou Automatic Pagination (La pagination automatique), qui sont des fonctionnalités essentielles pour interroger The Graph**. -### Apollo Client +### Le client Apollo -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android. +[Apollo client] (https://www.apollographql.com/docs/) est un client GraphQL commun sur les écosystèmes front-end. Il est disponible pour React, Angular, Vue, Ember, iOS et Android. 
-Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +Bien qu'il s'agisse du client le plus lourd, il possède de nombreuses fonctionnalités permettant de construire des interfaces utilisateur avancées sur GraphQL : -- Advanced error handling +- Gestion avancée des erreurs - Pagination -- Data prefetching -- Optimistic UI -- Local state management +- Pré-récupération des données +- UI optimiste +- Gestion locale de l'État -### Fetch Data with Apollo Client +### Récupérer des données avec Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Voyons comment récupérer les données d'un subgraph avec le client Apollo : #### Étape 1 -Install `@apollo/client` and `graphql`: +Installez `@apollo/client` et `graphql` : ```sh npm install @apollo/client graphql @@ -180,7 +181,7 @@ npm install @apollo/client graphql #### Étape 2 -Query the API with the following code: +Interrogez l'API avec le code suivant : ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +216,7 @@ client #### Étape 3 -To use variables, you can pass in a `variables` argument to the query: +Pour utiliser des variables, vous pouvez passer un argument `variables` à la requête : ```javascript const tokensQuery = ` @@ -246,22 +247,22 @@ client }) ``` -### URQL Overview +### Vue d'ensemble d'URQL -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL] (https://formidable.com/open-source/urql/) est disponible dans les environnements Node.js, React/Preact, Vue et Svelte, avec des fonctionnalités plus avancées : - Système de cache flexible - Conception extensible (facilitant l’ajout de nouvelles fonctionnalités par-dessus) - Offre légère (~ 5 fois plus légère que Apollo Client) - Prise en charge des téléchargements de fichiers et du mode hors ligne -### Fetch data with URQL +### Récupérer des données 
avec URQL

-Let's look at how to fetch data from a subgraph with URQL:
+Voyons comment récupérer des données d'un subgraph avec URQL :

#### Étape 1

-Install `urql` and `graphql`:
+Installez `urql` et `graphql` :

```sh
npm install urql graphql
@@ -269,7 +270,7 @@ npm install urql graphql

#### Étape 2

-Query the API with the following code:
+Interrogez l'API avec le code suivant :

```javascript
import { createClient } from 'urql'
diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/README.md b/website/src/pages/fr/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..394465ec1712 100644
--- a/website/src/pages/fr/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/fr/subgraphs/querying/graph-client/README.md
@@ -1,44 +1,44 @@
-# The Graph Client Tools
+# Les outils de The Graph Client

-This repo is the home for [The Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments).
+Ce répertoire abrite les outils côté consommateur de [The Graph](https://thegraph.com) (pour les environnements navigateur et NodeJS).

-## Background
+## Contexte

-The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications.
+Les outils fournis dans ce repo sont destinés à enrichir et à étendre le DX, et à ajouter la couche supplémentaire requise pour les dApps afin de mettre en œuvre des applications distribuées.

-Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time.
+Les développeurs qui consomment des données à partir de l'API GraphQL de [The Graph](https://thegraph.com) ont souvent besoin d'outils annexes pour faciliter la consommation des données, ainsi que d'outils permettant d'utiliser plusieurs Indexeurs en même temps.
-## Features and Goals +## Fonctionnalités et objectifs -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. +Cette bibliothèque est destinée à simplifier l'aspect réseau de la consommation de données pour les dApps. Les outils fournis dans ce dépôt sont destinés à être exécutés au moment de la construction, afin de rendre l'exécution plus rapide et plus performante au moment de l'exécution. -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! +> Les outils fournis dans ce repo peuvent être utilisés de manière autonome, mais vous pouvez également les utiliser avec n'importe quel client GraphQL existant ! -| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| 
✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| Status | Fonctionnalité | Notes |
+| :----: | --- | --- |
+| ✅ | Indexeurs multiples | sur la base de stratégies d'extraction |
+| ✅ | Stratégies d'extraction | timeout, retry, fallback, race, highestValue |
+| ✅ | Validations et optimisations du temps de construction | |
+| ✅ | Composition côté client | avec un planificateur d'exécution amélioré (basé sur GraphQL-Mesh) |
+| ✅ | Gestion des subgraphs multi-chaînes | Utiliser des subgraphs similaires comme source unique |
+| ✅ | Exécution brute (mode autonome) | sans client GraphQL intégré |
+| ✅ | Mutations locales (côté client) | |
+| ✅ | [Suivi automatique des blocs](../packages/block-tracking/README.md) | suivi des numéros de blocs [tel que décrit ici](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Pagination automatique](../packages/auto-pagination/README.md) | effectuer plusieurs requêtes en un seul appel pour récupérer plus que la limite de l'Indexeur |
+| ✅ | Intégration avec `@apollo/client` | |
+| ✅ | Intégration avec `urql` | |
+| ✅ | Prise en charge de TypeScript | avec GraphQL Codegen et `TypedDocumentNode` intégrés |
+| ✅ | [`@live` queries](./live.md) | basées sur le polling |

-> You can find an [extended architecture design here](./architecture.md)
+> Vous pouvez trouver un [modèle d'architecture étendu ici](./architecture.md)

-## Getting Started
+## Introduction

-You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
+Vous pouvez suivre [l'épisode 45 de `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) pour en savoir plus
sur Graph Client : [![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client)

-To get started, make sure to install [The Graph Client CLI] in your project:
+Pour commencer, assurez-vous d'installer [The Graph Client CLI] dans votre projet :

```sh
yarn add -D @graphprotocol/client-cli
@@ -46,9 +46,9 @@ yarn add -D @graphprotocol/client-cli
npm install --save-dev @graphprotocol/client-cli
```

-> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> La CLI est installée en tant que dépendance de développement puisque nous l'utilisons pour produire des artefacts d'exécution optimisés qui peuvent être chargés directement à partir de votre application !

-Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example:
+Créez un fichier de configuration (appelé `.graphclientrc.yml`) et pointez vers vos endpoints GraphQL fournis par The Graph, par exemple :

```yml
# .graphclientrc.yml
sources:
@@ -59,15 +59,15 @@ sources:
      endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
```

-Now, create a runtime artifact by running The Graph Client CLI:
+Maintenant, créez un artefact d'exécution en exécutant The Graph Client CLI :

```sh
graphclient build
```

-> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`.
+> Note : vous devez exécuter ceci avec le préfixe `yarn`, ou ajouter ce script dans votre `package.json`.
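Par exemple, un script possible dans votre `package.json` (le nom du script est donné à titre d'illustration, à adapter à votre projet) :

```json
{
  "scripts": {
    "graphclient:build": "graphclient build"
  }
}
```

Vous pouvez ensuite lancer `yarn graphclient:build` (ou `npm run graphclient:build`) sans préfixe supplémentaire.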
-This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following: +Cela devrait produire une fonction autonome `execute` prête à l'emploi, que vous pouvez utiliser pour exécuter les opérations GraphQL de votre application, vous devriez obtenir une sortie similaire à la suivante : ```sh GraphClient: Cleaning existing artifacts @@ -80,7 +80,7 @@ GraphClient: Reading the configuration 🕸️: Done! => .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +Maintenant, l'artefact `.graphclient` est généré pour vous, et vous pouvez l'importer directement depuis votre code, et lancer vos requêtes : ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### Utiliser Vanilla JavaScript au lieu de TypeScript -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +GraphClient CLI génère par défaut les artefacts du client sous forme de fichiers TypeScript, mais vous pouvez configurer la CLI pour générer des fichiers JavaScript et JSON ainsi que des fichiers de définition TypeScript supplémentaires en utilisant `--fileType js` ou `--fileType json`. -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. 
+L'option `js` génère tous les fichiers en tant que fichiers JavaScript avec la syntaxe ESM, et l'option `json` génère les artefacts source en tant que fichiers JSON, tandis que le fichier JavaScript du point d'entrée utilise l'ancienne syntaxe CommonJS, parce que seul CommonJS prend en charge les fichiers JSON en tant que modules.

-Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag.
+À moins que vous n'utilisiez spécifiquement CommonJS (`require`), nous vous recommandons d'utiliser l'option `js`.

`graphclient --fileType js`

-- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs)
-- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm)
+- [Un exemple d'utilisation de JavaScript dans la syntaxe CommonJS avec des fichiers JSON](../examples/javascript-cjs)
+- [Un exemple d'utilisation de JavaScript dans la syntaxe ESM](../examples/javascript-esm)

-#### The Graph Client DevTools
+#### Les DevTools de The Graph Client

-The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time.
+La CLI de The Graph Client est dotée d'une interface GraphiQL intégrée, ce qui vous permet d'expérimenter des requêtes en temps réel.

-The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied.
+Le schéma GraphQL servi dans cet environnement est le schéma final basé sur tous les subgraphs composés et les transformations que vous avez appliquées.

-To start the DevTool GraphiQL, run the following command:
+Pour lancer le DevTool GraphiQL, exécutez la commande suivante :

```sh
graphclient serve-dev
```

-And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
+Et ouvrez http://localhost:4000/ pour utiliser GraphiQL. Vous pouvez maintenant expérimenter votre schéma GraphQL côté client localement !
🥳 -#### Examples +#### Exemples -You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: +Vous pouvez également vous référer aux [répertoires examples dans ce repo](../examples), pour des exemples plus avancés et des exemples d'intégration : -- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute) +- [Exemple TypeScript & React avec un `execute` brut et GraphQL-Codegen intégré](../examples/execute) - [TS/JS NodeJS standalone mode](../examples/node) -- [Client-Side GraphQL Composition](../examples/composition) -- [Integration with Urql and React](../examples/urql) -- [Integration with NextJS and TypeScript](../examples/nextjs) -- [Integration with Apollo-Client and React](../examples/apollo) -- [Integration with React-Query](../examples/react-query) -- _Cross-chain merging (same Subgraph, different chains)_ -- - [Parallel SDK calls](../examples/cross-chain-sdk) -- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension) -- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms) +- [Composition GraphQL côté client](../examples/composition) +- [Intégration avec Urql et React](../examples/urql) +- [Intégration avec NextJS et TypeScript](../examples/nextjs) +- [Intégration avec Apollo-Client et React](../examples/apollo) +- [Intégration avec React-Query](../examples/react-query) +- Fusion interchain (même subgraph, blockchains différentes) +- - [Appels SDK parallèles](../examples/cross-chain-sdk) +- - [Appels internes parallèles avec les extensions de schéma](../examples/cross-chain-extension) +- [Personnaliser l'exécution avec Transforms (auto-pagination et auto-block-tracking)](../examples/transforms) -### Advanced Examples/Features +### Exemples/fonctionnalités avancés -#### Customize Network Calls +#### Personnaliser les appels réseau -You can customize the network execution (for 
example, to add authentication headers) by using `operationHeaders`: +Vous pouvez personnaliser l'exécution du réseau (par exemple, pour ajouter des en-têtes d'authentification) en utilisant `operationHeaders` : ```yaml sources: @@ -170,7 +170,7 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: +Vous pouvez également utiliser des variables d'exécution si vous le souhaitez, et les spécifier de manière déclarative : ```yaml sources: @@ -182,7 +182,7 @@ sources: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +Vous pouvez ensuite le spécifier lorsque vous exécutez des opérations : ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Vous pouvez trouver la [documentation complète du gestionnaire `graphql` ici](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Environment Variables Interpolation +#### Interpolation des Variables d'environnement -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +Si vous souhaitez utiliser des variables d'environnement dans votre fichier de configuration Graph Client, vous pouvez utiliser l'interpolation avec l'assistant `env` : ```yaml sources: @@ -208,9 +208,9 @@ sources: Authorization: Bearer {env.MY_API_TOKEN} # runtime ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +Ensuite, assurez-vous que `MY_API_TOKEN` est défini lorsque vous lancez `process.env` au moment de l'exécution. 
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly:
+Vous pouvez également spécifier des variables d'environnement à remplir au moment de la construction (pendant l'exécution de `graphclient build`) en utilisant directement le nom de la variable d'environnement :

```yaml
sources:
@@ -219,21 +219,21 @@ sources:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
        operationHeaders:
-          Authorization: Bearer ${MY_API_TOKEN} # build time
+          Authorization: Bearer ${MY_API_TOKEN} # temps de construction
```

-> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).
+> Vous pouvez trouver la [documentation complète du gestionnaire `graphql` ici](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference).

-#### Fetch Strategies and Multiple Graph Indexers
+#### Stratégies de `fetch` et Indexeurs multiples de The Graph

-It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple.
+C'est une pratique courante d'utiliser plus d'un Indexeur dans les dApps ; ainsi, pour obtenir l'expérience idéale avec The Graph, vous pouvez spécifier plusieurs stratégies `fetch` afin de rendre les choses plus fluides et plus simples.

-All `fetch` strategies can be combined to create the ultimate execution flow.
+Toutes les stratégies `fetch` peuvent être combinées pour créer le flux d'exécution ultime.
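À titre d'illustration, voici une esquisse (hypothétique) combinant deux des stratégies présentées ci-dessous, `timeout` et `retry`, sur une même source :

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
        # abandonner chaque tentative après 5 secondes...
        timeout: 5000
        # ...et réessayer jusqu'à 2 fois en cas d'échec
        retry: 2
```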
`retry`

-The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source.
+Le mécanisme `retry` (réessai) vous permet de spécifier le nombre de tentatives pour un seul endpoint/source GraphQL.

-The retry flow will execute in both conditions: a network error, or due to a runtime error (indexing issue/unavailability of the indexer).
+Le flux de réessai s'exécute dans les deux cas : en cas d'erreur réseau, ou en cas d'erreur d'exécution (problème d'indexation/indisponibilité de l'Indexeur).

```yaml
sources:
@@ -243,7 +243,7 @@ sources:
    handler:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
-        retry: 2 # specify here, if you have an unstable/error prone indexer
+        retry: 2 # spécifier ici, si vous avez un Indexeur instable ou sujet à des erreurs
```
@@ -251,7 +251,7 @@ sources:
`timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +Le mécanisme `timeout` vous permet de spécifier le `timeout` pour un endpoint GraphQL donné. ```yaml sources: @@ -259,7 +259,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - timeout: 5000 # 5 seconds + timeout: 5000 # 5 secondes ```
@@ -267,9 +267,9 @@ sources:
`fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +Le mécanisme `fallback` vous permet de spécifier l'utilisation de plus d'un endpoint GraphQL, pour la même source. -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +Ceci est utile si vous voulez utiliser plus d'un Indexeur pour le même subgraph, et vous replier en cas d'erreur ou de dépassement de délai. Vous pouvez également utiliser cette stratégie pour utiliser un Indexeur personnalisé, mais lui permettre de se replier sur [Le Service Hébergé de The Graph](https://thegraph.com/hosted-service). ```yaml sources: @@ -289,9 +289,9 @@ sources:
`race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +Le mécanisme `race` permet d'utiliser plusieurs endpoints GraphQL simultanément pour une même source et de prendre la réponse la plus rapide. -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +Cette option est utile si vous souhaitez utiliser plus d'un Indexeur pour le même subgraph, et permettre aux deux sources de faire la course et d'obtenir la réponse la plus rapide de tous les Indexeurs spécifiés. ```yaml sources: @@ -308,10 +308,10 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. -This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. +Cette stratégie vous permet d'envoyer des demandes parallèles à différents endpoints pour la même source et de choisir la plus récente. + +Cette option est utile si vous souhaitez choisir les données les plus synchronisées pour le même subgraph parmi différents Indexeurs/sources. ```yaml sources: @@ -349,9 +349,9 @@ graph LR;
-#### Block Tracking +#### Suivi des blocs -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +The Graph Client peut suivre les numéros de blocs et effectuer les requêtes suivantes en suivant [ce schéma](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) avec la transformation `blockTracking` ; ```yaml sources: @@ -361,23 +361,23 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - blockTracking: - # You might want to disable schema validation for faster startup + # Vous pouvez désactiver la validation des schémas pour un démarrage plus rapide validateSchema: true - # Ignore the fields that you don't want to be tracked + # Ignorer les champs qui ne doivent pas être suivis ignoreFieldNames: [users, prices] - # Exclude the operation with the following names + # Exclure les opérations avec les noms suivants ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[Vous pouvez essayer un exemple pratique ici](../examples/transforms) -#### Automatic Pagination +#### Pagination automatique -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +Dans la plupart des subgraphs, le nombre d'enregistrements que vous pouvez récupérer est limité. Dans ce cas, vous devez envoyer plusieurs requêtes avec pagination. 
```graphql
query {
-  # Will throw an error if the limit is 1000
+  # Lance une erreur si la limite est de 1000
  users(first: 2000) {
    id
    name
@@ -385,11 +385,11 @@ query {
  }
}
```

-So you have to send the following operations one after the other:
+Vous devez donc envoyer les opérations suivantes l'une après l'autre :

```graphql
query {
-  # Will throw an error if the limit is 1000
+  # Lance une erreur si la limite est de 1000
  users(first: 1000) {
    id
    name
@@ -397,11 +397,11 @@ query {
  }
}
```

-Then after the first response:
+Ensuite, après la première réponse :

```graphql
query {
-  # Will throw an error if the limit is 1000
+  # Lance une erreur si la limite est de 1000
  users(first: 1000, skip: 1000) {
    id
    name
@@ -409,9 +409,9 @@ query {
  }
}
```

-After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood.
+Après la deuxième réponse, vous devez fusionner les résultats manuellement. À la place, The Graph Client vous permet de n'envoyer que la première requête, et il effectue automatiquement ces multiples requêtes pour vous en coulisses.

-All you have to do is:
+Tout ce que vous avez à faire, c'est :

```yaml
sources:
@@ -421,21 +421,21 @@ sources:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
      transforms:
        - autoPagination:
-            # You might want to disable schema validation for faster startup
+            # Vous pouvez désactiver la validation des schémas pour accélérer le démarrage.
            validateSchema: true
```

-[You can try a working example here](../examples/transforms)
+[Vous pouvez essayer un exemple pratique ici](../examples/transforms)

-#### Client-side Composition
+#### Composition côté client

-The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)).
+The Graph Client est doté d'une prise en charge intégrée de la composition GraphQL côté client (assurée par [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)).

-You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers.
+Vous pouvez tirer parti de cette fonctionnalité pour créer une seule couche GraphQL à partir de plusieurs subgraphs, déployés sur plusieurs Indexeurs.

-> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs!
+> 💡 Astuce : Vous pouvez composer n'importe quelle source GraphQL, et pas seulement des subgraphs !

-Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example:
+Une composition triviale peut être réalisée en ajoutant plus d'une source GraphQL à votre fichier `.graphclientrc.yml` ; en voici un exemple :

```yaml
sources:
@@ -449,15 +449,15 @@ sources:
      endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2
```

-As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs:
+Tant qu'il n'y a pas de conflit entre les schémas composés, vous pouvez les composer, puis exécuter une seule requête sur les deux subgraphs :

```graphql
query myQuery {
-  # this one is coming from compound-v2
+  # Celui-ci provient de compound-v2
  markets(first: 7) {
    borrowRate
  }
-  # this one is coming from uniswap-v2
+  # Celui-ci provient de uniswap-v2
  pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") {
    id
    token0 {
@@ -470,33 +470,33 @@ query myQuery {
    }
  }
}
```

-You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase.
+Vous pouvez également résoudre des conflits, renommer des parties du schéma, ajouter des champs GraphQL personnalisés et modifier l'ensemble de la phase d'exécution.
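Par exemple, une approche possible pour éviter les collisions de noms consiste à préfixer les types d'une source avec la transformation `prefix` de GraphQL-Mesh (esquisse indicative ; reportez-vous à la documentation de GraphQL-Mesh pour les options exactes) :

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
    transforms:
      # préfixe les types de cette source avec `Uniswap_`
      - prefix:
          value: Uniswap_
```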
-For advanced use-cases with composition, please refer to the following resources:
+Pour les cas d'utilisation avancés de la composition, veuillez vous référer aux ressources suivantes :

-- [Advanced Composition Example](../examples/composition)
-- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction)
-- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)
+- [Exemple de composition avancée](../examples/composition)
+- [Transformations de schémas GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction)
+- [Documentation GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)

-#### TypeScript Support
+#### Prise en charge de TypeScript

-If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience.
+Si votre projet est écrit en TypeScript, vous pouvez exploiter la puissance de [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) et bénéficier d'une expérience de client GraphQL entièrement typée.

-The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`!
+Le mode autonome de The Graph Client, ainsi que les bibliothèques client GraphQL populaires comme Apollo-Client et urql, ont une prise en charge intégrée de `TypedDocumentNode` !

-The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations.
+La CLI The Graph Client est livrée avec une configuration prête à l'emploi pour [GraphQL Code Generator](https://graphql-code-generator.com), et elle peut générer des `TypedDocumentNode` sur la base de vos opérations GraphQL.
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`:
+Pour commencer, définissez vos opérations GraphQL dans le code de votre application, et pointez vers ces fichiers en utilisant la section `documents` de `.graphclientrc.yml` :

```yaml
sources:
-  - # ... your Subgraphs/GQL sources here
+  - # ... vos sources Subgraphs/GQL ici

documents:
  - ./src/example-query.graphql
```

-You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically:
+Vous pouvez également utiliser des expressions Glob, ou même pointer vers des fichiers de code, et la CLI trouvera automatiquement vos requêtes GraphQL :

```yaml
documents:
@@ -504,37 +504,37 @@ documents:
  - './src/**/*.{ts,tsx,js,jsx}'
```

-Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found.
+Maintenant, lancez à nouveau la commande `build` de la CLI GraphQL : la CLI générera un objet `TypedDocumentNode` sous `.graphclient` pour chaque opération trouvée.

-> Make sure to name your GraphQL operations, otherwise it will be ignored!
+> Veillez à nommer vos opérations GraphQL, sinon elles seront ignorées !

-For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually:
+Par exemple, une requête appelée `query ExampleQuery` aura le `ExampleQueryDocument` correspondant généré dans `.graphclient`.
Vous pouvez maintenant l'importer et l'utiliser pour vos appels GraphQL, et vous aurez une expérience entièrement typée sans écrire ou spécifier manuellement du TypeScript :

```ts
import { ExampleQueryDocument, execute } from '../.graphclient'

async function main() {
-  // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query.
+  // La variable "result" est entièrement typée et représente la structure exacte des champs que vous avez sélectionnés dans votre requête.
  const result = await execute(ExampleQueryDocument, {})
  console.log(result)
}
```

-> You can find a [TypeScript project example here](../examples/urql).
+> Vous pouvez trouver un [exemple de projet TypeScript ici](../examples/urql).

-#### Client-Side Mutations
+#### Mutations côté client

-Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code.
+En raison de la nature de la configuration de Graph-Client, il est possible d'ajouter un schéma côté client, que vous pouvez ensuite relier pour exécuter n'importe quel code arbitraire.

-This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop.
+Cela est utile car vous pouvez implémenter du code personnalisé dans le cadre de votre schéma GraphQL et en faire un schéma d'application unifié qui est plus facile à suivre et à développer.

-> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature.
+> Ce document explique comment ajouter des mutations personnalisées, mais en fait vous pouvez ajouter n'importe quelle opération GraphQL (requête/mutation/abonnements).
Voir l'article [Étendre le schéma unifié](https://graphql-mesh.com/docs/guides/extending-unified-schema) pour plus d'informations sur cette fonctionnalité.

-To get started, define a `additionalTypeDefs` section in your config file:
+Pour commencer, définissez une section `additionalTypeDefs` dans votre fichier de configuration :

```yaml
additionalTypeDefs: |
-  # We should define the missing `Mutation` type
+  # Nous devrions définir le type `Mutation` manquant
  extend schema {
    mutation: Mutation
  }
@@ -548,21 +548,21 @@ additionalTypeDefs: |
  }
```

-Then, add a pointer to a custom GraphQL resolvers file:
+Ensuite, ajoutez un pointeur vers un fichier de résolveurs GraphQL personnalisés :

```yaml
additionalResolvers:
  - './resolvers'
```

-Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation:
+Maintenant, créez `resolver.js` (ou `resolvers.ts`) dans votre projet, et implémentez votre mutation personnalisée :

```js
module.exports = {
  Mutation: {
    async doSomething(root, args, context, info) {
-      // Here, you can run anything you wish.
-      // For example, use `web3` lib, connect a wallet and so on.
+      // Ici, vous pouvez exécuter tout ce que vous voulez.
+      // Par exemple, utiliser la bibliothèque `web3`, connecter un portefeuille et ainsi de suite.

      return true
    },
@@ -570,17 +570,17 @@ module.exports = {
  }
}
```

-If you are using TypeScript, you can also get fully type-safe signature by doing:
+Si vous utilisez TypeScript, vous pouvez également obtenir une signature entièrement type-safe en procédant ainsi :

```ts
import { Resolvers } from './.graphclient'

-// Now it's fully typed!
+// Maintenant, c'est entièrement typé !
const resolvers: Resolvers = {
  Mutation: {
    async doSomething(root, args, context, info) {
-      // Here, you can run anything you wish.
-      // For example, use `web3` lib, connect a wallet and so on.
+      // Ici, vous pouvez exécuter tout ce que vous voulez.
+      // Par exemple, utiliser la bibliothèque `web3`, connecter un portefeuille et ainsi de suite.

      return true
    },
@@ -590,22 +590,22 @@ const resolvers: Resolvers = {
  }
}

export default resolvers
```

-If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet:
+Si vous avez besoin d'injecter des variables d'exécution dans votre `context` d'exécution GraphQL, vous pouvez utiliser l'extrait suivant :

```ts
execute(
  MY_QUERY,
  {},
  {
-    myHelper: {}, // this will be available in your Mutation resolver as `context.myHelper`
+    myHelper: {}, // Ceci sera disponible dans votre résolveur de Mutation sous `context.myHelper`
  },
)
```

-> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema)
+> [Pour en savoir plus sur les extensions de schéma côté client, cliquez ici](https://graphql-mesh.com/docs/guides/extending-unified-schema)

-> [You can also delegate and call Query fields as part of your mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources)
+> [Vous pouvez également déléguer et appeler des champs de requête dans le cadre de votre mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources)

-## License
+## Licence

-Released under the [MIT license](../LICENSE).
+Publié sous la [licence MIT](../LICENSE).
diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md b/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md
index 99098cd77b95..f14ce931aecc 100644
--- a/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md
+++ b/website/src/pages/fr/subgraphs/querying/graph-client/architecture.md
@@ -1,13 +1,13 @@
-# The Graph Client Architecture
+# L'architecture de The Graph Client

-To address the need to support a distributed network, we plan to take several actions to ensure The Graph client provides everything app needs:
+Pour répondre à la nécessité de prendre en charge un réseau distribué, nous prévoyons de prendre plusieurs mesures pour faire en sorte que The Graph Client fournisse tout ce dont l'application a besoin :

-1. Compose multiple Subgraphs (on the client-side)
-2. Fallback to multiple indexers/sources/hosted services
-3. Automatic/Manual source picking strategy
-4. Agnostic core, with the ability to run integrate with any GraphQL client
+1. Composer plusieurs subgraphs (côté client)
+2. Repli sur plusieurs Indexeurs/sources/services hébergés
+3. Stratégie de sélection automatique/manuelle de la source
+4. Un noyau agnostique, avec la possibilité de s'intégrer à n'importe quel client GraphQL

-## Standalone mode
+## Mode autonome

```mermaid
graph LR;
@@ -17,7 +17,7 @@ graph LR;
	op-->sB[Subgraph B];
```

-## With any GraphQL client
+## Avec n'importe quel client GraphQL

```mermaid
graph LR;
@@ -28,11 +28,11 @@ graph LR;
	op-->sB[Subgraph B];
```

-## Subgraph Composition
+## Composition de subgraphs

-To allow simple and efficient client-side composition, we'll use [`graphql-tools`](https://graphql-tools.com) to create a remote schema / Executor, then can be hooked into the GraphQL client.
+Pour permettre une composition simple et efficace côté client, nous allons utiliser [`graphql-tools`](https://graphql-tools.com) pour créer un schéma / Executor distant, qui peut ensuite être accroché au client GraphQL. -API could be either raw `graphql-tools` transformers, or using [GraphQL-Mesh declarative API](https://graphql-mesh.com/docs/transforms/transforms-introduction) for composing the schema. +L'API peut être soit des transformateurs `graphql-tools` bruts, soit l'utilisation de l'[API déclarative GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction) pour composer le schéma. ```mermaid graph LR; @@ -42,9 +42,9 @@ graph LR; m-->s3[Subgraph C GraphQL schema]; ``` -## Subgraph Execution Strategies +## Stratégies d'exécution des subgraphs -Within every Subgraph defined as source, there will be a way to define it's source(s) indexer and the querying strategy, here are a few options: +Dans chaque subgraph défini comme source, il sera possible de définir l'Indexeur de la (des) source(s) et la stratégie d'interrogation, dont voici quelques exemples : ```mermaid graph LR; @@ -85,9 +85,9 @@ graph LR; end ``` -> We can ship a several built-in strategies, along with a simple interfaces to allow developers to write their own. +> Nous pouvons proposer plusieurs stratégies intégrées, ainsi qu'une interface simple permettant aux développeurs d'écrire leurs propres stratégies. 
-To take the concept of strategies to the extreme, we can even build a magical layer that does subscription-as-query, with any hook, and provide a smooth DX for dapps: +Pour pousser le concept de stratégies à l'extrême, nous pouvons même construire une couche magique qui fait de l'abonnement en tant que requête, avec n'importe quel crochet, et fournit un DX fluide pour les dapps : ```mermaid graph LR; @@ -99,5 +99,5 @@ graph LR; sc[Smart Contract]-->|change event|op; ``` -With this mechanism, developers can write and execute GraphQL `subscription`, but under the hood we'll execute a GraphQL `query` to The Graph indexers, and allow to connect any external hook/probe for re-running the operation. -This way, we can watch for changes on the Smart Contract itself, and the GraphQL client will fill the gap on the need to real-time changes from The Graph. +Avec ce mécanisme, les développeurs peuvent écrire et exécuter des `subscriptions` GraphQL, mais sous le capot, nous exécuterons une `requête` GraphQL vers les Indexeurs de The Graph, et nous permettrons de connecter n'importe quel hook/probe externe pour ré-exécuter l'opération. +De cette façon, nous pouvons surveiller les changements sur le Smart Contract lui-même, et le client GraphQL comblera l'écart sur le besoin de changements en temps réel de The Graph. diff --git a/website/src/pages/fr/subgraphs/querying/graph-client/live.md b/website/src/pages/fr/subgraphs/querying/graph-client/live.md index e6f726cb4352..4337c6eb2d0a 100644 --- a/website/src/pages/fr/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/fr/subgraphs/querying/graph-client/live.md @@ -1,10 +1,10 @@ -# `@live` queries in `graph-client` +# Requêtes `@live` dans `graph-client` -Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. +Graph-Client implémente une directive personnalisée `@live` qui permet à chaque requête GraphQL de fonctionner avec des données en temps réel. 
-## Getting Started +## Introduction -Start by adding the following configuration to your `.graphclientrc.yml` file: +Commencez par ajouter la configuration suivante à votre fichier `.graphclientrc.yml` : ```yaml plugins: @@ -14,7 +14,7 @@ plugins: ## Usage -Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: +Définissez l'intervalle de mise à jour par défaut que vous souhaitez utiliser, puis vous pouvez appliquer la `@directive` GraphQL suivante à vos requêtes GraphQL : ```graphql query ExampleQuery @live { @@ -26,7 +26,7 @@ query ExampleQuery @live { } ``` -Or, you can specify a per-query interval: +Vous pouvez également spécifier un intervalle par requête : ```graphql query ExampleQuery @live(interval: 5000) { @@ -36,8 +36,8 @@ query ExampleQuery @live(interval: 5000) { } ``` -## Integrations +## Intégrations Since the entire network layer (along with the `@live` mechanism) is implemented inside `graph-client` core, you can use Live queries with every GraphQL client (such as Urql or Apollo-Client), as long as it supports streame responses (`AsyncIterable`). -No additional setup is required for GraphQL clients cache updates. +Aucune configuration supplémentaire n'est requise pour les mises à jour du cache des clients GraphQL. diff --git a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx index 204fae24a5a5..27a45d409b6a 100644 --- a/website/src/pages/fr/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/fr/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: API GraphQL --- -Learn about the GraphQL Query API used in The Graph. +Découvrez l'API de requête GraphQL utilisée dans The Graph. ## Qu'est-ce que GraphQL ? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+[GraphQL](https://graphql.org/learn/) est un langage d'interrogation pour les API et un environnement d'exécution permettant d'exécuter ces requêtes sur vos données existantes. The Graph utilise GraphQL pour interroger les subgraphs.

-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+Pour comprendre plus largement le rôle que joue GraphQL, consultez [développer](/subgraphs/developing/introduction/) et [créer un subgraph](/developing/creating-a-subgraph/).

-## Queries with GraphQL
+## Requêtes avec GraphQL

-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+Dans votre schéma Subgraph, vous définissez des types appelés `Entities`. Pour chaque type `Entity`, les champs `entity` et `entities` seront générés sur le type `Query` de premier niveau.

-> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
+> Note : `query` n'a pas besoin d'être inclus au début de la requête `graphql` lors de l'utilisation de The Graph.

### Exemples

-Query for a single `Token` entity defined in your schema:
+Requête pour une seule entité `Token` définie dans votre schéma :

```graphql
{
@@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema:
}
```

-> Note: When querying for a single entity, the `id` field is required, and it must be written as a string.
+> Note : Lors de l'interrogation d'une seule entité, le champ `id` est obligatoire et doit être écrit sous forme de chaîne de caractères.

-Query all `Token` entities:
+Interroger toutes les entités `Token` :

```graphql
{
@@ -44,10 +44,10 @@ Query all `Token` entities:

### Tri

-When querying a collection, you may:
+Lors de l'interrogation d'une collection, vous pouvez :

-- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending.
+- Utiliser le paramètre `orderBy` pour trier les données en fonction d'un attribut spécifique.
+- Utiliser `orderDirection` pour spécifier la direction du tri, `asc` pour ascendant ou `desc` pour descendant.

#### Exemple

@@ -62,9 +62,9 @@ When querying a collection, you may:

#### Exemple de tri d'entités imbriquées

-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities.
+Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), les entités peuvent être triées sur la base des entités imbriquées.

-The following example shows tokens sorted by the name of their owner:
+L'exemple suivant montre des jetons triés par le nom de leur propriétaire :

```graphql
{
@@ -79,18 +79,18 @@ The following example shows tokens sorted by the name of their owner:
}
}
```

-> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
+> Actuellement, vous pouvez trier par les types `String` ou `ID` à un niveau de profondeur sur les champs `@entity` et `@derivedFrom`. Malheureusement, le [tri par interfaces sur des entités d'un niveau de profondeur](https://github.com/graphprotocol/graph-node/pull/4058), ainsi que le tri par des champs qui sont des tableaux ou des entités imbriquées, ne sont pas encore pris en charge.

### Pagination

-When querying a collection, it's best to:
+Lors de l'interrogation d'une collection, il est préférable de :

-- Use the `first` parameter to paginate from the beginning of the collection.
-  - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
-- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
-- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above.
+- Utiliser le paramètre `first` pour paginer à partir du début de la collection.
+  - L'ordre de tri par défaut est par `ID` dans l'ordre alphanumérique croissant, **et non** par heure de création.
+- Utiliser le paramètre `skip` pour sauter des entités et paginer. Par exemple, `first:100` affiche les 100 premières entités et `first:100, skip:100` affiche les 100 entités suivantes.
+- Éviter d'utiliser les valeurs `skip` dans les requêtes car elles sont généralement peu performantes. Pour récupérer un grand nombre d'éléments, il est préférable de parcourir les entités en fonction d'un attribut, comme indiqué dans l'exemple précédent.

-#### Example using `first`
+#### Exemple d'utilisation de `first`

Interroger les 10 premiers tokens :

@@ -103,11 +103,11 @@ Interroger les 10 premiers tokens :
}
```

-To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
+Pour rechercher des groupes d'entités au milieu d'une collection, le paramètre `skip` peut être utilisé en conjonction avec le paramètre `first` pour sauter un nombre spécifié d'entités en commençant par le début de la collection.
-#### Example using `first` and `skip`
+#### Exemple utilisant `first` et `skip`

-Query 10 `Token` entities, offset by 10 places from the beginning of the collection:
+Interroger 10 entités `Token`, décalées de 10 positions par rapport au début de la collection :

```graphql
{
@@ -118,9 +118,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect
}
```

-#### Example using `first` and `id_ge`
+#### Exemple utilisant `first` et `id_ge`

-If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:
+Si un client a besoin de récupérer un grand nombre d'entités, il est plus performant de baser les requêtes sur un attribut et de filtrer par cet attribut. Par exemple, un client pourrait récupérer un grand nombre de jetons en utilisant cette requête :

```graphql
query manyTokens($lastID: String) {
@@ -131,16 +131,16 @@ query manyTokens($lastID: String) {
}
```

-The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.
+La première fois, il enverra la requête avec `lastID = ""`, et pour les requêtes suivantes, il fixera `lastID` à l'attribut `id` de la dernière entité de la requête précédente. Cette approche est nettement plus performante que l'utilisation de valeurs `skip` croissantes.

### Filtration

-- You can use the `where` parameter in your queries to filter for different properties.
-- You can filter on multiple values within the `where` parameter.
+- Vous pouvez utiliser le paramètre `where` dans vos requêtes pour filtrer selon différentes propriétés.
+- Vous pouvez filtrer sur plusieurs valeurs dans le paramètre `where`.
-#### Example using `where`
+#### Exemple d'utilisation de `where`

-Query challenges with `failed` outcome:
+Interroger les défis avec un résultat `failed` :

```graphql
{
@@ -154,7 +154,7 @@ Query challenges with `failed` outcome:
}
```

-You can use suffixes like `_gt`, `_lte` for value comparison:
+Vous pouvez utiliser des suffixes comme `_gt`, `_lte` pour comparer les valeurs :

#### Exemple de filtrage de plage

@@ -170,9 +170,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison:

#### Exemple de filtrage par bloc

-You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
+Vous pouvez également filtrer les entités qui ont été mises à jour dans ou après un bloc spécifié avec `_change_block(number_gte: Int)`.

-Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou bien, il peut être utile d'étudier ou de déboguer la façon dont les entités changent dans votre subgraph (si combiné avec un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique).
+Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou encore, elle peut être utile pour étudier ou déboguer la façon dont les entités changent dans votre subgraph (si elle est combinée à un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique).

```graphql
{
@@ -186,7 +186,7 @@ Cela peut être utile si vous cherchez à récupérer uniquement les entités qu

#### Exemple de filtrage d'entités imbriquées

-Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
+Le filtrage sur la base d'entités imbriquées est possible dans les champs avec le suffixe `_`.
Cela peut être utile si vous souhaitez récupérer uniquement les entités dont les entités au niveau enfant remplissent les conditions fournies.

@@ -204,11 +204,11 @@ Cela peut être utile si vous souhaitez récupérer uniquement les entités dont

#### Opérateurs logiques

-As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria.
+Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), vous pouvez regrouper plusieurs paramètres dans le même argument `where` en utilisant les opérateurs `and` ou `or` pour filtrer les résultats en fonction de plusieurs critères.

-##### `AND` Operator
+##### L'opérateur `AND`

-The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`.
+L'exemple suivant filtre les défis avec `outcome` `succeeded` et `number` supérieur ou égal à `100`.

```graphql
{
@@ -222,7 +222,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num
}
```

-> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
+> **Sucre syntaxique :** Vous pouvez simplifier la requête ci-dessus en supprimant l'opérateur `and` et en passant une sous-expression séparée par des virgules.
>
> ```graphql
> {
@@ -236,9 +236,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num
> }
> ```

-##### `OR` Operator
+##### L'opérateur `OR`

-The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+L'exemple suivant filtre les défis avec `outcome` `succeeded` ou `number` supérieur ou égal à `100`.
```graphql
{
@@ -252,7 +252,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb
}
```

-> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries.
+> **Note** : Lors de l'élaboration des requêtes, il est important de prendre en compte l'impact sur les performances de l'utilisation de l'opérateur `or`. Si `or` peut être un outil utile pour élargir les résultats d'une recherche, il peut aussi avoir des coûts importants. L'un des principaux problèmes de l'opérateur `or` est qu'il peut ralentir les requêtes. En effet, `or` oblige la base de données à parcourir plusieurs index, ce qui peut prendre beaucoup de temps. Pour éviter ces problèmes, il est recommandé aux développeurs d'utiliser l'opérateur `and` au lieu de `or` chaque fois que cela est possible. Cela permet un filtrage plus précis et peut conduire à des requêtes plus rapides et plus précises.

#### Tous les filtres

@@ -281,9 +281,9 @@ _not_ends_with
_not_ends_with_nocase
```

-> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types.
+> Veuillez noter que certains suffixes ne sont pris en charge que pour des types spécifiques. Par exemple, `Boolean` ne prend en charge que `_not`, `_in` et `_not_in`, tandis que `_` n'est disponible que pour les types objet et interface.
-In addition, the following global filters are available as part of `where` argument:
+En outre, les filtres globaux suivants sont disponibles dans le cadre de l'argument `where` :

```graphql
_change_block(number_gte: Int)
@@ -291,11 +291,11 @@ _change_block(numéro_gte : Int)

### Interrogation des états précédents

-You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.
+Vous pouvez interroger l'état de vos entités non seulement pour le dernier bloc, ce qui est le cas par défaut, mais aussi pour un bloc arbitraire dans le passé. Le bloc auquel une requête doit se produire peut être spécifié soit par son numéro de bloc, soit par son hash de bloc, en incluant un argument `block` dans les champs de niveau supérieur des requêtes.

-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+Le résultat d'une telle requête ne changera pas au fil du temps, c'est-à-dire qu'une requête portant sur un certain bloc passé renverra le même résultat quel que soit le moment où elle est exécutée, à l'exception d'une requête portant sur un bloc très proche de la tête de la chaîne, dont le résultat pourrait changer s'il s'avérait que ce bloc ne figurait **pas** sur la chaîne principale et que la chaîne était réorganisée. Une fois qu'un bloc peut être considéré comme définitif, le résultat de la requête ne changera pas.
-> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+> Remarque : l'implémentation actuelle est encore sujette à certaines limitations qui pourraient violer ces garanties. L'implémentation ne permet pas toujours de déterminer si un hash de bloc donné ne se trouve pas du tout sur la chaîne principale, ni si le résultat d'une requête par hash de bloc, pour un bloc qui n'est pas encore considéré comme final, peut être influencé par une réorganisation de blocs qui a lieu en même temps que la requête. Ces limitations n'affectent pas les résultats des requêtes par hash de bloc lorsque le bloc est final et que l'on sait qu'il se trouve sur la chaîne principale. [Ce problème](https://github.com/graphprotocol/graph-node/issues/1405) explique ces limitations en détail.

#### Exemple

@@ -311,7 +311,7 @@ The result of such a query will not change over time, i.e., querying at a certai
}
```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
+Cette requête renverra les entités `Challenge` et les entités `Application` qui leur sont associées, telles qu'elles existaient directement après le traitement du bloc numéro 8 000 000.
#### Exemple

@@ -327,26 +327,26 @@ This query will return `Challenge` entities, and their associated `Application`
}
```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
+Cette requête renverra les entités `Challenge`, et leurs entités `Application` associées, telles qu'elles existaient directement après le traitement du bloc avec le hash donné.

### Requêtes de recherche en texte intégral

-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Les champs de requête de recherche en texte intégral fournissent une API de recherche textuelle expressive qui peut être ajoutée au schéma du subgraph et personnalisée. Reportez-vous à [Définir des champs de recherche en texte intégral](/developing/creating-a-subgraph/#defining-fulltext-search-fields) pour ajouter la recherche en texte intégral à votre subgraph.

-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Les requêtes de recherche en texte intégral comportent un champ obligatoire, `text`, pour fournir les termes de la recherche. Plusieurs opérateurs spéciaux de texte intégral peuvent être utilisés dans ce champ de recherche `text`.
Opérateurs de recherche en texte intégral :

-| Symbole | Opérateur | Description |
-| --- | --- | --- |
-| `&` | `And` | Pour combiner plusieurs termes de recherche dans un filtre pour les entités incluant tous les termes fournis |
-| | | `Or` | Les requêtes comportant plusieurs termes de recherche séparés par l'opérateur ou renverront toutes les entités correspondant à l'un des termes fournis |
-| `<->` | `Follow by` | Spécifiez la distance entre deux mots. |
-| `:*` | `Prefix` | Utilisez le terme de recherche de préfixe pour trouver les mots dont le préfixe correspond (2 caractères requis.) |
+| Symbole | Opérateur | Description |
+| ------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `&` | `And` | Pour combiner plusieurs termes de recherche dans un filtre pour les entités incluant tous les termes fournis |
+| &#124; | `Or` | Les requêtes comportant plusieurs termes de recherche séparés par l'opérateur `or` renverront toutes les entités correspondant à l'un des termes fournis |
+| `<->` | `Follow by` | Spécifiez la distance entre deux mots. |
+| `:*` | `Prefix` | Utilisez le terme de recherche de préfixe pour trouver les mots dont le préfixe correspond (2 caractères requis.) |

#### Exemples

-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+En utilisant l'opérateur `or`, cette requête filtrera les entités de blog ayant des variations de "anarchism" ou de "crumpet" dans leurs champs de texte intégral.

```graphql
{
@@ -359,7 +359,7 @@ Using the `or` operator, this query will filter to blog entities with variations
}
```

-The `follow by` operator specifies a words a specific distance apart in the fulltext documents.
The following query will return all blogs with variations of "decentralize" followed by "philosophy"
+L'opérateur `follow by` spécifie des mots séparés par une distance spécifique dans les documents en texte intégral. La requête suivante renverra tous les blogs contenant des variations de "decentralize" suivies de "philosophy"

```graphql
{
@@ -387,25 +387,25 @@ Combinez des opérateurs de texte intégral pour créer des filtres plus complex

### Validation

-Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more.
+Graph Node met en œuvre une validation [basée sur les spécifications](https://spec.graphql.org/October2021/#sec-Validation) des requêtes GraphQL qu'il reçoit à l'aide de [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), qui est basée sur l'implémentation de référence [graphql-js](https://github.com/graphql/graphql-js/tree/main/src/validation). Les requêtes qui échouent à une règle de validation sont accompagnées d'une erreur standard - consultez les [spécifications GraphQL](https://spec.graphql.org/October2021/#sec-Validation) pour en savoir plus.

## Schema

-The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+Le schéma de vos sources de données, c'est-à-dire les types d'entités, les valeurs et les relations qui peuvent être interrogés, est défini dans le [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+Les schémas GraphQL définissent généralement des types racines pour les `queries`, les `subscriptions` et les `mutations`. The Graph ne prend en charge que les `queries`. Le type racine `Query` pour votre subgraph est automatiquement généré à partir du schéma GraphQL qui est inclus dans votre [manifeste de subgraph](/developing/creating-a-subgraph/#components-of-a-subgraph).

-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Remarque : notre API n'expose pas les mutations car les développeurs sont censés émettre des transactions directement sur la blockchain sous-jacente à partir de leurs applications.

### Entities

-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+Tous les types GraphQL avec des directives `@entity` dans votre schéma seront traités comme des entités et doivent avoir un champ `ID`.

-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+> **Note:** Actuellement, tous les types de votre schéma doivent avoir une directive `@entity`.
Dans le futur, nous traiterons les types n'ayant pas la directive `@entity` comme des objets de valeur, mais cela n'est pas encore pris en charge.

### Métadonnées du Subgraph

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+Tous les subgraphs ont un objet `_Meta_` auto-généré, qui permet d'accéder aux métadonnées du subgraph. Cet objet peut être interrogé comme suit :

```graphQL
{
@@ -421,14 +421,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
}
```

-Si un bloc est fourni, les métadonnées sont celles de ce bloc, sinon le dernier bloc indexé est utilisé. S'il est fourni, le bloc doit être postérieur au bloc de départ du subgraph et inférieur ou égal au bloc indexé le plus récent.
+Si un bloc est fourni, les métadonnées sont celles de ce bloc, sinon le dernier bloc indexé est utilisé. S'il est fourni, le bloc doit être postérieur au bloc de départ du subgraph et inférieur ou égal au dernier bloc indexé.

-`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
+`deployment` est un ID unique, correspondant au CID IPFS du fichier `subgraph.yaml`.
-`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):
+`block` fournit des informations sur le dernier bloc (en tenant compte des contraintes de bloc passées à `_meta`) :

- hash : le hash du bloc
- number: the block number
-- timestamp : l'horodatage du bloc, si disponible (ceci n'est actuellement disponible que pour les subgraphs indexant les réseaux EVM)
+- timestamp : l'horodatage du bloc, s'il est disponible (pour l'instant, cette information n'est disponible que pour les subgraphs indexant les réseaux EVM)

-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` est un booléen indiquant si le subgraph a rencontré des erreurs d'indexation sur un bloc passé

diff --git a/website/src/pages/fr/subgraphs/querying/introduction.mdx b/website/src/pages/fr/subgraphs/querying/introduction.mdx
index 38a2f3d528d7..75088fa635a9 100644
--- a/website/src/pages/fr/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/fr/subgraphs/querying/introduction.mdx
@@ -3,30 +3,30 @@ title: Interroger The Graph
sidebarTitle: Présentation
---

-To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer).
+Pour commencer à interroger immédiatement, visitez [The Graph Explorer](https://thegraph.com/explorer).

## Aperçu

-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+Lorsqu'un subgraph est publié sur The Graph Network, vous pouvez visiter sa page de détails sur Graph Explorer et utiliser l'onglet "Query" pour explorer l'API GraphQL déployée pour chaque subgraph.

## Spécificités⁠

-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries.
You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Chaque subgraph publié dans The Graph Network possède une URL de requête unique dans Graph Explorer, qui permet d'effectuer des requêtes directes. Vous pouvez la trouver en naviguant vers la page de détails du subgraph et en cliquant sur le bouton "Query" dans le coin supérieur droit.

-![Query Subgraph Button](/img/query-button-screenshot.png)
+![Bouton d'interrogation de subgraphs](/img/query-button-screenshot.png)

-![Query Subgraph URL](/img/query-url-screenshot.png)
+![URL d'interrogation de subgraphs](/img/query-url-screenshot.png)

-You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/).
+Vous remarquerez que cette URL de requête doit utiliser une clé API unique. Vous pouvez créer et gérer vos clés API dans [Subgraph Studio](https://thegraph.com/studio), dans la section "API Keys". Pour en savoir plus sur l'utilisation de Subgraph Studio, cliquez [ici](/deploying/subgraph-studio/).

-Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
+Les utilisateurs de Subgraph Studio commencent avec un plan gratuit, qui leur permet d'effectuer 100 000 requêtes par mois. Des requêtes supplémentaires sont disponibles sur le plan de croissance, qui offre une tarification basée sur l'utilisation pour les requêtes supplémentaires, payable par carte de crédit ou en GRT sur Arbitrum. Vous pouvez en savoir plus sur la facturation [ici](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Veuillez consulter l'[API de requête](/subgraphs/querying/graphql-api/) pour une référence complète sur la manière d'interroger les entités du Subgraph.
>
-> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
+> Remarque : si vous rencontrez des erreurs 405 lors d'une requête GET vers l'URL de Graph Explorer, veuillez passer à une requête POST.

### Ressources supplémentaires

-- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/).
-- To query from an application, click [here](/subgraphs/querying/from-an-application/).
-- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
+- Utilisez les [meilleures pratiques d'interrogation GraphQL](/subgraphs/querying/best-practices/).
+- Pour effectuer une requête à partir d'une application, cliquez [ici](/subgraphs/querying/from-an-application/).
+- Consultez des [exemples de requêtes](https://github.com/graphprotocol/query-examples/tree/main).

diff --git a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
index d44a65306dc1..644b58ccf482 100644
--- a/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/fr/subgraphs/querying/managing-api-keys.mdx
@@ -1,34 +1,34 @@
---
-title: Gérer vos clés API
+title: Gestion des clés API
---

## Aperçu

-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+Les clés API sont nécessaires pour interroger les subgraphs.
Elles garantissent que les connexions entre les services d'application sont valides et autorisées, y compris l'authentification de l'utilisateur final et de l'appareil utilisant l'application. -### Create and Manage API Keys +### Créer et gérer des clés API -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Allez sur [Subgraph Studio](https://thegraph.com/studio/) et cliquez sur l'onglet **API Keys** pour créer et gérer vos clés API pour des subgraphs spécifiques. -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +Le tableau "Clés API" répertorie les clés API existantes et vous permet de les gérer ou de les supprimer. Pour chaque clé, vous pouvez voir son statut, le coût pour la période en cours, la limite de dépenses pour la période en cours et le nombre total de requêtes. -You can click the "three dots" menu to the right of a given API key to: +Vous pouvez cliquer sur le "menu à trois points" à droite d'une clé API donnée pour : -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- Renommer la clé API +- Régénérer la clé API +- Supprimer la clé API +- Gérer la limite de dépenses : il s'agit d'une limite de dépenses mensuelle facultative pour une clé API donnée, en USD. Cette limite s'applique à chaque période de facturation (mois civil). -### API Key Details +### Détails de la clé API -You can click on an individual API key to view the Details page: +Vous pouvez cliquer sur une clé API individuelle pour afficher la page des détails : -1. Under the **Overview** section, you can: +1. 
Dans la section **Aperçu**, vous pouvez : - Modifiez le nom de votre clé - Régénérer les clés API - Affichez l'utilisation actuelle de la clé API avec les statistiques : - Nombre de requêtes - Montant de GRT dépensé -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2. Dans la section **Sécurité**, vous pouvez choisir des paramètres de sécurité en fonction du niveau de contrôle que vous souhaitez avoir. Plus précisément, vous pouvez : - Visualisez et gérez les noms de domaine autorisés à utiliser votre clé API - - Attribuez des subgraphs qui peuvent être interrogés avec votre clé API + - Attribuer des subgraphs qui peuvent être interrogés avec votre clé API diff --git a/website/src/pages/fr/subgraphs/querying/python.mdx b/website/src/pages/fr/subgraphs/querying/python.mdx index f8d2b0741c18..3e172e324351 100644 --- a/website/src/pages/fr/subgraphs/querying/python.mdx +++ b/website/src/pages/fr/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Interroger The Graph avec Python et Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds est une librairie Python utilisée pour les requêtes Subgraph. Cette librairie a été conçue par [Playgrounds](https://playgrounds.network/). Subgrounds permet de connecter directement les données d'un Subgraph à un environnement de données Python, permettant l'utilisation de librairies comme [pandas](https://pandas.pydata.org/) afin de faire de l'analyse de données! +Subgrounds est une bibliothèque Python intuitive pour l'interrogation des subgraphs, créée par [Playgrounds](https://playgrounds.network/). Elle vous permet de connecter directement les données des subgraphs à un environnement de données Python, ce qui vous permet d'utiliser des bibliothèques comme [pandas](https://pandas.pydata.org/) pour effectuer des analyses de données ! Subgrounds propose une API Python simplifiée afin de construire des requêtes GraphQL. 
Subgrounds automatise les workflows fastidieux comme la pagination, et donne aux utilisateurs avancés plus de pouvoir grâce à des transformations de schéma contrôlées.

@@ -17,24 +17,24 @@ pip install --upgrade subgrounds
python -m pip install --upgrade subgrounds
```
-Une fois installé, vous pouvez tester Subgrounds avec la requête suivante. La requête ci-dessous récupère un Subgraph pour le protocole Aave v2 et interroge les 5 principaux marchés par TVL (Total Value Locked - Valeur Totale Verouillée), sélectionne leur nom et leur TVL (en USD) et renvoie les données sous forme de DataFrame Panda [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Une fois installé, vous pouvez tester Subgrounds avec la requête suivante. L'exemple suivant récupère un subgraph pour le protocole Aave v2 et interroge les 5 premiers marchés classés par TVL (Total Value Locked), sélectionne leur nom et leur TVL (en USD) et renvoie les données sous la forme d'un [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) pandas.

```python
from subgrounds import Subgrounds

sg = Subgrounds()

-# Load the subgraph
+# Charge le Subgraph
aave_v2 = sg.load_subgraph(
    "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")

-# Construct the query
+# Construit la requête
latest_markets = aave_v2.Query.markets(
    orderBy=aave_v2.Market.totalValueLockedUSD,
    orderDirection='desc',
    first=5,
)
-# Return query to a dataframe
+# Renvoie la requête dans un dataframe
sg.query_df([
    latest_markets.name,
    latest_markets.totalValueLockedUSD,
@@ -54,4 +54,4 @@ Subgrounds est développé et maintenu par l'équipe de [Playgrounds](https://pl

- [Requêtes concurrentes](https://docs.playgrounds.network/subgrounds/getting_started/async/) - Améliorez vos requêtes en les parallélisant.
- [Export de données en CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/) - - A quick article on how to seamlessly save your data as CSVs for further analysis. + - Un article rapide sur la manière d'enregistrer de manière transparente vos données au format CSV en vue d'une analyse ultérieure. diff --git a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 91eb7ec02307..acd40aface24 100644 --- a/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/fr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Identifiant du Subgraph VS. Identifiant de déploiement --- -Un Subgraph est identifié par un identifiant Subgraph (Subpgraph ID), et chaque version de ce subgraph est identifiée par un identifiant de déploiement (Deployment ID). +Un subgraph est identifié par un ID de subgraph, et chaque version du subgraph est identifiée par un ID de déploiement. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +Lors de l'interrogation d'un subgraph, l'un ou l'autre ID peut être utilisé, bien qu'il soit généralement suggéré d'utiliser l'ID de déploiement en raison de sa capacité à spécifier une version spécifique d'un subgraph. -Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) +Voici les principales différences entre les deux ID : ![](/img/subgraph-id-vs-deployment-id.png) ## Identifiant de déploiement -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +L'ID de déploiement est le hash IPFS du fichier manifeste compilé, qui fait référence à d'autres fichiers sur IPFS au lieu d'URL relatives sur l'ordinateur. Par exemple, le manifeste compilé est accessible via : `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Pour modifier l'ID de déploiement, il suffit de mettre à jour le fichier de manifeste, en modifiant par exemple le champ de description comme décrit dans la [documentation du manifeste du subgraph](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +Lorsque des requêtes sont effectuées à l'aide de l'ID de déploiement d'un subgraph, nous spécifions une version de ce subgraph à interroger. L'utilisation de l'ID de déploiement pour interroger une version spécifique du subgraph donne lieu à une configuration plus sophistiquée et plus robuste, car il y a un contrôle total sur la version du subgraph interrogée. Toutefois, cela implique la nécessité de mettre à jour manuellement le code d'interrogation chaque fois qu'une nouvelle version du subgraph est publiée. 
Exemple d'endpoint utilisant l'identifiant de déploiement: @@ -20,8 +20,8 @@ Exemple d'endpoint utilisant l'identifiant de déploiement: ## Identifiant du Subgraph -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +L'ID du subgraph est un ID unique pour un subgraph. Il reste constant dans toutes les versions d'un subgraph. Il est recommandé d'utiliser l'ID du subgraph pour demander la dernière version d'un subgraph, bien qu'il y ait quelques mises en garde. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Sachez que l'interrogation à l'aide de l'ID du Subgraph peut entraîner la réponse à des requêtes par une version plus ancienne du Subgraph, la nouvelle version ayant besoin d'un certain temps pour se synchroniser. De plus, les nouvelles versions peuvent introduire des changements de schéma radicaux. -Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` +Exemple d'endpoint utilisant l'ID du subgraph : `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/fr/subgraphs/quick-start.mdx b/website/src/pages/fr/subgraphs/quick-start.mdx index 7f5b41aa8eaf..c227ec40ccc7 100644 --- a/website/src/pages/fr/subgraphs/quick-start.mdx +++ b/website/src/pages/fr/subgraphs/quick-start.mdx @@ -2,24 +2,24 @@ title: Démarrage rapide --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
+Apprenez à construire, publier et interroger facilement un [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) sur The Graph.

-## Prerequisites
+## Prérequis

- Un portefeuille crypto
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- Une adresse de contrat intelligent sur un [réseau pris en charge](/supported-networks/)
+- [Node.js](https://nodejs.org/) installé
+- Un gestionnaire de package de votre choix (`npm`, `yarn` ou `pnpm`)

-## How to Build a Subgraph
+## Comment construire un subgraph

-### 1. Create a subgraph in Subgraph Studio
+### 1. Créer un subgraph dans Subgraph Studio

Accédez à [Subgraph Studio](https://thegraph.com/studio/) et connectez votre portefeuille.

-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio vous permet de créer, de gérer, de déployer et de publier des subgraphs, ainsi que de créer et de gérer des clés API.

-Cliquez sur « Créer un subgraph ». Il est recommandé de nommer le subgraph en majuscule : « Nom du subgraph Nom de la chaîne ».
+Cliquez sur « Créer un Subgraph ». Il est recommandé de nommer le Subgraph avec des majuscules initiales : « Nom du Subgraph Nom de la Chaîne ».
+La commande `graph init` créera automatiquement un échafaudage d'un subgraph basé sur les événements de votre contrat. -The following command initializes your subgraph from an existing contract: +La commande suivante initialise votre subgraph à partir d'un contrat existant : ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. +Si votre contrat est vérifié sur le scanner de blocs où il est déployé (comme [Etherscan](https://etherscan.io/)), l'ABI sera automatiquement créé dans le CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +Lorsque vous initialisez votre subgraph, la CLI vous demande les informations suivantes : -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. +- **Protocole** : Choisissez le protocole à partir duquel votre subgraph indexera les données. 
+- **Subgraph slug** : Créez un nom pour votre subgraph. Votre nom de subgraph est un identifiant pour votre subgraph.
+- **Répertoire** : Choisissez un répertoire dans lequel créer votre Subgraph.
+- **Réseau Ethereum** (optionnel) : Vous pouvez avoir besoin de spécifier le réseau compatible EVM à partir duquel votre subgraph indexera les données.
+- **Adresse du contrat** : Localisez l'adresse du contrat intelligent dont vous souhaitez interroger les données.
+- **ABI** : Si l'ABI n'est pas renseigné automatiquement, vous devrez le saisir manuellement sous la forme d'un fichier JSON.
+- **Bloc de départ** : Vous devez saisir le bloc de départ pour optimiser l'indexation des données de la blockchain par le Subgraph. Localisez le bloc de départ en trouvant le bloc où votre contrat a été déployé.
+- **Nom du contrat** : Saisissez le nom de votre contrat.
+- **Indexer les événements du contrat comme des entités** : Il est conseillé de mettre cette option à true, car elle ajoutera automatiquement des mappages à votre subgraph pour chaque événement émis.
+- **Ajouter un autre contrat** (facultatif) : Vous pouvez ajouter un autre contrat.

-La capture d'écran suivante donne un exemple de ce qui vous attend lors de l'initialisation de votre subgraph :
+La capture d'écran suivante donne un exemple de ce à quoi on peut s'attendre lors de l'initialisation du subgraph :

-![Subgraph command](/img/CLI-Example.png)
+![Commande de subgraph](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Modifiez votre subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+La commande `init` de l'étape précédente crée un Subgraph d'échafaudage que vous pouvez utiliser comme point de départ pour construire votre Subgraph.
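Pour donner une idée concrète de cet échafaudage, voici une esquisse de manifeste minimal tel que `graph init` peut le générer — le nom du contrat, l'adresse, le bloc de départ et l'événement ci-dessous sont des valeurs hypothétiques ; dans un projet réel, ces champs sont dérivés de l'ABI de votre contrat et de vos réponses à la CLI :

```yaml
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: MonContrat # nom hypothétique du contrat
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000" # adresse d'exemple
      abi: MonContrat
      startBlock: 1234567 # bloc où le contrat a été déployé
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: MonContrat
          file: ./abis/MonContrat.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```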
-When making changes to the subgraph, you will mainly work with three files:
+Lorsque vous modifiez le Subgraph, vous travaillez principalement avec trois fichiers :

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - définit les sources de données que votre Subgraph indexera.
+- Schema (`schema.graphql`) - définit les données que vous souhaitez extraire du Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+Pour une description détaillée de la manière d'écrire votre Subgraph, consultez [Créer un Subgraph](/developing/creating-a-subgraph/).

-### 5. Déployer votre subgraph
+### 5. Déployez votre Subgraph

-> Remember, deploying is not the same as publishing.
+> N'oubliez pas que le déploiement n'est pas la même chose que la publication.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+Lorsque vous **déployez** un Subgraph, vous l'envoyez au [Subgraph Studio](https://thegraph.com/studio/), où vous pouvez le tester, le préparer (staging) et le réviser.
L'indexation d'un Subgraph déployé est effectuée par l'[Indexeur de mise à niveau](https://thegraph.com/blog/upgrade-indexer/), qui est un indexeur unique détenu et exploité par Edge & Node, plutôt que par les nombreux Indexeurs décentralisés de The Graph Network. Un Subgraph **déployé** est gratuit, soumis à une limitation de débit, non visible par le public et destiné à être utilisé à des fins de développement, de préparation (staging) et de test.

-Une fois votre subgraph écrit, exécutez les commandes suivantes :
+Une fois que votre Subgraph est écrit, exécutez les commandes suivantes :

````
```sh
@@ -94,7 +94,7 @@ graph codegen && graph build
```
````

-Authentifiez-vous et déployez votre subgraph. La clé de déploiement se trouve sur la page du subgraph dans Subgraph Studio.
+Authentifiez-vous et déployez votre Subgraph. La clé de déploiement se trouve sur la page du Subgraph dans Subgraph Studio.
Les logs d'un subgraph opérationnel ressembleront à ceci : +- Analysez votre subgraph dans le tableau de bord pour vérifier les informations. +- Vérifiez les logs sur le tableau de bord pour voir s'il y a des erreurs avec votre subgraph. Les logs d'un subgraph opérationnel ressemblent à ceci : ![Logs du subgraph](/img/subgraph-logs-image.png) -### 7. Publier votre subgraph sur The Graph Network⁠ +### 7. Publier votre subgraph sur The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +Lorsque votre subgraph est prêt pour un environnement de production, vous pouvez le publier sur le réseau décentralisé. La publication est une action onchain qui effectue les opérations suivantes : -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- Il rend votre subgraph disponible pour être indexé par les [Indexeurs](/indexing/overview/) décentralisés sur The Graph Network. +- Il supprime les limites de taux et rend votre subgraph publiquement consultable et interrogeable dans [Graph Explorer](https://thegraph.com/explorer/). +- Il met votre subgraph à la disposition des [Curateurs](/resources/roles/curating/) pour qu'ils le curent. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. 
+> Plus la quantité de GRT que vous et d'autres personnes curez dans votre subgraph est importante, plus les Indexeurs seront incités à indexer votre subgraph, ce qui améliorera la qualité du service, réduira la latence et renforcera la redondance du réseau pour votre subgraph.

#### Publier avec Subgraph Studio⁠

-Pour publier votre subgraph, cliquez sur le bouton "Publish" dans le tableau de bord.
+Pour publier votre subgraph, cliquez sur le bouton Publier dans le tableau de bord.

-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png)
+![Publier un subgraph sur Subgraph Studio](/img/publish-sub-transfer.png)

-Sélectionnez le réseau sur lequel vous souhaitez publier votre subgraph.
+Sélectionnez le réseau sur lequel vous souhaitez publier votre subgraph.

#### Publication à partir de la CLI

-À partir de la version 0.73.0, vous pouvez également publier votre subgraph avec Graph CLI.
+Depuis la version 0.73.0, vous pouvez également publier votre subgraph à l'aide de Graph CLI.

Ouvrez le `graph-cli`.

@@ -157,32 +157,32 @@ graph publish
```
````

-3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre subgraph finalisé sur le réseau de votre choix.
+3. Une fenêtre s'ouvrira, vous permettant de connecter votre portefeuille, d'ajouter des métadonnées et de déployer votre Subgraph finalisé sur le réseau de votre choix.

![cli-ui](/img/cli-ui.png)

-To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+Pour personnaliser votre déploiement, voir [Publier un subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).

-#### Ajout de signal à votre subgraph
+#### Ajouter du signal à votre Subgraph

-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. Pour inciter les Indexeurs à interroger votre subgraph, vous devez y ajouter un signal de curation GRT.
- - Cette action améliore la qualité du service, réduit la latence et renforce la redondance et la disponibilité du réseau pour votre subgraph.
+ - Cette action améliore la qualité de service, réduit la latence et renforce la redondance et la disponibilité du réseau pour votre Subgraph.

2. Si éligibles aux récompenses d'indexation, les Indexeurs reçoivent des récompenses en GRT proportionnelles au montant signalé.

- - Il est recommandé de curer au moins 3 000 GRT pour attirer 3 Indexeurs. Vérifiez l'éligibilité aux récompenses en fonction de l'utilisation des fonctionnalités du subgraph et des réseaux supportés.
+ - Il est recommandé de curer au moins 3 000 GRT pour attirer 3 Indexeurs. Vérifiez l'éligibilité aux récompenses en fonction de l'utilisation des fonctionnalités du subgraph et des réseaux pris en charge.

-To learn more about curation, read [Curating](/resources/roles/curating/).
+Pour en savoir plus sur la curation, lisez [Curating](/resources/roles/curating/).

-Pour économiser sur les frais de gas, vous pouvez curer votre subgraph dans la même transaction que celle où vous le publiez en sélectionnant cette option :
+Pour économiser des frais de gas, vous pouvez curer votre subgraph dans la même transaction que celle où vous le publiez, en sélectionnant cette option :

-![Subgraph publish](/img/studio-publish-modal.png)
+![Publication de subgraph](/img/studio-publish-modal.png)

### 8. Interroger votre subgraph

-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+Vous avez maintenant accès à 100 000 requêtes gratuites par mois avec votre subgraph sur The Graph Network !

-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+Vous pouvez interroger votre subgraph en envoyant des requêtes GraphQL à son URL de requête, que vous trouverez en cliquant sur le bouton Requête.
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+Pour plus d'informations sur l'interrogation des données de votre subgraph, lisez [Interroger The Graph](/subgraphs/querying/introduction/).
diff --git a/website/src/pages/fr/substreams/_meta-titles.json b/website/src/pages/fr/substreams/_meta-titles.json
index 6262ad528c3a..bd6a51423076 100644
--- a/website/src/pages/fr/substreams/_meta-titles.json
+++ b/website/src/pages/fr/substreams/_meta-titles.json
@@ -1,3 +1,3 @@
{
-  "developing": "Developing"
+  "developing": "Développement"
}
diff --git a/website/src/pages/fr/substreams/developing/_meta-titles.json b/website/src/pages/fr/substreams/developing/_meta-titles.json
index 882ee9fc7c9c..05826edb5e9f 100644
--- a/website/src/pages/fr/substreams/developing/_meta-titles.json
+++ b/website/src/pages/fr/substreams/developing/_meta-titles.json
@@ -1,4 +1,4 @@
{
"solana": "Solana",
-  "sinks": "Sink your Substreams"
+  "sinks": "Faites un Sink de vos Substreams"
}
diff --git a/website/src/pages/fr/substreams/developing/dev-container.mdx b/website/src/pages/fr/substreams/developing/dev-container.mdx
index bd4acf16eec7..3e7814c857df 100644
--- a/website/src/pages/fr/substreams/developing/dev-container.mdx
+++ b/website/src/pages/fr/substreams/developing/dev-container.mdx
@@ -1,48 +1,48 @@
---
-title: Substreams Dev Container
-sidebarTitle: Dev Container
+title: Dev Container Substreams
+sidebarTitle: Le Dev Container
---

-Develop your first project with Substreams Dev Container.
+Développez votre premier projet avec le Dev Container Substreams.

-## What is a Dev Container?
+## Qu'est-ce qu'un Dev Container ?

-It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
+C'est un outil qui vous aide à construire votre premier projet.
Vous pouvez l'utiliser à distance via les codespaces GitHub ou localement en clonant le [dépôt de démarrage substreams](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).

-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Dans le Dev Container, la commande `substreams init` met en place un projet Substreams généré automatiquement, ce qui vous permet de construire facilement un subgraph ou une solution basée sur SQL pour le traitement des données.

-## Prerequisites
+## Prérequis

-- Ensure Docker and VS Code are up-to-date.
+- Assurez-vous que Docker et VS Code sont à jour.

-## Navigating the Dev Container
+## Naviguer dans le Dev Container

-In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files.
+Dans le Dev Container, vous pouvez soit construire ou importer votre propre `substreams.yaml` et y associer des modules dans le chemin minimal, soit opter pour les chemins Substreams générés automatiquement. Ensuite, lorsque vous exécutez la commande `Substreams Build`, elle génère les fichiers Protobuf.

### Options

-- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users.
+- **Minimal** : Vous fait démarrer avec le `.proto` du bloc brut et nécessite du développement. Ce chemin est destiné aux utilisateurs expérimentés.
+- **Non-Minimal** : Extrait les données filtrées en utilisant les caches spécifiques au réseau et les Protobufs provenant des modules de base correspondants (maintenus par l'équipe StreamingFast). Ce chemin génère un Substreams fonctionnel prêt à l'emploi.

-To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using:
+Pour partager votre travail avec la communauté, publiez votre `.spkg` sur le [Substreams registry](https://substreams.dev/) en utilisant :

- `substreams registry login`
- `substreams registry publish`

-> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools.
+> Note : Si vous rencontrez des problèmes dans le Dev Container, utilisez la commande `help` pour accéder aux outils de dépannage.

-## Building a Sink for Your Project
+## Construire un sink pour votre projet

-You can configure your project to query data either through a Subgraph or directly from an SQL database:
+Vous pouvez configurer votre projet pour qu'il interroge des données soit par l'intermédiaire d'un subgraph, soit directement à partir d'une base de données SQL :

-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
-- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
+- **Subgraph** : Exécutez `substreams codegen subgraph`. Cela génère un projet avec des fichiers de base `schema.graphql` et `mappings.ts`. Vous pouvez les personnaliser pour définir des entités basées sur les données extraites par Substreams.
Pour plus de configurations, voir la [documentation du sink Subgraph](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **SQL** : Exécutez `substreams codegen sql` pour les requêtes basées sur SQL. Pour plus d'informations sur la configuration d'un sink SQL, consultez la [documentation SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -## Deployment Options +## Options de déploiement -To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file. +Pour déployer un subgraph, vous pouvez soit exécuter le `graph-node` localement en utilisant la commande `deploy-local`, soit le déployer dans Subgraph Studio en utilisant la commande `deploy` qui se trouve dans le fichier `package.json`. -## Common Errors +## Erreurs courantes -- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command. -If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`. +- Lors d'une exécution locale, vérifiez que tous les conteneurs Docker sont sains en lançant la commande `dev-status`. +- Si vous avez mis le mauvais bloc de départ lors de la génération de votre projet, naviguez jusqu'à `substreams.yaml` pour changer le numéro de bloc, puis relancez `substreams build`. diff --git a/website/src/pages/fr/substreams/developing/sinks.mdx b/website/src/pages/fr/substreams/developing/sinks.mdx index 265c2e31b425..367d24b07099 100644 --- a/website/src/pages/fr/substreams/developing/sinks.mdx +++ b/website/src/pages/fr/substreams/developing/sinks.mdx @@ -1,51 +1,51 @@ --- -title: Official Sinks +title: Sinks officiels --- -Choose a sink that meets your project's needs. +Choisissez un sink qui répond aux besoins de votre projet. 
## Aperçu -Once you find a package that fits your needs, you can choose how you want to consume the data. +Une fois que vous avez trouvé un package qui répond à vos besoins, vous pouvez choisir la façon dont vous voulez utiliser les données. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Les sinks sont des intégrations qui vous permettent d'envoyer les données extraites vers différentes destinations, telles qu'une base de données SQL, un fichier ou un Subgraph. ## Sinks -> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. +> Remarque : certains sinks sont officiellement pris en charge par l'équipe de développement de StreamingFast (c'est-à-dire qu'ils bénéficient d'un soutien actif), mais d'autres sinks sont gérés par la communauté et leur prise en charge n'est pas garantie. -- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. -- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. -- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. -- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. +- [Base de données SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) : Envoyez les données vers une base de données. +- [Subgraph](/sps/introduction/) : Configurez une API pour répondre à vos besoins en matière de données et hébergez-la sur The Graph Network. 
+- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream) : Streamez des données directement depuis votre application. +- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub) : Envoyez des données vers un sujet PubSub. +- [Sinks communautaires](https://docs.substreams.dev/how-to-guides/sinks/community-sinks) : Découvrez des sinks de qualité maintenus par la communauté. -> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io). +> Important : Si vous souhaitez que votre sink (par exemple, SQL ou PubSub) soit hébergé pour vous, contactez l'équipe StreamingFast [ici](mailto:sales@streamingfast.io). -## Navigating Sink Repos +## Naviguer dans les Repos de Sink -### Official +### Officiel -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Nom | Support | Responsable de la maintenance | Code Source | +| ---------- | ------- | 
----------------------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| SDK Go | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| SDK Rust | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| SDK JS | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| Store KV | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | -### Community +### Communauté -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Nom | Support | Responsable de la maintenance | Code Source | +| ---------- | ------- | ----------------------------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Communauté | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| 
Fichiers | C | Communauté | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| Store KV | C | Communauté | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Communauté | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) -C = Community Support +- O = Soutien officiel (par l'un des principaux fournisseurs de Substreams) +- C = Soutien de la communauté diff --git a/website/src/pages/fr/substreams/developing/solana/account-changes.mdx b/website/src/pages/fr/substreams/developing/solana/account-changes.mdx index b295ffdce030..7211f25c5f6e 100644 --- a/website/src/pages/fr/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/fr/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes -sidebarTitle: Account Changes +title: Modifications du compte Solana +sidebarTitle: Modifications du compte --- -Learn how to consume Solana account change data using Substreams. +Apprenez à consommer les données de modification de compte Solana en utilisant Substreams. ## Présentation -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +Ce guide vous accompagne dans le processus de mise en place de votre environnement, de configuration de votre premier flux Substreams et de consommation efficace des modifications de compte. À la fin de ce guide, vous aurez un flux Substreams opérationnel qui vous permettra de suivre les changements de compte en temps réel sur la blockchain Solana, ainsi que les données historiques des changements de compte. 
-> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> NOTE : L'historique des modifications de compte Solana remonte à 2025, au bloc 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +Pour chaque bloc de comptes Substreams Solana, seule la dernière mise à jour par compte est enregistrée, voir la [Référence Protobuf](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). Si un compte est supprimé, une charge utile avec `deleted == True` est fournie. En outre, les événements de faible importance sont omis, tels que ceux dont le propriétaire spécial est le compte “Vote11111111…” ou les changements qui n'affectent pas les données du compte (par exemple, les changements de lamport). -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> NOTE : Pour tester la latence de Substreams pour les comptes Solana, mesurée comme la dérive par rapport à la tête de chaîne, installez la [CLI Substreams](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) et exécutez `substreams run solana-common blocks_without_votes -s -1 -o clock`. ## Introduction -### Prerequisites +### Prérequis -Before you begin, ensure that you have the following: +Avant de commencer, assurez-vous de disposer des éléments suivants : -1. 
[Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1. La [CLI Substreams](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installée. +2. Une [clé Substreams](https://docs.substreams.dev/reference-material/substreams-cli/authentication) pour accéder aux données de modification du compte Solana (Solana Account Change). +3. Une connaissance de base de [l'utilisation](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) de l'interface de ligne de commande (CLI). -### Step 1: Set Up a Connection to Solana Account Change Substreams -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed. -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +### Étape 1 : Établir une connexion au flux Substreams des modifications de compte Solana +Maintenant que vous avez installé la CLI Substreams, vous pouvez établir une connexion au flux Substreams des modifications de compte Solana. +- En utilisant le [Module de base du compte Solana](https://substreams.dev/packages/solana-accounts-foundational/latest), vous pouvez choisir de diffuser les données directement ou d'utiliser l'interface graphique pour une expérience plus visuelle. L'exemple `gui` suivant filtre les données du compte Honey Token. 
```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- Cette commande permet de streamer les modifications apportées aux comptes directement dans votre terminal. ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs. +Le module de base permet de filtrer des comptes et/ou des propriétaires spécifiques. Vous pouvez adapter la requête en fonction de vos besoins. -### Step 2: Sink the Substreams +### Étape 2 : Intégrer les Substreams -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +Consommez le flux de modifications de compte [directement dans votre application](https://docs.substreams.dev/how-to-guides/sinks/stream) à l'aide d'un callback, ou rendez-le interrogeable en utilisant le [sink SQL-DB](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -### Step 3: Setting up a Reconnection Policy +### Étape 3 : Mise en place d'une politique de reconnexion -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream. 
+La [gestion du curseur](https://docs.substreams.dev/reference-material/reliability-guarantees) garantit une continuité et une traçabilité sans faille en vous permettant de reprendre à partir du dernier bloc consommé si la connexion est interrompue. Cette fonctionnalité permet d'éviter les pertes de données et de maintenir un flux persistant. -When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +Lors de la création ou de l'utilisation d'un sink, la responsabilité première de l'utilisateur est de fournir des implémentations de `BlockScopedDataHandler` et une ou plusieurs implémentations de `BlockUndoSignalHandler` qui ont l'interface suivante : ```go import ( diff --git a/website/src/pages/fr/substreams/developing/solana/transactions.mdx b/website/src/pages/fr/substreams/developing/solana/transactions.mdx index 762fc65ad792..4660c252afcf 100644 --- a/website/src/pages/fr/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/fr/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions +title: Transactions Solana sidebarTitle: Transactions --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +Apprenez à initialiser un projet Substreams basé sur Solana dans le Dev Container. -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> Note : ce guide ne concerne pas les [Modifications de compte](/substreams/developing/solana/account-changes/). ## Options -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). 
+Si vous préférez commencer localement dans votre terminal plutôt que par l'intermédiaire du Dev Container (VS Code requis), référez-vous au [Guide d'installation de la CLI Substreams](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). -## Step 1: Initialize Your Solana Substreams Project +## Étape 1 : Initialisation du projet Solana Substreams -1. Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. Ouvrez le [Dev Container](https://github.com/streamingfast/substreams-starter) et suivez les étapes à l'écran pour initialiser votre projet. -2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. L'exécution de `substreams init` vous donnera la possibilité de choisir entre trois options de projet Solana. Sélectionnez la meilleure option pour votre projet : + - **sol-minimal** : Ceci crée un Substreams simple qui extrait les données brutes du bloc Solana et génère le code Rust correspondant. 
Ce chemin vous fait démarrer avec le bloc brut complet, et vous pouvez naviguer vers le `substreams.yaml` (le manifeste) pour modifier l'entrée. + - **sol-transactions** : Ceci crée un Substreams qui filtre les transactions Solana sur la base d'un ou plusieurs Program IDs et/ou Account IDs, en utilisant le [Module fondamental de Solana](https://substreams.dev/streamingfast/solana-common/v0.3.0) mis en cache. + - **sol-anchor-beta** : Ceci crée un Substreams qui décode les instructions et les événements avec un IDL Anchor. Si un IDL n'est pas disponible (voir [Anchor CLI](https://www.anchor-lang.com/docs/cli)), vous devrez le fournir vous-même. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Les modules de Solana Common ne comprennent pas de transactions de vote. Pour obtenir une réduction de 75 % de la taille et des coûts de traitement des données, retardez votre flux de plus de 1000 blocs à partir de la tête. Cela peut être fait en utilisant la fonction [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) de Rust. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Pour accéder aux transactions de vote, utilisez le bloc Solana complet, `sf.solana.type.v1.Block`, comme entrée. -## Step 2: Visualize the Data +## Étape 2 : Visualiser les données -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Exécutez `substreams auth` pour créer votre [compte](https://thegraph.market/) et générer un jeton d'authentification (JWT), puis fournissez ce jeton en entrée. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. 
Vous pouvez maintenant utiliser librement `substreams gui` pour visualiser et itérer sur vos données extraites. -## Step 2.5: (Optionally) Transform the Data +## Étape 2.5 : Transformer (éventuellement) les données -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly. +Dans les répertoires générés, modifiez vos modules Substreams pour inclure des filtres, des agrégations et des transformations supplémentaires, puis mettez à jour le manifeste en conséquence. -## Step 3: Load the Data +## Étape 3 : Charger les données -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +Pour rendre vos Substreams interrogeables (par opposition au [streaming direct](https://docs.substreams.dev/how-to-guides/sinks/stream)), vous pouvez générer automatiquement un [Subgraph alimenté par Substreams](/sps/introduction/) ou un sink SQL-DB. ### Subgraphe -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. Exécutez `substreams codegen subgraph` pour initialiser le sink, en produisant les fichiers et les définitions de fonctions nécessaires. +2. Créez vos [Mappages de Subgraphs](/sps/triggers/) dans le fichier `mappings.ts` et les entités associées dans le fichier `schema.graphql`. +3. Construisez et déployez localement ou sur [Subgraph Studio](https://thegraph.com/studio-pricing/) en lançant `deploy-studio`. ### SQL -1. 
Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB. +1. Exécutez `substreams codegen sql` et choisissez entre ClickHouse et Postgres pour initialiser le sink, en produisant les fichiers nécessaires. +2. Exécutez `substreams build` pour construire le sink [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +3. Exécutez `substreams-sink-sql` pour transférer les données dans la base de données SQL choisie. -> Note: Run `help` to better navigate the development environment and check the health of containers. +> Note : Lancez `help` pour mieux naviguer dans l'environnement de développement et vérifier l'état des conteneurs. ## Ressources supplémentaires -You may find these additional resources helpful for developing your first Solana application. +Ces ressources supplémentaires peuvent vous être utiles pour développer votre première application Solana. -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- La [Référence du Dev Container](/substreams/developing/dev-container/) vous aide à naviguer dans le conteneur et ses erreurs courantes. +- La [référence CLI](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) vous permet d'explorer tous les outils disponibles dans la CLI de Substreams. 
+- La [Référence des composants](https://docs.substreams.dev/reference-material/substreams-components/packages) permet d'approfondir la navigation dans le fichier `substreams.yaml`. diff --git a/website/src/pages/fr/substreams/introduction.mdx b/website/src/pages/fr/substreams/introduction.mdx index 8e17afebc2a0..1f37496ab7c0 100644 --- a/website/src/pages/fr/substreams/introduction.mdx +++ b/website/src/pages/fr/substreams/introduction.mdx @@ -1,26 +1,26 @@ --- -title: Introduction to Substreams +title: Introduction à Substreams sidebarTitle: Présentation --- -![Substreams Logo](/img/substreams-logo.png) +![Logo de Substreams](/img/substreams-logo.png) -To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). +Pour commencer à coder tout de suite, consultez le [Démarrage rapide de Substreams](/substreams/quick-start/). ## Aperçu -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Substreams est une puissante technologie d'indexation parallèle de la blockchain conçue pour améliorer les performances et l'évolutivité au sein de The Graph Network. -## Substreams Benefits +## Avantages de Substreams -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Indexation accélérée** : Réduisez le temps d'indexation des subgraphs grâce à un moteur parallélisé pour une récupération et un traitement plus rapides des données. 
+- **Prise en charge de plusieurs blockchains** : Étendez les capacités d'indexation au-delà des blockchains basées sur EVM, en prenant en charge des écosystèmes tels que Solana, Injective, Starknet et Vara. +- **Modèle de données amélioré** : Accédez à des données complètes, y compris les données de niveau `trace` sur EVM ou les changements de compte sur Solana, tout en gérant efficacement les forks/déconnexions. +- **Support multi-sink :** pour Subgraph, base de données Postgres, ClickHouse et base de données Mongo. ## Le fonctionnement de Substreams en 4 étapes -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. Vous écrivez un programme Rust, qui définit les transformations que vous souhaitez appliquer aux données de la blockchain. Par exemple, la fonction Rust suivante extrait les informations pertinentes d'un bloc Ethereum (numéro, hash et hash parent). ```rust fn get_my_block(blk: Block) -> Result { @@ -34,12 +34,12 @@ fn get_my_block(blk: Block) -> Result { } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. Il suffit d'exécuter une seule commande CLI pour transformer votre programme Rust en un module WASM. -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. Le conteneur WASM est envoyé à un endpoint Substreams pour exécution. Le fournisseur Substreams alimente le conteneur WASM avec les données de la blockchain et les transformations sont appliquées. -4. You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4. 
Vous sélectionnez un [sink](https://docs.substreams.dev/how-to-guides/sinks), un endroit où vous souhaitez envoyer les données transformées (comme une base de données SQL ou un subgraph). ## Ressources supplémentaires -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +Toute la documentation destinée aux développeurs de Substreams est maintenue par l'équipe de développement principale de StreamingFast sur le [Registre Substreams](https://docs.substreams.dev). diff --git a/website/src/pages/fr/substreams/publishing.mdx b/website/src/pages/fr/substreams/publishing.mdx index eecb92d0d48b..6059a7e26c8a 100644 --- a/website/src/pages/fr/substreams/publishing.mdx +++ b/website/src/pages/fr/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: Publication d'un package Substreams +sidebarTitle: Publication --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +Apprenez à publier un package Substreams sur le [Registre Substreams](https://substreams.dev). ## Aperçu -### What is a package? +### Qu'est-ce qu'un package ? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +Un package Substreams est un fichier binaire précompilé qui définit les données spécifiques que vous souhaitez extraire de la blockchain, similaire au fichier `mapping.ts` dans les Subgraphs traditionnels. -## Publish a Package +## Publier un package -### Prerequisites +### Prérequis -- You must have the Substreams CLI installed. -- You must have a Substreams package (`.spkg`) that you want to publish. +- La CLI Substreams doit être installée. +- Vous devez avoir un package Substreams (`.spkg`) que vous voulez publier. 
-### Step 1: Run the `substreams publish` Command +### Étape 1 : Exécuter la commande `substreams publish` -1. In a command-line terminal, run `substreams publish .spkg`. +1. Dans un terminal de ligne de commande, lancez `substreams publish .spkg`. -2. If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. Si aucun jeton n'est configuré sur votre ordinateur, naviguez vers `https://substreams.dev/me`. -![get token](/img/1_get-token.png) +![obtenir un jeton](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### Étape 2 : Obtenir un jeton dans le registre Substreams -1. In the Substreams Registry, log in with your GitHub account. +1. Dans le registre Substreams, connectez-vous avec votre compte GitHub. -2. Create a new token and copy it in a safe location. +2. Créez un nouveau jeton et copiez-le dans un endroit sûr. -![new token](/img/2_new_token.png) +![nouveau jeton](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### Étape 3 : S'authentifier dans la CLI Substreams -1. Back in the Substreams CLI, paste the previously generated token. +1. De retour dans la CLI Substreams, collez le jeton généré précédemment. -![paste token](/img/3_paste_token.png) +![collez le jeton](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. Enfin, confirmez que vous souhaitez publier le package. -![confirm](/img/4_confirm.png) +![confirmer](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +C'est terminé ! Vous avez publié avec succès un package dans le registre Substreams. -![success](/img/5_success.png) +![succès](/img/5_success.png) ## Ressources supplémentaires -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. 
+Visitez [Substreams](https://substreams.dev/) pour découvrir une collection croissante de packages Substreams prêts à l'emploi sur différents réseaux de blockchain. diff --git a/website/src/pages/fr/substreams/quick-start.mdx b/website/src/pages/fr/substreams/quick-start.mdx index ad7774b5102e..75da28206cb5 100644 --- a/website/src/pages/fr/substreams/quick-start.mdx +++ b/website/src/pages/fr/substreams/quick-start.mdx @@ -3,28 +3,28 @@ title: Démarrage rapide des Substreams sidebarTitle: Démarrage rapide --- -Discover how to utilize ready-to-use substream packages or develop your own. +Découvrez comment utiliser des packages Substreams prêts à l'emploi ou développer vos propres packages. ## Aperçu -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +L'intégration de Substreams peut être rapide et facile. Ils ne nécessitent aucune autorisation et vous pouvez [obtenir une clé ici](https://thegraph.market/) sans fournir d'informations personnelles pour commencer à streamer des données onchain. ## Commencez à développer -### Use Substreams Packages +### Utiliser les packages Substreams -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Il existe de nombreux packages Substreams prêts à l'emploi. Vous pouvez les découvrir en visitant le [Registre Substreams](https://substreams.dev) et en les [connectant à un sink](/substreams/developing/sinks/). Le registre vous permet de rechercher et de trouver n'importe quel package répondant à vos besoins.
-Once you find a package that fits your needs, you can choose how you want to consume the data: +Une fois que vous avez trouvé un package qui répond à vos besoins, vous pouvez choisir la façon dont vous voulez utiliser les données : -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. -- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Subgraph](/sps/introduction/)** : Configurez une API pour répondre à vos besoins en matière de données et hébergez-la sur The Graph Network. +- **[Base de données SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)** : Envoyez les données à une base de données. +- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)** : Streamez des données en continu directement dans votre application. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)** : Envoyez des données à un sujet PubSub. -### Develop Your Own +### Développez le vôtre -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. +Si vous ne trouvez pas de package Substreams qui réponde à vos besoins spécifiques, vous pouvez développer le vôtre. Substreams est construit avec Rust, vous écrirez donc des fonctions qui extrairont et filtreront les données dont vous avez besoin à partir de la blockchain.
Pour commencer, consultez les tutoriels suivants : - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Pour construire et optimiser vos Substreams à partir de zéro, utilisez le chemin minimal dans le [conteneur de développement](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Remarque : Substreams garantit que vous ne [manquerez jamais de données](https://docs.substreams.dev/reference-material/reliability-guarantees) grâce à une politique de reconnexion simple. ## Ressources supplémentaires -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. +- Pour obtenir des conseils supplémentaires, consultez les [Tutoriels](https://docs.substreams.dev/tutorials/intro-to-tutorials) et suivez les [Guides pratiques](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) dans la documentation de StreamingFast.
+- Pour mieux comprendre le fonctionnement de Substreams, consultez la [vue d'ensemble de l'architecture](https://docs.substreams.dev/reference-material/architecture) du service de données. diff --git a/website/src/pages/fr/supported-networks.mdx b/website/src/pages/fr/supported-networks.mdx index 54a89c59745f..c1b6ee3fd39c 100644 --- a/website/src/pages/fr/supported-networks.mdx +++ b/website/src/pages/fr/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Réseaux pris en charge hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio repose sur la stabilité et la fiabilité des technologies sous-jacentes, comme les endpoints JSON-RPC, Firehose et Substreams. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- Si un subgraph a été publié via la CLI et repris par un Indexer, il pourrait techniquement être interrogé même sans support, et des efforts sont en cours pour simplifier davantage l'intégration de nouveaux réseaux. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
## Exécution de Graph Node en local If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node peut également indexer d'autres protocoles via une intégration Firehose. Des intégrations Firehose ont été créées pour NEAR, Arweave et les réseaux basés sur Cosmos. De plus, Graph Node peut prendre en charge les subgraphs alimentés par Substreams pour tout réseau prenant en charge Substreams. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/fr/token-api/_meta-titles.json b/website/src/pages/fr/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/fr/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/fr/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
diff --git a/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/fr/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. diff --git a/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/fr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/fr/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
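As a quick sketch, the token metadata described above could be fetched from JavaScript. Note that the `/tokens/evm/{contract}` path shape, the `network_id` parameter, and the contract address below are illustrative assumptions inferred from the endpoint description, not confirmed API details:

```javascript
// Hypothetical request builder for the Tokens endpoint; the path and the
// `network_id` query parameter are assumptions, not confirmed API details.
function buildTokenMetadataRequest(contract, jwt, networkId) {
  const url = new URL(`https://token-api.thegraph.com/tokens/evm/${contract}`)
  if (networkId) url.searchParams.set('network_id', networkId)
  return {
    url: url.toString(),
    options: {
      method: 'GET',
      headers: { Accept: 'application/json', Authorization: `Bearer ${jwt}` },
    },
  }
}

const { url, options } = buildTokenMetadataRequest(
  '0xdAC17F958D2ee523a2206206994597C13D831ec7', // illustrative ERC-20 contract
  '<jwt-token>',
  'mainnet',
)
// fetch(url, options).then((r) => r.json()).then(console.log)
```

Building the URL separately from the `fetch` call keeps the request easy to inspect and test without network access.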
diff --git a/website/src/pages/fr/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/fr/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/fr/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/fr/token-api/faq.mdx b/website/src/pages/fr/token-api/faq.mdx new file mode 100644 index 000000000000..55125891c079 --- /dev/null +++ b/website/src/pages/fr/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Général + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <jwt-token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
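A minimal sketch of building the header correctly — the helper below is hypothetical (not part of any SDK) and simply guards against the common mistakes listed above:

```javascript
// Hypothetical helper: builds the auth headers and guards against the common
// mistakes described above (missing token, duplicated "Bearer " prefix).
function authHeaders(jwt) {
  if (!jwt || typeof jwt !== 'string') {
    throw new Error('Missing JWT: generate an access token on The Graph Market')
  }
  // Strip a prefix the caller may have already added, then add it exactly once.
  const token = jwt.startsWith('Bearer ') ? jwt.slice('Bearer '.length) : jwt
  return { Accept: 'application/json', Authorization: `Bearer ${token}` }
}

// fetch('https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', {
//   headers: authHeaders('<jwt-token>'),
// })
```

Remember that the value passed in must be the access token generated on The Graph Market, not the raw API key.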
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <jwt-token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service.
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/fr/token-api/mcp/claude.mdx b/website/src/pages/fr/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..3c7c756d5b31 --- /dev/null +++ b/website/src/pages/fr/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prérequis + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## La Configuration + +Create or edit your `claude_desktop_config.json` file. 
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" { "mcpServers": { "token-api": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { "ACCESS_TOKEN": "" } } } } ``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/fr/token-api/mcp/cline.mdx b/website/src/pages/fr/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..e4952d58a1d9 --- /dev/null +++ b/website/src/pages/fr/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prérequis + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## La Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" { "mcpServers": { "mcp-pinax": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { "ACCESS_TOKEN": "" } } } } ``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/fr/token-api/mcp/cursor.mdx b/website/src/pages/fr/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..ae68e7ff6cf9 --- /dev/null +++ b/website/src/pages/fr/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prérequis + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## La Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" { "mcpServers": { "mcp-pinax": { "command": "npx", "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], "env": { "ACCESS_TOKEN": "" } } } } ``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/fr/token-api/monitoring/get-health.mdx b/website/src/pages/fr/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/fr/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/fr/token-api/monitoring/get-networks.mdx b/website/src/pages/fr/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/fr/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/fr/token-api/monitoring/get-version.mdx b/website/src/pages/fr/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/fr/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/fr/token-api/quick-start.mdx b/website/src/pages/fr/token-api/quick-start.mdx new file mode 100644 index 000000000000..4a38a878fd7c --- /dev/null +++ b/website/src/pages/fr/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Démarrage rapide +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prérequis + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer <jwt-token>`. + +```json { "headers": { "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" } } ``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' const options = { method: 'GET', headers: { Accept: 'application/json', Authorization: 'Bearer <jwt-token>', }, } fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) .then((response) => response.json()) .then((response) => console.log(response)) .catch((err) => console.error(err)) ``` + +Make sure to replace `<jwt-token>` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command.
+ +```curl curl --request GET \ --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ --header 'Accept: application/json' \ --header 'Authorization: Bearer <jwt-token>' ``` + +Make sure to replace `<jwt-token>` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) .then((response) => { console.log('Status Code:', response.status) return response.json() }) .then((data) => console.log(data)) .catch((err) => console.error('Error:', err)) ``` diff --git a/website/src/pages/hi/about.mdx b/website/src/pages/hi/about.mdx index 7f9feff0a53e..53b13b3188a9 100644 --- a/website/src/pages/hi/about.mdx +++ b/website/src/pages/hi/about.mdx @@ -28,27 +28,27 @@ Alternatively, you have the option to set up your own server, process the transa ब्लॉकचेन की विशेषताएँ, जैसे अंतिमता, चेन पुनर्गठन, और अंकल ब्लॉक्स, प्रक्रिया में जटिलता जोड़ती हैं, जिससे ब्लॉकचेन डेटा से सटीक क्वेरी परिणाम प्राप्त करना समय लेने वाला और अवधारणात्मक रूप से चुनौतीपूर्ण हो जाता है। -## The Graph एक समाधान प्रदान करता है +## The Graph एक समाधान प्रदान करता है -The Graph इस चुनौती को एक विकेन्द्रीकृत प्रोटोकॉल के माध्यम से हल करता है जो ब्लॉकचेन डेटा को इंडेक्स करता है और उसकी कुशल और उच्च-प्रदर्शन वाली क्वेरी करने की सुविधा प्रदान करता है। ये एपीआई (इंडेक्स किए गए "सबग्राफ") फिर एक मानक GraphQL एपीआई के साथ क्वेरी की जा सकती हैं। +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
आज एक विकेंद्रीकृत प्रोटोकॉल है, जो [Graph Node](https://github.com/graphprotocol/graph-node) के ओपन सोर्स इम्प्लीमेंटेशन द्वारा समर्थित है, जो इस प्रक्रिया को सक्षम बनाता है। ### The Graph कैसे काम करता है -ब्लॉकचेन डेटा को इंडेक्स करना बहुत मुश्किल होता है, लेकिन The Graph इसे आसान बना देता है। The Graph सबग्राफ्स का उपयोग करके एथेरियम डेटा को इंडेक्स करना सीखता है। सबग्राफ्स ब्लॉकचेन डेटा पर बनाए गए कस्टम एपीआई होते हैं, जो ब्लॉकचेन से डेटा निकालते हैं, उसे प्रोसेस करते हैं, और उसे इस तरह स्टोर करते हैं ताकि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### विशिष्टताएँ -- The Graph का उपयोग subgraph विवरणों के लिए करता है, जिन्हें subgraph के अंदर subgraph manifest के रूप में जाना जाता है। +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- सबग्राफ विवरण उन स्मार्ट कॉन्ट्रैक्ट्स की रूपरेखा प्रदान करता है जो एक सबग्राफ के लिए महत्वपूर्ण हैं, उन कॉन्ट्रैक्ट्स के भीतर कौन-कौन सी घटनाओं पर ध्यान केंद्रित करना है, और घटना डेटा को उस डेटा से कैसे मैप करना है जिसे The Graph अपने डेटाबेस में संग्रहीत करेगा। +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- जब आप एक subgraph बना रहे होते हैं, तो आपको एक subgraph मैनिफेस्ट लिखने की आवश्यकता होती है। +- When creating a Subgraph, you need to write a Subgraph manifest. 
-- `Subgraph manifest` लिखने के बाद, आप Graph CLI का उपयोग करके परिभाषा को IPFS में संग्रहीत कर सकते हैं और एक Indexer को उस subgraph के लिए डेटा को इंडेक्स करने का निर्देश दे सकते हैं। +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -नीचे दिया गया आरेख Ethereum लेनदेन के साथ subgraph मैनिफेस्ट को डिप्लॉय करने के बाद डेटा के प्रवाह के बारे में अधिक विस्तृत जानकारी प्रदान करता है। +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![एक ग्राफ़िक समझाता है कि कैसे ग्राफ़ डेटा उपभोक्ताओं को क्वेरीज़ प्रदान करने के लिए ग्राफ़ नोड का उपयोग करता है](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The Graph इस चुनौती को एक विकेन्द्री 1. एक विकेंद्रीकृत एप्लिकेशन स्मार्ट अनुबंध पर लेनदेन के माध्यम से एथेरियम में डेटा जोड़ता है। 2. लेन-देन संसाधित करते समय स्मार्ट अनुबंध एक या अधिक घटनाओं का उत्सर्जन करता है। -3. ग्राफ़ नोड लगातार नए ब्लॉकों के लिए एथेरियम को स्कैन करता है और आपके सबग्राफ के डेटा में शामिल हो सकता है। -4. ग्राफ नोड इन ब्लॉकों में आपके सबग्राफ के लिए एथेरियम ईवेंट ढूंढता है और आपके द्वारा प्रदान किए गए मैपिंग हैंडलर को चलाता है। मैपिंग एक WASM मॉड्यूल है जो एथेरियम घटनाओं के जवाब में ग्राफ़ नोड द्वारा संग्रहीत डेटा संस्थाओं को बनाता या अपडेट करता है। +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. 
नोड के [GraphQL समापन बिंदु](https://graphql.org/learn/) का उपयोग करते हुए, विकेन्द्रीकृत एप्लिकेशन ब्लॉकचैन से अनुक्रमित डेटा के लिए ग्राफ़ नोड से पूछताछ करता है। ग्राफ़ नोड बदले में इस डेटा को प्राप्त करने के लिए, स्टोर की इंडेक्सिंग क्षमताओं का उपयोग करते हुए, अपने अंतर्निहित डेटा स्टोर के लिए ग्राफ़कॉल प्रश्नों का अनुवाद करता है। विकेंद्रीकृत एप्लिकेशन इस डेटा को एंड-यूजर्स के लिए एक समृद्ध यूआई में प्रदर्शित करता है, जिसका उपयोग वे एथेरियम पर नए लेनदेन जारी करने के लिए करते हैं। चक्र दोहराता है।

## अगले कदम

-निम्नलिखित अनुभागों में subgraphs, उनके डिप्लॉयमेंट और डेटा क्वेरी करने के तरीके पर अधिक गहराई से जानकारी दी गई है।
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.

-अपना खुद का subgraph लिखने से पहले, यह अनुशंसा की जाती है कि आप [Graph Explorer](https://thegraph.com/explorer) को एक्सप्लोर करें और पहले से डिप्लॉय किए गए कुछ subgraphs की समीक्षा करें। प्रत्येक subgraph के पेज में एक GraphQL प्लेग्राउंड शामिल होता है, जिससे आप उसके डेटा को क्वेरी कर सकते हैं।
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx
index 35afafb65cd3..8f4654210970 100644
--- a/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/hi/archived/arbitrum/arbitrum-faq.mdx
@@ -2,29 +2,29 @@
title: Arbitrum FAQ
---

-Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs.
+यदि आप आर्बिट्रम बिलिंग एफएक्यू पर जाना चाहते हैं तो [यहाँ](#billing-on-arbitrum-faqs) क्लिक करें।

## The Graph ने L2 समाधान को लागू करने का कारण क्या था?
-L2 पर The Graph को स्केल करके, नेटवर्क के प्रतिभागी अब निम्नलिखित लाभ उठा सकते हैं: + L2 पर The Graph को स्केल करके, नेटवर्क के प्रतिभागी अब निम्नलिखित लाभ उठा सकते हैं: -- Upwards of 26x savings on gas fees +- गैस शुल्क पर 26 गुना से अधिक की बचत -- Faster transaction speed +- तेज़ लेनदेन गति -- Security inherited from Ethereum +- सुरक्षा एथेरियम से विरासत में मिली है -L2 पर प्रोटोकॉल स्मार्ट कॉन्ट्रैक्ट्स को स्केल करने से नेटवर्क के प्रतिभागियों को गैस शुल्क में कमी के साथ अधिक बार इंटरैक्ट करने की अनुमति मिलती है। उदाहरण के लिए, Indexer अधिक बार आवंटन खोल और बंद कर सकते हैं ताकि अधिक सबग्राफ़ को इंडेक्स किया जा सके। डेवलपर्स सबग्राफ़ को अधिक आसानी से तैनात और अपडेट कर सकते हैं, और डेलीगेटर्स अधिक बार GRT को डेलीगेट कर सकते हैं। क्यूरेटर अधिक सबग्राफ़ में सिग्नल जोड़ या हटा सकते हैं—ऐसे कार्य जो पहले गैस की उच्च लागत के कारण अक्सर करना बहुत महंगा माना जाता था। +स्केलिंग प्रोटोकॉल स्मार्ट contract को L2 पर ले जाने से नेटवर्क प्रतिभागियों को कम गैस शुल्क में अधिक बार इंटरैक्ट करने की सुविधा मिलती है। उदाहरण के लिए, Indexers अधिक सबग्राफ को इंडेक्स करने के लिए अधिक बार आवंटन खोल और बंद कर सकते हैं। डेवलपर्स अधिक आसानी से सबग्राफ को डिप्लॉय और अपडेट कर सकते हैं, और Delegators अधिक बार GRT डेलीगेट कर सकते हैं। Curators अधिक संख्या में सबग्राफ में सिग्नल जोड़ या हटा सकते हैं—जो पहले गैस लागत के कारण बार-बार करना महंगा माना जाता था। -The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. +ग्राफ समुदाय ने पिछले साल [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) चर्चा के नतीजे के बाद आर्बिट्रम के साथ आगे बढ़ने का फैसला किया। ## What do I need to do to use The Graph on L2? 
-The Graph का बिलिंग सिस्टम Arbitrum पर GRT को स्वीकार करता है, और उपयोगकर्ताओं को गैस के भुगतान के लिए Arbitrum पर ETH की आवश्यकता होगी। जबकि The Graph प्रोटोकॉल Ethereum Mainnet पर शुरू हुआ, सभी गतिविधियाँ, जिसमें बिलिंग कॉन्ट्रैक्ट्स भी शामिल हैं, अब Arbitrum One पर हैं।
+The Graph का बिलिंग सिस्टम Arbitrum पर GRT को स्वीकार करता है, और उपयोगकर्ताओं को गैस के भुगतान के लिए Arbitrum पर ETH की आवश्यकता होगी। जबकि The Graph प्रोटोकॉल Ethereum Mainnet पर शुरू हुआ, सभी गतिविधियाँ, जिसमें बिलिंग कॉन्ट्रैक्ट्स भी शामिल हैं, अब Arbitrum One पर हैं।

अत: क्वेरीज़ के लिए भुगतान करने के लिए, आपको Arbitrum पर GRT की आवश्यकता है। इसे प्राप्त करने के कुछ विभिन्न तरीके यहाँ दिए गए हैं:

-- यदि आपके पास पहले से Ethereum पर GRT है, तो आप इसे Arbitrum पर ब्रिज कर सकते हैं। आप यह Subgraph Studio में प्रदान किए गए GRT ब्रिजिंग विकल्प के माध्यम से या निम्नलिखित में से किसी एक ब्रिज का उपयोग करके कर सकते हैं:
+- यदि आपके पास पहले से Ethereum पर GRT है, तो आप इसे Arbitrum पर ब्रिज कर सकते हैं। आप यह Subgraph Studio में प्रदान किए गए GRT ब्रिजिंग विकल्प के माध्यम से या निम्नलिखित में से किसी एक ब्रिज का उपयोग करके कर सकते हैं:

  - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
  - [TransferTo](https://transferto.xyz/swap)

@@ -35,11 +35,11 @@ The Graph का बिलिंग सिस्टम Arbitrum पर GRT क

एक बार जब आपके पास Arbitrum पर GRT हो, तो आप इसे अपनी बिलिंग बैलेंस में जोड़ सकते हैं।

-To take advantage of using The Graph on L2, use this dropdown switcher to toggle between chains.
+L2 पर The Graph का उपयोग करने का लाभ उठाने के लिए, इस dropdown switcher का उपयोग chains के बीच toggle करने के लिए करें।

![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png)

-## Subgraph developer, data consumer, Indexer, Curator, or Delegator, के रूप में, मुझे अब क्या करने की आवश्यकता है?
+## एक सबग्राफ developer, data consumer, Indexer, Curator, या Delegator के रूप में, अब आपको क्या करना चाहिए?
The Graph Network में भाग लेने के लिए नेटवर्क प्रतिभागियों को Arbitrum पर स्थानांतरित होना आवश्यक है। अतिरिक्त सहायता के लिए कृपया [L2 Transfer Tool मार्गदर्शक](/archived/arbitrum/l2-transfer-tools-guide/) देखें।

@@ -47,33 +47,34 @@

## क्या नेटवर्क को L2 पर स्केल करने से संबंधित कोई जोखिम थे?

-सभी स्मार्ट कॉन्ट्रैक्ट्स का पूरी तरह से परीक्षित किया गया है। (https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf).
+सभी स्मार्ट कॉन्ट्रैक्ट्स का पूरी तरह से परीक्षण किया गया है।
+([ऑडिट रिपोर्ट](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf))

हर चीज़ का पूरी तरह से परीक्षण किया गया है, और एक सुरक्षित और निर्बाध संक्रमण सुनिश्चित करने के लिए एक आकस्मिक योजना बनाई गई है। विवरण [यहां](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) पाया जा सकता है।

-## क्या Ethereum पर मौजूद सबग्राफ़ काम कर रहे हैं?
+## क्या मौजूदा सबग्राफ Ethereum पर काम कर रहे हैं?

-सभी सबग्राफ अब Arbitrum पर हैं। कृपया [ L2 Transfer Tool मार्गदर्शक](/archived/arbitrum/l2-transfer-tools-guide/) का संदर्भ लें ताकि आपके सबग्राफ बिना किसी समस्या के कार्य करें।
+सभी सबग्राफ अब Arbitrum पर हैं। कृपया [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) देखें ताकि आपके सबग्राफ बिना किसी समस्या के कार्य कर सकें।

## क्या GRT का एक नया स्मार्ट कॉन्ट्रैक्ट Arbitrum पर तैनात किया गया है?

-Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational.
+हां, जीआरटी के पास एक अतिरिक्त [आर्बिट्रम पर स्मार्ट अनुबंध](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) है। हालाँकि, एथेरियम मेननेट [जीआरटी अनुबंध](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) चालू रहेगा।

## Arbitrum पर बिलिंग FAQs

-## What do I need to do about the GRT in my billing balance?
+## मुझे अपने billing balance में GRT के बारे में क्या करना होगा?

-Nothing! Your GRT has been securely migrated to Arbitrum and is being used to pay for queries as you read this.
+कुछ नहीं! आपके GRT को Arbitrum में सुरक्षित रूप से migrate कर दिया गया है और जब आप इसे पढ़ रहे हैं तो इसका उपयोग queries के भुगतान के लिए किया जा रहा है।

-## How do I know my funds have migrated securely to Arbitrum?
+## मुझे कैसे पता चलेगा कि मेरे funds Arbitrum में सुरक्षित रूप से migrate हो गए हैं?

सभी जीआरटी बिलिंग शेष पहले ही सफलतापूर्वक आर्बिट्रम में स्थानांतरित कर दिए गए हैं। आप आर्बिट्रम पर बिलिंग अनुबंध [यहां](https://arbiscan.io/address/0x1B07D3344188908Fb6DEcEac381f3eE63C48477a) देख सकते हैं।

-## How do I know the Arbitrum bridge is secure?
+## मुझे कैसे पता चलेगा कि Arbitrum bridge सुरक्षित है?

-The bridge has been [heavily audited](https://code4rena.com/contests/2022-10-the-graph-l2-bridge-contest) to ensure safety and security for all users.
+सभी उपयोगकर्ताओं के लिए सुरक्षा सुनिश्चित करने के लिए पुल का [भारी ऑडिट](https://code4rena.com/contests/2022-10-the-graph-l2-bridge-contest) किया गया है।

-## What do I need to do if I'm adding fresh GRT from my Ethereum mainnet wallet?
+## यदि मैं अपने Ethereum mainnet wallet से fresh GRT add कर रहा हूँ तो मुझे क्या करने की आवश्यकता है?
आपके आर्बिट्रम बिलिंग बैलेंस में जीआरटी जोड़ना [सबग्राफ स्टूडियो](https://thegraph.com/studio/) में एक-क्लिक अनुभव के साथ किया जा सकता है। आप आसानी से अपने जीआरटी को आर्बिट्रम से जोड़ सकेंगे और एक लेनदेन में अपनी एपीआई कुंजी भर सकेंगे।
diff --git a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx
index 66574cb53dd4..2928bdbccb78 100644
--- a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -1,100 +1,100 @@
---
-title: L2 Transfer Tools FAQ
+title: L2 स्थानांतरण उपकरण अक्सर पूछे जाने वाले प्रश्न
---

## आम

-### What are L2 Transfer Tools?
+### L2 स्थानांतरण उपकरण क्या हैं?

-The Graph has made it 26x cheaper for contributors to participate in the network by deploying the protocol to Arbitrum One. The L2 Transfer Tools were created by core devs to make it easy to move to L2.
+ग्राफ़ ने आर्बिट्रम वन में प्रोटोकॉल लागू करके योगदानकर्ताओं के लिए नेटवर्क में भाग लेना 26 गुना सस्ता कर दिया है। L2 ट्रांसफर टूल्स को कोर डेवलपर्स द्वारा L2 पर ले जाना आसान बनाने के लिए बनाया गया था।

-For each network participant, a set of L2 Transfer Tools are available to make the experience seamless when moving to L2, avoiding thawing periods or having to manually withdraw and bridge GRT.
+प्रत्येक नेटवर्क प्रतिभागी के लिए L2 ट्रांसफर टूल्स का एक सेट उपलब्ध है, जो L2 पर जाने के अनुभव को सहज बनाता है, थॉइंग (thawing) अवधि से बचाता है, और GRT को मैन्युअल रूप से निकालकर ब्रिज करने की आवश्यकता को समाप्त करता है।

-These tools will require you to follow a specific set of steps depending on what your role is within The Graph and what you are transferring to L2.
+इन उपकरणों के लिए आपको चरणों के एक विशिष्ट सेट का पालन करने की आवश्यकता होगी जो इस बात पर निर्भर करेगा कि ग्राफ़ के भीतर आपकी भूमिका क्या है और आप एल2 में क्या स्थानांतरित कर रहे हैं।

-### Can I use the same wallet I use on Ethereum mainnet?
+### क्या मैं उसी वॉलेट का उपयोग कर सकता हूँ जिसका उपयोग मैं एथेरियम मेननेट पर करता हूँ?

यदि आप [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) वॉलेट का उपयोग कर रहे हैं, तो आप उसी पते का उपयोग कर सकते हैं। यदि आपका Ethereum mainnet वॉलेट एक contract है (जैसे कि एक multisig), तो आपको एक [Arbitrum बटुआ पता](/archived/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) निर्दिष्ट करना होगा जहाँ आपका ट्रांसफर भेजा जाएगा। कृपया पते को ध्यानपूर्वक जांचें, क्योंकि गलत पते पर ट्रांसफर करने से स्थायी हानि हो सकती है। यदि आप L2 पर multisig का उपयोग करना चाहते हैं, तो सुनिश्चित करें कि आपने Arbitrum One पर एक multisig contract तैनात किया हो।

-Wallets on EVM blockchains like Ethereum and Arbitrum are a pair of keys (public and private), that you create without any need to interact with the blockchain. So any wallet that was created for Ethereum will also work on Arbitrum without having to do anything else.
+एथेरियम और आर्बिट्रम जैसे ईवीएम ब्लॉकचेन पर वॉलेट कुंजी (सार्वजनिक और निजी) की एक जोड़ी है, जिसे आप ब्लॉकचेन के साथ बातचीत करने की आवश्यकता के बिना बनाते हैं। इसलिए एथेरियम के लिए बनाया गया कोई भी वॉलेट बिना कुछ और किए आर्बिट्रम पर भी काम करेगा।

-The exception is with smart contract wallets like multisigs: these are smart contracts that are deployed separately on each chain, and get their address when they are deployed. If a multisig was deployed to Ethereum, it won't exist with the same address on Arbitrum. A new multisig must be created first on Arbitrum, and may get a different address.
+अपवाद मल्टीसिग जैसे स्मार्ट कॉन्ट्रैक्ट वॉलेट के साथ है: ये स्मार्ट कॉन्ट्रैक्ट हैं जो प्रत्येक श्रृंखला पर अलग से तैनात किए जाते हैं, और तैनात होने पर उनका पता प्राप्त होता है। यदि एक मल्टीसिग को एथेरियम पर तैनात किया गया था, तो यह आर्बिट्रम पर समान पते के साथ मौजूद नहीं होगा। आर्बिट्रम पर पहले एक नया मल्टीसिग बनाया जाना चाहिए, और उसे एक अलग पता मिल सकता है।

### यदि मैं अपना स्थानांतरण 7 दिनों में पूरा नहीं कर पाता तो क्या होगा?
L2 ट्रांसफर टूल L1 से L2 तक संदेश भेजने के लिए आर्बिट्रम के मूल तंत्र का उपयोग करते हैं। इस तंत्र को "पुनर्प्रयास योग्य टिकट" कहा जाता है और इसका उपयोग आर्बिट्रम जीआरटी ब्रिज सहित सभी देशी टोकन ब्रिजों द्वारा किया जाता है। आप पुनः प्रयास योग्य टिकटों के बारे में अधिक जानकारी [आर्बिट्रम डॉक्स](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) में पढ़ सकते हैं।

-जब आप अपनी संपत्ति (सबग्राफ, हिस्सेदारी, प्रतिनिधिमंडल या क्यूरेशन) को एल2 में स्थानांतरित करते हैं, तो आर्बिट्रम जीआरटी ब्रिज के माध्यम से एक संदेश भेजा जाता है जो एल2 में एक पुनः प्रयास योग्य टिकट बनाता है। ट्रांसफ़र टूल में लेन-देन में कुछ ETH मान शामिल होते हैं, जिनका उपयोग 1) टिकट बनाने के लिए भुगतान करने और 2) L2 में टिकट निष्पादित करने के लिए गैस का भुगतान करने के लिए किया जाता है। हालाँकि, क्योंकि गैस की कीमतें L2 में निष्पादित होने के लिए टिकट तैयार होने तक के समय में भिन्न हो सकती हैं, यह संभव है कि यह ऑटो-निष्पादन प्रयास विफल हो जाए। जब ऐसा होता है, तो आर्बिट्रम ब्रिज पुनः प्रयास योग्य टिकट को 7 दिनों तक जीवित रखेगा, और कोई भी टिकट को "रिडीम" करने का पुनः प्रयास कर सकता है (जिसके लिए आर्बिट्रम में ब्रिज किए गए कुछ ईटीएच के साथ वॉलेट की आवश्यकता होती है)।
+जब आप अपने assets (सबग्राफ, stake, delegation या curation) को L2 में ट्रांसफर करते हैं, तो एक संदेश Arbitrum GRT bridge के माध्यम से भेजा जाता है, जो L2 में एक retryable ticket बनाता है। ट्रांसफर टूल लेनदेन में कुछ ETH मूल्य शामिल करता है, जिसका उपयोग 1) टिकट बनाने के लिए भुगतान करने और 2) L2 में टिकट को निष्पादित करने के लिए गैस के भुगतान के लिए किया जाता है। हालाँकि, क्योंकि गैस की कीमतें उस समय तक बदल सकती हैं जब तक टिकट L2 में निष्पादित होने के लिए तैयार होता है, यह संभव है कि यह ऑटो-निष्पादन प्रयास विफल हो जाए। जब ऐसा होता है, तो Arbitrum bridge 7 दिनों तक retryable ticket को सक्रिय रखेगा, और कोई भी टिकट को "redeem" करने का पुनः प्रयास कर सकता है (जिसके लिए Arbitrum पर कुछ ETH ब्रिज किए गए वॉलेट की आवश्यकता होगी)।

-इसे हम सभी स्थानांतरण टूल में "पुष्टि करें" चरण कहते हैं - यह ज्यादातर मामलों में स्वचालित रूप से चलेगा, क्योंकि ऑटो-निष्पादन
अक्सर सफल होता है, लेकिन यह महत्वपूर्ण है कि आप यह सुनिश्चित करने के लिए वापस जांचें कि यह पूरा हो गया है। यदि यह सफल नहीं होता है और 7 दिनों में कोई सफल पुनर्प्रयास नहीं होता है, तो आर्बिट्रम ब्रिज टिकट को खारिज कर देगा, और आपकी संपत्ति (सबग्राफ, हिस्सेदारी, प्रतिनिधिमंडल या क्यूरेशन) खो जाएगी और पुनर्प्राप्त नहीं की जा सकेगी। ग्राफ़ कोर डेवलपर्स के पास इन स्थितियों का पता लगाने और बहुत देर होने से पहले टिकटों को भुनाने की कोशिश करने के लिए एक निगरानी प्रणाली है, लेकिन यह सुनिश्चित करना अंततः आपकी ज़िम्मेदारी है कि आपका स्थानांतरण समय पर पूरा हो जाए। यदि आपको अपने लेनदेन की पुष्टि करने में परेशानी हो रही है, तो कृपया [इस फॉर्म](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) और कोर डेव का उपयोग करके संपर्क करें आपकी मदद के लिए वहाँ मौजूद रहूँगा. +यह वह चरण है जिसे हम सभी ट्रांसफर टूल्स में "Confirm" स्टेप कहते हैं - यह ज्यादातर मामलों में स्वचालित रूप से चलेगा, क्योंकि ऑटो-एक्सीक्यूशन आमतौर पर सफल होता है, लेकिन यह महत्वपूर्ण है कि आप यह सुनिश्चित करने के लिए वापस जांचें कि यह सफलतापूर्वक पूरा हुआ है। यदि यह सफल नहीं होता है और 7 दिनों के भीतर कोई सफल पुनःप्रयास नहीं होता है, तो Arbitrum ब्रिज टिकट को हटा देगा, और आपके assets (सबग्राफ, stake, delegation या curation) खो जाएंगे और उन्हें पुनः प्राप्त नहीं किया जा सकता। The Graph के कोर डेव्स के पास ऐसी स्थितियों का पता लगाने और टिकट को समय रहते रिडीम करने के लिए एक मॉनिटरिंग सिस्टम है, लेकिन अंततः यह आपकी जिम्मेदारी है कि आप सुनिश्चित करें कि आपका ट्रांसफर समय पर पूरा हो जाए। यदि आपको अपने ट्रांजेक्शन की पुष्टि करने में समस्या आ रही है, तो कृपया [इस फॉर्म](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) का उपयोग करके संपर्क करें, और कोर डेव्स आपकी सहायता के लिए उपलब्ध होंगे। ### मैंने अपना डेलिगेशन/स्टेक/क्यूरेशन ट्रांसफर शुरू कर दिया है और मुझे यकीन नहीं है कि यह एल2 तक पहुंच गया है या नहीं, मैं कैसे पुष्टि कर सकता हूं कि इसे सही तरीके से ट्रांसफर किया गया था? 
यदि आपको अपनी प्रोफ़ाइल पर स्थानांतरण पूरा करने के लिए कहने वाला कोई बैनर नहीं दिखता है, तो संभव है कि लेन-देन सुरक्षित रूप से L2 पर पहुंच गया है और किसी और कार्रवाई की आवश्यकता नहीं है। यदि संदेह है, तो आप जांच सकते हैं कि एक्सप्लोरर आर्बिट्रम वन पर आपका प्रतिनिधिमंडल, हिस्सेदारी या क्यूरेशन दिखाता है या नहीं।

-If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it. Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire.
+यदि आपके पास L1 transaction hash है (जिसे आप अपने वॉलेट के हाल के लेनदेन में देख सकते हैं), तो आप यह भी पुष्टि कर सकते हैं कि L2 तक संदेश ले जाने वाला "retryable ticket" यहाँ redeem किया गया था या नहीं: https://retryable-dashboard.arbitrum.io/ - यदि auto-redeem विफल रहा, तो आप वहाँ अपना wallet कनेक्ट करके इसे redeem कर सकते हैं। आश्वस्त रहें कि core devs भी फंसे हुए संदेशों की निगरानी कर रहे हैं, और समाप्त होने से पहले उन्हें redeem करने का प्रयास करेंगे।

## सबग्राफ स्थानांतरण

-### मैं अपना सबग्राफ कैसे स्थानांतरित करूं?
+### मेरा सबग्राफ कैसे ट्रांसफर करें?

-अपने सबग्राफ को स्थानांतरित करने के लिए, आपको निम्नलिखित चरणों को पूरा करने होंगे:
+अपने सबग्राफ को स्थानांतरित करने के लिए, आपको निम्नलिखित चरणों को पूरा करना होगा:

1. Ethereum mainnet पर हस्तांतरण शुरू करें

2. पुष्टि के लिए 20 मिनट का इंतजार करें

-3. आर्बिट्रमवर सबग्राफ हस्तांतरणाची पुष्टी करा\*
+3. Arbitrum पर सबग्राफ स्थानांतरण की पुष्टि करें\*

-4. आर्बिट्रम पर सबग्राफ का प्रकाशन समाप्त करें
+4. सबग्राफ को Arbitrum पर प्रकाशित करना समाप्त करें

5. क्वेरी यूआरएल अपडेट करें (अनुशंसित)

-\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost.
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*ध्यान दें कि आपको स्थानांतरण की पुष्टि 7 दिनों के भीतर करनी होगी, अन्यथा आपका सबग्राफ खो सकता है। अधिकांश मामलों में, यह चरण स्वचालित रूप से पूरा हो जाएगा, लेकिन यदि Arbitrum पर गैस मूल्य में अचानक वृद्धि होती है, तो मैन्युअल पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या आती है, तो सहायता के लिए संसाधन उपलब्ध होंगे: support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर समर्थन से संपर्क करें।

### मुझे अपना स्थानांतरण कहाँ से आरंभ करना चाहिए?

-आप[Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) या किसी भी Subgraph विवरण पृष्ठ से अपने transfer को प्रारंभ कर सकते हैं। Subgraph विवरण पृष्ठ में "Transfer " button पर click करके transfer आरंभ करें।
+आप अपना ट्रांसफर शुरू कर सकते हैं [सबग्राफ Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) या किसी भी सबग्राफ विवरण पृष्ठ से। ट्रांसफर शुरू करने के लिए सबग्राफ विवरण पृष्ठ पर "Transfer सबग्राफ" बटन पर क्लिक करें।

-### मेरा सबग्राफ़ स्थानांतरित होने तक मुझे कितने समय तक प्रतीक्षा करनी होगी?
+### मेरा सबग्राफ ट्रांसफर होने में कितना समय लगेगा?

अंतरण करने में लगभग 20 मिनट का समय लगता है। Arbitrum bridge स्वचालित रूप से bridge अंतरण पूरा करने के लिए पृष्ठभूमि में काम कर रहा है। कुछ मामलों में, गैस लागत में spike हो सकती है और आपको transaction की पुष्टि फिर से करनी होगी।

-### क्या मेरा सबग्राफ L2 में स्थानांतरित करने के बाद भी खोजा जा सकेगा?
+### क्या मेरा सबग्राफ L2 पर ट्रांसफर करने के बाद भी खोजने योग्य रहेगा?
-आपका सबग्राफ केवल उस नेटवर्क पर खोजने योग्य होगा जिस पर यह प्रकाशित किया गया है। उदाहरण स्वरूप, यदि आपका सबग्राफ आर्बिट्रम वन पर है, तो आपकेंद्रीय तंत्र पर केवल आर्बिट्रम वन के खोजक में ही ढूंढा जा सकता है और आप इथेरियम पर इसे नहीं खोज पाएंगे। कृपया सुनिश्चित करें कि आपने पृष्ठ के शीर्ष में नेटवर्क स्विचर में आर्बिट्रम वन को चुना है ताकि आप सही नेटवर्क पर हों। अंतरण के बाद, L1 सबग्राफ को पुराना किया गया माना जाएगा। +आपका सबग्राफ केवल उसी नेटवर्क पर खोजा जा सकेगा, जिस पर इसे प्रकाशित किया गया है। उदाहरण के लिए, यदि आपका सबग्राफ Arbitrum One पर है, तो आप इसे केवल Arbitrum One के Explorer में खोज सकते हैं और इसे Ethereum पर नहीं ढूंढ पाएंगे। कृपया सुनिश्चित करें कि आप पृष्ठ के शीर्ष पर नेटवर्क स्विचर में Arbitrum One का चयन करें, ताकि यह सुनिश्चित हो सके कि आप सही नेटवर्क पर हैं। स्थानांतरण के बाद, L1 सबग्राफ अप्रचलित के रूप में दिखाई देगा। -### क्या मेरे सबग्राफ को स्थानांतरित करने के लिए इसे प्रकाशित किया जाना आवश्यक है? +### क्या मेरा सबग्राफ स्थानांतरित करने के लिए प्रकाशित होना आवश्यक है? 
-सबग्राफ अंतरण उपकरण का लाभ उठाने के लिए, आपके सबग्राफ को पहले ही ईथेरियम मेननेट पर प्रकाशित किया जाना चाहिए और सबग्राफ के मालिक wallet द्वारा स्वामित्व signal subgraph का कुछ होना चाहिए। यदि आपका subgraph प्रकाशित नहीं है, तो सिफ़ारिश की जाती है कि आप सीधे Arbitrum One पर प्रकाशित करें - जुड़े गए gas fees काफी कम होंगे। यदि आप किसी प्रकाशित subgraph को अंतरण करना चाहते हैं लेकिन owner account ने उस पर कोई signal curate नहीं किया है, तो आप उस account से थोड़ी सी राशि (जैसे 1 GRT) के signal कर सकते हैं; सुनिश्चित करें कि आपने "auto-migrating" signal को चुना है।
+सबग्राफ transfer tool का लाभ उठाने के लिए, आपका सबग्राफ पहले से ही Ethereum mainnet पर प्रकाशित होना चाहिए और उसमें उस वॉलेट के स्वामित्व में कुछ क्यूरेशन सिग्नल होना चाहिए जो सबग्राफ का मालिक है। यदि आपका सबग्राफ प्रकाशित नहीं है, तो यह अनुशंसित है कि आप इसे सीधे Arbitrum One पर प्रकाशित करें - इससे संबंधित गैस शुल्क काफी कम होंगे। यदि आप पहले से प्रकाशित सबग्राफ को स्थानांतरित करना चाहते हैं, लेकिन स्वामी खाते ने उस पर कोई क्यूरेशन सिग्नल नहीं दिया है, तो आप उस खाते से एक छोटी राशि (जैसे 1 GRT) का सिग्नल दे सकते हैं; सुनिश्चित करें कि आप "auto-migrating" सिग्नल चुनें।

-### मी आर्बिट्रममध्ये हस्तांतरित केल्यानंतर माझ्या सबग्राफच्या इथरियम मेननेट आवृत्तीचे काय होते?
+### Arbitrum में स्थानांतरित करने के बाद मेरे सबग्राफ के Ethereum मेननेट संस्करण का क्या होता है?
-अपने सबग्राफ को आर्बिट्रम पर अंतरण करने के बाद, ईथेरियम मेननेट संस्करण को पुराना किया जाएगा। हम आपको 48 घंटों के भीतर अपनी क्वेरी URL को अद्यतन करने की सिफारिश करते हैं। हालांकि, एक ग्रेस पीरियड लागू होता है जिसके तहत आपकी मुख्यनेट URL को कार्यरत रखा जाता है ताकि किसी तिसरी पक्ष डैप समर्थन को अपडेट किया जा सके।
+आपके सबग्राफ को Arbitrum में ट्रांसफर करने के बाद, Ethereum mainnet संस्करण को डिप्रिकेट कर दिया जाएगा। हम अनुशंसा करते हैं कि आप अपनी क्वेरी URL को 48 घंटों के भीतर अपडेट करें। हालाँकि, एक ग्रेस अवधि उपलब्ध है, जिससे आपका mainnet URL कार्यशील बना रहेगा ताकि कोई भी तृतीय-पक्ष dapp समर्थन अपडेट किया जा सके।

### स्थानांतरण करने के बाद, क्या मुझे आर्बिट्रम पर पुनः प्रकाशन की आवश्यकता होती है?

20 मिनट के अंतराल के बाद, आपको अंतरण को पूरा करने के लिए UI में एक लेन-देन की पुष्टि करनी होगी, लेकिन अंतरण उपकरण आपको इसके माध्यम से मार्गदर्शन करेगा। आपकी L1 इंड पॉइंट ट्रांसफर विंडो के दौरान और एक ग्रेस पीरियड के बाद भी समर्थित रहेगा। आपको यह सुझाव दिया जाता है कि आप अपनी इंड पॉइंट को अपनी सुविधा के अनुसार अपडेट करें।

-### Will my endpoint experience downtime while re-publishing?
+### क्या पुनः प्रकाशित करते समय मेरे समापन बिंदु को डाउनटाइम का अनुभव होगा?

-It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2.
+इसकी संभावना कम है, लेकिन संभव है कि थोड़े समय के लिए डाउनटाइम का अनुभव हो, यह इस बात पर निर्भर करता है कि कौन से Indexers L1 पर सबग्राफ को सपोर्ट कर रहे हैं और क्या वे इसे तब तक इंडेक्स करते रहते हैं जब तक कि सबग्राफ पूरी तरह से L2 पर सपोर्ट न हो जाए।

### क्या L2 पर प्रकाशन और संस्करणीकरण Ethereum मेननेट के समान होते हैं?

-Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+हाँ। सबग्राफ Studio में प्रकाशित करते समय अपने प्रकाशित नेटवर्क के रूप में Arbitrum One चुनें। Studio में, नवीनतम संस्करण की ओर इंगित करने वाला नवीनतम endpoint उपलब्ध होगा। -### क्या मेरे subgraph की curation उसके साथ चलेगी जब मैंsubgraph को स्थानांतरित करूँगा? +### क्या मेरे सबग्राफ का curation मेरे सबग्राफ के साथ मूव होगा? -यदि आपने " auto-migrating" signal का चयन किया है, तो आपके खुद के curation का 100% आपकेsubgraph के साथ Arbitrum One पर जाएगा। subgraph के सभी curation signalको अंतरण के समय GRT में परिवर्तित किया जाएगा, और आपके curation signal के समर्थन में उत्पन्न होने वाले GRT का उपयोग L2 subgraph पर signal mint करने के लिए किया जाएगा। +यदि आपने auto-migrating signal चुना है, तो आपकी पूरी curation आपके सबग्राफ के साथ Arbitrum One पर स्थानांतरित हो जाएगी। स्थानांतरण के समय सबग्राफ की पूरी curation signal को GRT में परिवर्तित कर दिया जाएगा, और आपकी curation signal के अनुरूप GRT का उपयोग L2 सबग्राफ पर signal को मिंट करने के लिए किया जाएगा। -अन्य क्यूरेटर यह चुन सकते हैं कि जीआरटी का अपना अंश वापस लेना है या नहीं, या इसे उसी सबग्राफ पर मिंट सिग्नल के लिए एल2 में स्थानांतरित करना है या नहीं। +Other Curators यह चुन सकते हैं कि वे अपने GRT के भाग को निकालना चाहते हैं, या फिर इसे L2 में स्थानांतरित करके उसी सबग्राफ पर सिग्नल मिंट करना चाहते हैं। -### क्या मैं स्थानांतरण के बाद अपने सबग्राफ को एथेरियम मेननेट पर वापस ले जा सकता हूं? +### क्या मैं अपना सबग्राफ ट्रांसफर करने के बाद वापस Ethereum mainnet पर ला सकता हूँ? 
-एक बार अंतरित होने के बाद, आपके ईथेरियम मेननेट संस्करण को पुराना मान दिया जाएगा। अगर आप मुख्यनेट पर वापस जाना चाहते हैं, तो आपको पुनः डिप्लॉय और प्रकाशित करने की आवश्यकता होगी। हालांकि, वापस ईथेरियम मेननेट पर लौटने को मजबूरी से अनुशंसित किया जाता है क्योंकि सूचीकरण रिवॉर्ड आखिरकार पूरी तरह से आर्बिट्रम वन पर ही वितरित किए जाएंगे।
+एक बार ट्रांसफर हो जाने के बाद, आपके सबग्राफ का Ethereum mainnet संस्करण डिप्रिकेट कर दिया जाएगा। यदि आप वापस mainnet पर जाना चाहते हैं, तो आपको इसे दोबारा डिप्लॉय और पब्लिश करना होगा। हालांकि, वापस Ethereum mainnet पर ट्रांसफर करना दृढ़ता से हतोत्साहित किया जाता है क्योंकि Indexing रिवॉर्ड्स अंततः पूरी तरह से Arbitrum One पर वितरित किए जाएंगे।

### मेरे स्थानांतरण को पूरा करने के लिए मुझे ब्रिज़्ड ईथ की आवश्यकता क्यों है?

@@ -112,11 +112,11 @@ Yes. Select Arbitrum One as your published network when publishing in Subgraph S

2. पुष्टि के लिए 20 मिनट का इंतजार करें

3. Arbitrum पर delegation स्थानांतरण की पुष्टि करें

-\*\*\*\*You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\* आपको Arbitrum पर delegation हस्तांतरण पूरा करने के लिए अपने transaction की पुष्टि करनी होगी। यह कदम 7 दिनों के भीतर पूरा करना होगा, अन्यथा delegation खो सकता है। अधिकांश मामलों में, यह कदम स्वचालित रूप से चलेगा, लेकिन अगर Arbitrum पर gas मूल्य में spike होती है तो मैन्युअल पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या होती है, तो सहायता के लिए संसाधन होंगे: support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर संपर्क करें।
### अगर मैं ईथेरियम मेननेट पर खुली आवंटन के साथ स्थानांतरण प्रारंभ करता हूँ, तो मेरे पुरस्कारों के साथ क्या होता है?

-If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer.
+यदि जिस Indexer को आप delegate कर रहे हैं वह अभी भी L1 पर काम कर रहा है, तो Arbitrum में स्थानांतरित होने पर आप Ethereum mainnet पर खुले आवंटनों से मिलने वाले सभी delegation rewards खो देंगे। इसका मतलब यह है कि आप अधिकतम पिछले 28 दिनों की अवधि के पुरस्कार खो देंगे। यदि आप Indexer द्वारा आवंटन बंद करने के ठीक बाद स्थानांतरण करते हैं, तो आप सुनिश्चित कर सकते हैं कि यह राशि न्यूनतम संभव हो। यदि आपके पास अपने Indexer(s) के साथ संचार चैनल है, तो अपना स्थानांतरण करने का सबसे अच्छा समय खोजने के लिए उनके साथ चर्चा करने पर विचार करें।

### यदि मैं जिस इंडेक्सर को वर्तमान में सौंप रहा हूं वह आर्बिट्रम वन पर नहीं है तो क्या होगा?

@@ -144,53 +144,53 @@ L2 हस्तांतरण उपकरण हमेशा आपकी ड

### मेरे delegation को L2 में ट्रांसफर करने में कितना समय लगता है?

-A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+Delegation transfer के लिए 20 मिनट की पुष्टि आवश्यक है। कृपया ध्यान दें कि 20 मिनट की अवधि के बाद, आपको वापस आकर स्थानांतरण प्रक्रिया का कदम 3, 7 दिनों के भीतर पूरा करना होगा। यदि आप ऐसा नहीं करते हैं, तो आपका delegation खो सकता है। ध्यान दें कि अधिकांश मामलों में transfer tool यह कदम स्वचालित रूप से पूरा कर देगा। स्वचालित प्रयास विफल होने पर, आपको इसे मैन्युअल रूप से पूरा करना होगा। इस प्रक्रिया के दौरान यदि कोई समस्या उत्पन्न होती है, तो चिंता न करें, हम आपकी सहायता के लिए यहाँ हैं: हमसे support@thegraph.com पर या [Discord](https://discord.gg/graphprotocol) पर संपर्क करें।

### क्या मैं अपनी सौंपन को स्थानांतरित कर सकता हूँ अगर मैं एक जीआरटी वेस्टिंग अनुबंध/टोकन लॉक वॉलेट का उपयोग कर रहा हूँ?

हाँ! प्रक्रिया थोड़ी अलग है क्योंकि वेस्टिंग कॉन्ट्रैक्ट्स आवश्यक L2 गैस के लिए आवश्यक ETH को फॉरवर्ड नहीं कर सकते, इसलिए आपको पहले ही इसे जमा करना होगा। यदि आपका वेस्टिंग कॉन्ट्रैक्ट पूरी तरह से वेस्ट नहीं होता है, तो आपको पहले L2 पर एक समकक्ष वेस्टिंग कॉन्ट्रैक्ट को प्रारंभ करना होगा और आप केवल इस L2 वेस्टिंग कॉन्ट्रैक्ट पर डेलीगेशन को हस्तांतरित कर सकेंगे। जब आप वेस्टिंग लॉक वॉलेट का उपयोग करके एक्सप्लोरर से जुड़ते हैं, तो यह प्रक्रिया आपको एक्सप्लोरर पर कनेक्ट करने के लिए गाइड कर सकती है।

-### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet?
+### क्या मेरा Arbitrum vesting contract मेननेट की तरह ही GRT जारी करने की अनुमति देता है?

-No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers.
+नहीं, Arbitrum पर बनाया गया vesting contract, निहित समयसीमा के अंत तक किसी भी GRT को जारी करने की अनुमति नहीं देगा, यानी जब तक कि आपका contract पूरी तरह से vest नहीं हो जाता। यह दोहरे खर्च को रोकने के लिए है, अन्यथा दोनों स्तरों पर समान amounts जारी करना संभव होगा।

-If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge.
+यदि आप GRT को vesting contract से मुक्त करना चाहते हैं, तो आप उन्हें Explorer का उपयोग करके L1 निहित अनुबंध में वापस स्थानांतरित कर सकते हैं: आपके Arbitrum One profile में, आपको एक banner दिखाई देगा जिसमें कहा जाएगा कि आप GRT को mainnet vesting contract में वापस स्थानांतरित कर सकते हैं। इसके लिए Arbitrum One पर transaction, 7 दिनों की प्रतीक्षा और mainnet पर अंतिम transaction की आवश्यकता होती है, क्योंकि यह GRT bridge से समान native bridging mechanism का उपयोग करता है।

### क्या कोई प्रतिनिधिमंडल कर है?

नहीं, L2 पर प्राप्त टोकनों को निर्दिष्ट इंडेक्सर की ओर से निर्दिष्ट डेलीगेटर के प्रतिनिधि रूप में डेलीगेट किया जाता है और डेलीगेशन टैक्स का कोई भुगतान नहीं होता है।

-### Will my unrealized rewards be transferred when I transfer my delegation?
+### जब मैं अपना delegation स्थानांतरित करूंगा तो क्या मेरे unrealized rewards स्थानांतरित कर दिए जाएंगे?

-​Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). If you've been delegating for a while, this is likely only a small fraction of rewards.
+हाँ!
एकमात्र rewards जिन्हें स्थानांतरित नहीं किया जा सकता है वे open allocations के लिए हैं, क्योंकि वे तब तक मौजूद नहीं रहेंगे जब तक कि Indexer allocations बंद नहीं कर देता (usually every 28 days)। यदि आप कुछ समय से delegating कर रहे हैं, तो यह संभवतः rewards का केवल एक छोटा सा अंश है।

-At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2. ​
+Smart contract level पर, unrealized rewards पहले से ही आपके delegation balance का हिस्सा हैं, इसलिए जब आप अपने delegation को L2 में स्थानांतरित करेंगे तो उन्हें स्थानांतरित कर दिया जाएगा। ​

-### Is moving delegations to L2 mandatory? Is there a deadline?
+### क्या delegations को L2 में ले जाना mandatory है? क्या कोई deadline है?

-​Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​
+delegation को L2 पर ले जाना mandatory नहीं है, लेकिन [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193) में वर्णित समयसीमा के अनुसार L2 पर indexing rewards बढ़ रहे हैं। अंततः, यदि Council वृद्धि को मंजूरी देती रहती है, तो सभी rewards L2 में वितरित किए जाएंगे और L1 पर Indexers और Delegators के लिए कोई अनुक्रमण पुरस्कार नहीं होंगे। ​

-### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1?
+### यदि मैं किसी ऐसे Indexer को सौंप रहा हूं जिसने पहले ही stake L2 में स्थानांतरित कर दी है, तो क्या मुझे L1 पर पुरस्कार मिलना बंद हो जाएगा?

-​Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators.
Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2.
+​कई Indexers धीरे-धीरे stake स्थानांतरित कर रहे हैं, इसलिए L1 पर Indexers अभी भी L1 पर reward और fees अर्जित करेंगे, जिन्हें बाद में Delegators के साथ साझा किया जाता है। एक बार जब कोई Indexer अपनी सारी हिस्सेदारी हस्तांतरित कर देता है, तो वे L1 पर काम करना बंद कर देंगे, इसलिए जब तक वे L2 में स्थानांतरित नहीं हो जाते, तब तक Delegators को कोई और rewards नहीं मिलेगा।

-Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​
+अंततः, यदि Council L2 में indexing rewards में वृद्धि को मंजूरी देती रहती है, तो सभी rewards L2 पर वितरित किए जाएंगे और L1 पर Indexers और Delegators के लिए कोई indexing rewards नहीं होगा। ​

-### I don't see a button to transfer my delegation. Why is that?
+### मुझे अपना delegation स्थानांतरित करने के लिए कोई button नहीं दिख रहा है। ऐसा क्यों?

-​Your Indexer has probably not used the L2 transfer tools to transfer stake yet.
+​आपके Indexer ने शायद अभी तक हिस्सेदारी हस्तांतरित करने के लिए L2 transfer tools का उपयोग नहीं किया है।

-If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address. ​
+यदि आप Indexer से संपर्क कर सकते हैं, तो आप उन्हें L2 Transfer Tools का उपयोग करने के लिए encourage कर सकते हैं ताकि Delegators delegations को उनके L2 Indexer पते पर स्थानांतरित कर सकें। ​

-### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that?
+### मेरा Indexer भी Arbitrum पर है, लेकिन मुझे अपनी profile में delegation को स्थानांतरित करने के लिए कोई button नहीं दिख रहा है। ऐसा क्यों?

-​It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake.
The L1 smart contracts will therefore not know about the Indexer's L2 address. If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address. ​
+​यह संभव है कि Indexer ने L2 पर operations set up किया है, लेकिन stake transfer करने के लिए L2 transfer tools का उपयोग नहीं किया है। इसलिए L1 smart contracts को Indexer के L2 पते के बारे में पता नहीं चलेगा। यदि आप Indexer से संपर्क कर सकते हैं, तो आप उन्हें transfer tool का उपयोग करने के लिए प्रोत्साहित कर सकते हैं ताकि Delegators delegations को उनके L2 Indexer address पर स्थानांतरित कर सकें। ​

### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet?

-​No. If your delegation is thawing, you have to wait the 28 days and withdraw it.
+नहीं। यदि आपका delegation thawing हो रहा है, तो आपको 28 दिनों तक इंतजार करना होगा और इसे वापस लेना होगा।

-The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2.
+जिन tokens को undelegate किया जा रहा है वे "locked" हैं और इसलिए उन्हें L2 में स्थानांतरित नहीं किया जा सकता है।

## क्यूरेशन सिग्नल

@@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans

\*यदि आवश्यक हो - अर्थात्, आप एक कॉन्ट्रैक्ट पते का उपयोग कर रहे हैं |

-### मी क्युरेट केलेला सबग्राफ L2 वर गेला असल्यास मला कसे कळेल?
+### मैं कैसे जानूँगा कि मैंने क्यूरेट किया हुआ सबग्राफ L2 पर चला गया है?
-सबग्राफ विवरण पृष्ठ को देखते समय, एक बैनर आपको सूचित करेगा कि यह सबग्राफ अंतरण किया गया है। आप प्रोंप्ट का पालन करके अपने क्यूरेशन को अंतरण कर सकते हैं। आप इस जानकारी को भी उन सभी सबग्राफों के विवरण पृष्ठ पर पा सकते हैं जिन्होंने अंतरण किया है।
+जब आप सबग्राफ विवरण पृष्ठ देख रहे होते हैं, तो एक बैनर आपको सूचित करेगा कि यह सबग्राफ स्थानांतरित कर दिया गया है। आप अपने curation को स्थानांतरित करने के लिए संकेत का पालन कर सकते हैं। आप यह जानकारी किसी भी स्थानांतरित किए गए सबग्राफ के विवरण पृष्ठ पर भी पा सकते हैं।

### अगर मैं अपनी संरचना को L2 में स्थानांतरित करना नहीं चाहता हूँ तो क्या होगा?

-जब एक सबग्राफ पुराना होता है, तो आपके पास सिग्नल वापस लेने का विकल्प होता है। उसी तरह, अगर कोई सबग्राफ L2 पर चल रहा है, तो आपको चुनने का विकल्प होता है कि क्या आप ईथेरियम मेननेट से सिग्नल वापस लेना चाहेंगे या सिग्नल को L2 पर भेजें।
+जब कोई सबग्राफ अमान्य हो जाता है, तो आपके पास अपना सिग्नल निकालने का विकल्प होता है। इसी तरह, यदि कोई सबग्राफ L2 में स्थानांतरित हो गया है, तो आप Ethereum mainnet में अपना सिग्नल निकालने या इसे L2 पर भेजने का विकल्प चुन सकते हैं।

### माझे क्युरेशन यशस्वीरित्या हस्तांतरित झाले हे मला कसे कळेल?

एल2 स्थानांतरण उपकरण को प्रारंभ करने के बाद, सिग्नल विवरण एक्सप्लोरर के माध्यम से लगभग 20 मिनट के बाद उपलब्ध होंगे।

-### क्या मैं एक समय पर एक से अधिक सबग्राफ पर अपनी संरचना को स्थानांतरित कर सकता हूँ?
+### क्या मैं एक समय में एक से अधिक सबग्राफ पर अपनी curation स्थानांतरित कर सकता हूँ?

वर्तमान में कोई थोक स्थानांतरण विकल्प उपलब्ध नहीं है।

@@ -238,7 +238,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans

3. आर्बिट्रम पर स्थानांतरण की पुष्टि करें:

-\*Note that you must confirm the transfer within 7 days otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum.
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*कृपया ध्यान दें कि आपको 7 दिनों के भीतर हस्तांतरण की पुष्टि करनी होगी, अन्यथा आपका stake खो सकता है। अधिकांश मामलों में, यह चरण स्वचालित रूप से चलेगा, लेकिन अगर Arbitrum पर gas price में अचानक बढ़ोतरी होती है तो manual पुष्टि की आवश्यकता हो सकती है। इस प्रक्रिया के दौरान कोई भी समस्याएँ हो तो सहायता के लिए संसाधन उपलब्ध होंगे: समर्थन से संपर्क करें support@thegraph.com या [Discord](https://discord.gg/graphprotocol) पर।

### क्या मेरा सम्पूर्ण स्थानांतरण हो जाएगा?

@@ -266,7 +266,7 @@ L2 ट्रान्स्फर टूलला तुमचा स्टे

### मी माझा हिस्सा हस्तांतरित करण्यापूर्वी मला आर्बिट्रमवर इंडेक्स करावे लागेल का?

-आप पहले ही अपने स्टेक को प्रभावी रूप से हस्तांतरित कर सकते हैं, लेकिन आप L2 पर किसी भी पुरस्कार का दावा नहीं कर पाएंगे जब तक आप L2 पर सबग्राफ्स को आवंटित नहीं करते हैं, उन्हें इंडेक्स करते हैं, और पॉइंट ऑफ इंटरेस्ट (POI) प्रस्तुत नहीं करते।
+आप प्रभावी रूप से अपनी stake को पहले स्थानांतरित कर सकते हैं इससे पहले कि आप indexing सेटअप करें, लेकिन जब तक आप सबग्राफ को L2 पर आवंटित नहीं करते, उन्हें index नहीं करते और POIs प्रस्तुत नहीं करते, तब तक आप L2 पर कोई इनाम प्राप्त नहीं कर पाएंगे।

### मी माझा इंडेक्सिंग स्टेक हलवण्यापूर्वी प्रतिनिधी त्यांचे प्रतिनिधी हलवू शकतात का?

@@ -276,11 +276,11 @@ L2 ट्रान्स्फर टूलला तुमचा स्टे

हाँ! प्रक्रिया कुछ अलग है, क्योंकि वेस्टिंग कॉन्ट्रैक्ट्स L2 गैस के लिए आवश्यक ETH को फॉरवर्ड नहीं कर सकते, इसलिए आपको पहले ही इसे जमा करना होगा। यदि आपका वेस्टिंग कॉन्ट्रैक्ट पूरी तरह से वेस्ट नहीं होता है, तो आपको पहले L2 पर एक समकक्ष वेस्टिंग कॉन्ट्रैक्ट को प्रारंभ करना होगा और आपको केवल इस L2 वेस्टिंग कॉन्ट्रैक्ट पर स्टेक को हस्तांतरित करने की अनुमति होगी। जब आप वेस्टिंग लॉक वॉलेट का उपयोग करके एक्सप्लोरर से जुड़ते हैं, तो यह प्रक्रिया आपको एक्सप्लोरर पर कनेक्ट करने के लिए गाइड कर सकती है।

-### I already have stake on L2.
Do I still need to send 100k GRT when I use the transfer tools the first time? +### L2 पर मेरी पहले से ही stake है। जब मैं पहली बार transfer tool का उपयोग करता हूँ तो क्या मुझे अभी भी 100k GRT भेजने की आवश्यकता है? -​Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time. ​ +हाँ। L1 smart contracts को आपकी L2 हिस्सेदारी के बारे में पता नहीं होगा, इसलिए जब आप पहली बार transfer करेंगे तो उन्हें आपसे कम से कम 100k GRT transfer करने की आवश्यकता होगी। ​ -### Can I transfer my stake to L2 if I am in the process of unstaking GRT? +### यदि मैं GRT को unstake करने की प्रक्रिया में हूं तो क्या मैं अपनी stake L2 में स्थानांतरित कर सकता हूं? ​No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being staked are "locked" and will prevent any transfers or stake to L2. @@ -377,25 +377,25 @@ L2 ट्रान्स्फर टूलला तुमचा स्टे \*यदि आवश्यक हो - अर्थात्, आप एक कॉन्ट्रैक्ट पते का उपयोग कर रहे हैं | -\*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). 
+\*\*\*\* आपको Arbitrum पर balance हस्तांतरण पूरा करने के लिए अपने transaction की पुष्टि करनी होगी। इस कदम को 7 दिनों के भीतर पूरा करना होगा, अन्यथा balance खो सकता है। अधिकांश मामलों में, इस कदम को स्वचालित रूप से चलाया जाएगा, लेकिन अगर Arbitrum पर gas price में spike होती है तो manual पुष्टि की आवश्यकता हो सकती है। यदि इस प्रक्रिया के दौरान कोई समस्या होती है, तो सहायता के लिए संसाधन होंगे: support@thegraph.com पर समर्थन से संपर्क करें या [Discord](https://discord.gg/graphprotocol) पर।

-### My vesting contract shows 0 GRT so I cannot transfer it, why is this and how do I fix it?
+### मेरा vesting contract 0 GRT दिखाता है इसलिए मैं इसे स्थानांतरित नहीं कर सकता, ऐसा क्यों है और मैं इसे कैसे ठीक करूं?

-​To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT.
+अपने L2 vesting contract को आरंभ करने के लिए, आपको GRT की एक nonzero amount को L2 में स्थानांतरित करना होगा। यह Arbitrum GRT bridge द्वारा आवश्यक है, जिसका उपयोग L2 Transfer Tools द्वारा किया जाता है। GRT, vesting contract के balance से आना चाहिए, इसलिए इसमें staked या delegated GRT शामिल नहीं है।

-If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange). ​
+यदि आपने अपने सभी GRT को vesting contract से stake या delegate कर दिया है, तो आप कहीं और से (उदाहरण के लिए, किसी अन्य wallet या exchange से) vesting contract पते पर 1 GRT जैसी छोटी राशि manual रूप से भेज सकते हैं। ​

-### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2, what do I do?
+### मैं stake or delegate के लिए एक vesting contract का उपयोग कर रहा हूं, लेकिन मुझे अपनी stake or delegation को L2 में स्थानांतरित करने के लिए कोई button नहीं दिख रहा है, मैं क्या करूं?

-​If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there.
+​यदि आपका vesting contract पूरी तरह vest नहीं हुआ है, तो आपको पहले एक L2 vesting contract बनाना होगा जो L2 पर आपकी stake or delegation प्राप्त करेगा। यह vesting contract, vesting timeline के अंत तक L2 में token जारी करने की अनुमति नहीं देगा, लेकिन आपको GRT को L1 vesting contract में वापस स्थानांतरित करने की अनुमति देगा ताकि उन्हें वहां जारी किया जा सके।

-When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile. ​
+Explorer पर vesting contract से connect होने पर, आपको अपने L2 vesting contract को आरंभ करने के लिए एक button दिखाई देगा। पहले उस प्रक्रिया का पालन करें, और फिर आप अपनी profile में अपनी stake or delegation को स्थानांतरित करने के लिए button देखेंगे। ​

-### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically?
+### यदि मैं अपना L2 vesting contract प्रारंभ करता हूँ, तो क्या इससे मेरा delegation स्वचालित रूप से L2 में स्थानांतरित हो जाएगा?

-​No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately.
+​नहीं, अपने L2 vesting contract को आरंभ करना, vesting contract से stake or delegation को स्थानांतरित करने के लिए एक शर्त है, लेकिन आपको अभी भी इन्हें अलग से स्थानांतरित करने की आवश्यकता है। -You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract. +आपको अपनी profile पर एक banner दिखाई देगा जो आपको अपना L2 vesting contract शुरू करने के बाद अपनी stake or delegation को स्थानांतरित करने के लिए प्रेरित करेगा। ### क्या मैं अपने निहित अनुबंध को वापस L1 पर ले जा सकता हूँ? diff --git a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx index 22cea8b3617f..40b0ed0379b5 100644 --- a/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/hi/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part इन टूल्स के बारे में कुछ सामान्य प्रश्नों के उत्तर [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/) में दिए गए हैं। FAQs में इन टूल्स का उपयोग कैसे करें, वे कैसे काम करते हैं, और उनका उपयोग करते समय ध्यान में रखने वाली बातें विस्तृत रूप से समझाई गई हैं। -## अपने सबग्राफ को आर्बिट्रम (L2) में कैसे स्थानांतरित करें +## अपने सबग्राफ को Arbitrum (L2) में स्थानांतरित कैसे करें -## अपने सबग्राफ़ स्थानांतरित करने के लाभ +## अपने सबग्राफ को ट्रांसफर करने के लाभ ग्राफ़ का समुदाय और मुख्य डेवलपर पिछले वर्ष से आर्बिट्रम में जाने की तैयारी कर रहे हैं (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)। आर्बिट्रम, एक परत 2 या "एल2" ब्लॉकचेन, एथेरियम से सुरक्षा प्राप्त करता है लेकिन काफी कम गैस शुल्क प्रदान करता है। -जब आप अपने सबग्राफ को दी ग्राफ नेटवर्क पर प्रकाशित या अपग्रेड करते हैं, तो आप प्रोटोकॉल पर स्मार्ट कॉन्ट्रैक्ट्स के साथ इंटरैक्ट कर रहे होते हैं और इसके लिए ईथरियम (ETH) का उपयोग करके गैस के लिए भुगतान करना आवश्यक होता है। अपने सबग्राफ को Arbitrum पर स्थानांतरित 
करके, आपके सबग्राफ के किसी भी भविष्य के अपडेट के लिए गैस शुल्क बहुत कम होगा। कम शुल्कों के साथ, और L2 पर क्यूरेशन बॉन्डिंग कर्व्स फ्लैट होने के कारण, अन्य क्यूरेटर्स को भी आपके सबग्राफ पर क्यूरेट करने में आसानी होगी, जिससे आपके सबग्राफ पर इंडेक्सर्स के लिए पुरस्कार बढ़ेंगे। इस कम लागत वाले वातावरण से इंडेक्सर्स को आपके सबग्राफ को इंडेक्स करने और सेव करने में सस्तापन होगा। आगामी महीनों में Arbitrum पर इंडेक्सिंग पुरस्कार बढ़ जाएगा और ईथिरियम मेननेट पर कम हो जाएगा, इसलिए और भी अधिक इंडेक्सर्स अपने स्टेक को स्थानांतरित करेंगे और उनके संचालन को L2 पर सेटअप करेंगे।
+जब आप अपने सबग्राफ को The Graph Network पर प्रकाशित या अपग्रेड करते हैं, तो आप प्रोटोकॉल पर स्मार्ट contracts के साथ इंटरैक्ट कर रहे होते हैं, और इसके लिए ETH का उपयोग करके गैस शुल्क का भुगतान करना आवश्यक होता है। अपने सबग्राफ को Arbitrum में स्थानांतरित करने से, आपके सबग्राफ के भविष्य के किसी भी अपडेट के लिए बहुत कम गैस शुल्क की आवश्यकता होगी। कम शुल्क, और L2 पर क्यूरेशन बॉन्डिंग कर्व्स के फ्लैट होने के कारण, अन्य Curators के लिए आपके सबग्राफ पर क्यूरेट करना आसान हो जाता है, जिससे आपके सबग्राफ पर Indexers के लिए पुरस्कार बढ़ जाते हैं। यह कम लागत वाला वातावरण Indexers के लिए आपके सबग्राफ को इंडेक्स और सर्व करने की लागत को भी कम कर देता है। आने वाले महीनों में Arbitrum पर Indexing पुरस्कार बढ़ेंगे और Ethereum मेननेट पर घटेंगे, जिससे अधिक से अधिक Indexers अपनी स्टेक ट्रांसफर कर रहे हैं और L2 पर अपनी ऑपरेशन्स सेटअप कर रहे हैं।

-## सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ जो होता है, उसे समझने की प्रक्रिया:
+## सिग्नल के साथ क्या होता है, आपके L1 सबग्राफ और क्वेरी URLs को समझना

-Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
+सबग्राफ को Arbitrum पर ट्रांसफर करने के लिए Arbitrum GRT ब्रिज का उपयोग किया जाता है, जो कि मूल Arbitrum ब्रिज का उपयोग करके सबग्राफ को L2 पर भेजता है। "ट्रांसफर" मुख्य नेटवर्क पर सबग्राफ को निष्क्रिय कर देगा और ब्रिज का उपयोग करके L2 पर सबग्राफ को फिर से बनाने के लिए जानकारी भेजेगा। यह सबग्राफ मालिक द्वारा संकेतित GRT को भी शामिल करेगा, जो ब्रिज द्वारा ट्रांसफर स्वीकार करने के लिए शून्य से अधिक होना आवश्यक है। -जब आप सबग्राफ को स्थानांतरित करने का विकल्प चुनते हैं, तो यह सबग्राफ के सभी क्यूरेशन सिग्नल को GRT में रूपांतरित कर देगा। इसका मतलब है कि मुख्यनेट पर सबग्राफ को "विलीन" किया जाएगा। आपके क्यूरेशन के अनुरूप GRT को सबग्राफ के साथ L2 पर भेजा जाएगा, जहां वे आपके प्रतिनिधित्व में सिग्नल निर्माण करने के लिए उपयोग होंगे। +जब आप सबग्राफ को ट्रांसफर करने का विकल्प चुनते हैं, तो यह सबग्राफ के सभी क्यूरेशन सिग्नल को GRT में बदल देगा। यह मुख्य नेटवर्क पर सबग्राफ को "डिप्रिकेट" करने के समान है। आपकी क्यूरेशन के अनुरूप GRT को L2 पर Subgraph के साथ भेजा जाएगा, जहाँ इनका उपयोग आपके लिए सिग्नल मिंट करने के लिए किया जाएगा। -अन्य क्यूरेटर्स का विकल्प होता है कि क्या वे अपने अंशिक GRT को विद्वेष्टित करें या उसे भी L2 पर स्थानांतरित करें ताकि वे उसी सबग्राफ पर सिग्नल निर्मित कर सकें। अगर कोई सबग्राफ का मालिक अपने सबग्राफ को L2 पर स्थानांतरित नहीं करता है और अधिकारिक रूप से उसे एक कॉन्ट्रैक्ट कॉल के माध्यम से विलीन करता है, तो क्यूरेटर्स को सूचित किया जाएगा और उन्हें उनके क्यूरेशन को वापस लेने का अधिकार होगा। +अन्य Curators यह चुन सकते हैं कि वे अपने GRT के भाग को निकालें, या इसे L2 पर स्थानांतरित करके उसी सबग्राफ पर संकेत को मिंट करें। यदि कोई सबग्राफ मालिक अपने सबग्राफ को L2 पर स्थानांतरित नहीं करता है और अनुबंध कॉल के माध्यम से इसे मैन्युअल रूप से अमान्य कर देता है, तो Curators को सूचित किया जाएगा और वे अपने क्यूरेशन को वापस लेने में सक्षम होंगे। -Subgraph को स्थानांतरित करते ही, curation को GRT में रूपांतरित किये जाने के कारण Indexers को subgraph को index करने के लिए अब और rewards नहीं मिलेगा। हालांकि, ऐसे Indexers भी होंगे जो 1) स्थानांतरित subgraphs की सेवा 24 घंटे तक करते 
रहेंगे और 2) तुरंत L2 पर subgraph को indexing करने की प्रारंभ करेंगे। क्योंकि इन Indexers ने पहले से ही subgraph को indexed किया होता है, इसलिए subgraph को sync करने की प्रतीक्षा करने की आवश्यकता नहीं होगी, और L2 subgraph को तकनीकी रूप से तुरंत carry किया जा सकेगा। +जैसे ही सबग्राफ ट्रांसफर हो जाता है, क्योंकि सारी curation GRT में कन्वर्ट हो जाती है, Indexers को अब सबग्राफ को index करने के लिए कोई रिवॉर्ड नहीं मिलेगा। हालांकि, कुछ Indexers होंगे जो 1) ट्रांसफर किए गए सबग्राफ को 24 घंटे तक सर्व करते रहेंगे, और 2) तुरंत L2 पर सबग्राफ को index करना शुरू कर देंगे। चूंकि इन Indexers के पास पहले से ही सबग्राफ indexed है, इसलिए सबग्राफ को sync होने का इंतजार करने की कोई आवश्यकता नहीं होगी, और L2 Subgraph को लगभग तुरंत क्वेरी करना संभव होगा। -L2 सबग्राफ के क्वेरी को एक विभिन्न URL पर ( 'arbitrum-gateway.thegraph.com' पर) किया जाना चाहिए, लेकिन L1 URL काम करना जारी रखेगा कम से कम 48 घंटे तक। उसके बाद, L1 गेटवे क्वेरी को L2 गेटवे के लिए आगे प्रेषित करेगा (कुछ समय के लिए), लेकिन इससे लैटेंसी बढ़ सकती है, इसलिए संभावना है कि आपको सभी क्वेरी को नए URL पर जल्द से जल्द स्विच कर लेने की सिफारिश की जाए। +L2 सबग्राफ के लिए क्वेरी अब एक अलग URL (`arbitrum-gateway.thegraph.com`) पर की जानी चाहिए, लेकिन L1 URL कम से कम 48 घंटे तक काम करता रहेगा। उसके बाद, L1 गेटवे कुछ समय के लिए क्वेरी को L2 गेटवे पर फॉरवर्ड करेगा, लेकिन इससे विलंब (latency) बढ़ जाएगा, इसलिए सभी क्वेरी को जल्द से जल्द नए URL पर स्विच करने की सिफारिश की जाती है। ## अपना L2 वॉलेट चुनना -जब आपने मुख्यनेट पर अपने सबग्राफ को प्रकाशित किया, तो आपने एक कनेक्टेड वॉलेट का उपयोग सबग्राफ बनाने के लिए किया और यह वॉलेट वह NFT स्वामित्व करता है जो इस सबग्राफ का प्रतिनिधित्व करता है और आपको अपडेट प्रकाशित करने की अनुमति देता है। +जब आपने अपना सबग्राफ मुख्य नेटवर्क पर प्रकाशित किया, तो आपने सबग्राफ बनाने के लिए एक जुड़े हुए वॉलेट का उपयोग किया, और यह वॉलेट उस NFT का मालिक है जो इस सबग्राफ का प्रतिनिधित्व करता है और आपको अपडेट प्रकाशित करने की अनुमति देता है। -सबग्राफ को Arbitrum पर स्थानांतरित करते समय, आप एक विभिन्न वॉलेट का चयन कर 
सकते हैं जो L2 पर इस सबग्राफ NFT का स्वामित्व करेगा। +जब सबग्राफ को Arbitrum में ट्रांसफर किया जाता है, तो आप एक अलग वॉलेट चुन सकते हैं जो L2 पर इस सबग्राफ NFT का मालिक होगा। अगर आप "सामान्य" wallet जैसे MetaMask का उपयोग कर रहे हैं (जिसे बाह्यिक अधिकारित खाता या EOA कहा जाता है, यानी एक wallet जो smart contract नहीं है), तो यह वैकल्पिक है और सिफारिश की जाती है कि आप L1 में के समान मालिक पता बनाए रखें।बटुआ -अगर आप स्मार्ट कॉन्ट्रैक्ट वॉलेट का उपयोग कर रहे हैं, जैसे कि मल्टिसिग (उदाहरणस्वरूप, एक सेफ), तो एक विभिन्न L2 वॉलेट पता चुनना अनिवार्य है, क्योंकि यह बहुत संभावना है कि यह खाता केवल मुख्यनेट पर मौजूद है और आप इस वॉलेट का उपयोग अर्बिट्रम पर लेन-देन करने के लिए नहीं कर सकते हैं। अगर आप स्मार्ट कॉन्ट्रैक्ट वॉलेट या मल्टिसिग का उपयोग करना चाहते हैं, तो अर्बिट्रम पर एक नया वॉलेट बनाएं और उसका पता अपने सबग्राफ के L2 मालिक के रूप में उपयोग करें। +यदि आप एक स्मार्ट contract वॉलेट का उपयोग कर रहे हैं, जैसे कि मल्टीसिग (जैसे कि Safe), तो एक अलग L2 वॉलेट एड्रेस चुनना अनिवार्य है, क्योंकि यह संभावना है कि यह खाता केवल मेननेट पर मौजूद हो और आप इस वॉलेट का उपयोग करके Arbitrum पर लेन-देन(transaction) नहीं कर पाएंगे। यदि आप स्मार्ट कॉन्ट्रैक्ट वॉलेट या मल्टीसिग का उपयोग जारी रखना चाहते हैं, तो Arbitrum पर एक नया वॉलेट बनाएं और इसके एड्रेस को अपने सबग्राफ के L2 ओनर के रूप में उपयोग करें। -**यह महत्वपूर्ण है कि आप एक वॉलेट पता का उपयोग करें जिस पर आपका नियंत्रण है, और जिससे आप अर्बिट्रम पर लेन-देन कर सकते हैं। अन्यथा, सबग्राफ हानि हो जाएगा और उसे पुनः प्राप्त नहीं किया जा सकता।** +**यह बहुत महत्वपूर्ण है कि आप एक ऐसे वॉलेट पते का उपयोग करें जिसे आप नियंत्रित कर सकते हैं और जो Arbitrum पर लेनदेन कर सकता है। अन्यथा, सबग्राफ खो जाएगा और इसे पुनर्प्राप्त नहीं किया जा सकेगा।** ## स्थानांतरण के लिए तैयारी: कुछ ETH को ब्रिज करना -सबग्राफ को स्थानांतरित करने में एक लेन-देन को ब्रिज के माध्यम से भेजना शामिल है, और फिर अर्बिट्रम पर एक और लेन-देन को प्रारंभ करना। पहली लेन-देन मुख्यनेट पर ETH का उपयोग करता है, और जब संदेश L2 पर प्राप्त होता है, तो गैस के भुगतान के लिए कुछ ETH को शामिल करता 
है। हालांकि, अगर यह गैस पर्याप्त नहीं होता है, तो आपको लेन-देन को पुनः प्रयास करना होगा और गैस के लिए सीधे L2 पर भुगतान करना होगा (यह "चरण 3: स्थानांतरण की पुष्टि करना" है, नीचे दिए गए हैं)। यह कदम **स्थानांतरण की प्रारंभिक करने के 7 दिनों के भीतर कार्यान्वित किया जाना चाहिए।** इसके अलावा, दूसरी लेन-देन ("चरण 4: L2 पर स्थानांतरण को समाप्त करना") को सीधे अर्बिट्रम पर किया जाएगा। इन कारणों से, आपको किसी एक Arbitrum वॉलेट पर कुछ ETH की आवश्यकता होगी। यदि आप मल्टिसिग या स्मार्ट कॉन्ट्रैक्ट खाता का उपयोग कर रहे हैं, तो ETH को उन्हीं सामान्य (EOA) वॉलेट में होना चाहिए जिसका आप लेन-देन कार्यान्वित करने के लिए उपयोग कर रहे हैं, मल्टिसिग वॉलेट में नहीं। +सबग्राफ ट्रांसफर करने की प्रक्रिया में ब्रिज के माध्यम से एक लेन-देन(transaction) भेजना शामिल होता है, और फिर Arbitrum पर एक और लेन-देन(transaction) को निष्पादित करना होता है। पहला लेन-देन(transaction) मेननेट पर ETH का उपयोग करता है और इसमें कुछ ETH शामिल होता है ताकि जब संदेश L2 पर प्राप्त हो, तो गैस शुल्क का भुगतान किया जा सके। हालाँकि, यदि यह गैस अपर्याप्त होती है, तो आपको लेन-देन(transaction) को पुनः प्रयास करना होगा और सीधे L2 पर गैस शुल्क का भुगतान करना होगा (यह नीचे दिए गए "Step 3: Confirming the transfer" का हिस्सा है)। यह स्टेप ट्रांसफर शुरू करने के 7 दिनों के भीतर निष्पादित किया जाना चाहिए। इसके अलावा, दूसरा लेन-देन(transaction) ("Step 4: Finishing the transfer on L2") सीधे Arbitrum पर किया जाएगा। इन कारणों से, आपके पास Arbitrum वॉलेट में कुछ ETH होना आवश्यक है। यदि आप multisig या स्मार्ट कॉन्ट्रैक्ट अकाउंट का उपयोग कर रहे हैं, तो ETH को उस नियमित (EOA) वॉलेट में होना चाहिए जिसका उपयोग आप ट्रांज़ैक्शन निष्पादित करने के लिए कर रहे हैं, न कि multisig वॉलेट में। आप कुछ एक्सचेंजों पर ETH खरीद सकते हैं और उसे सीधे अर्बिट्रम में विद्वेष्टित कर सकते हैं, या आप अर्बिट्रम ब्रिज का उपयोग करके ETH को मुख्यनेट वॉलेट से L2 में भेज सकते हैं: [bridge.arbitrum.io](http://bridge.arbitrum.io)। क्योंकि अर्बिट्रम पर गैस शुल्क कम होते हैं, आपको केवल थोड़ी सी राशि की आवश्यकता होनी चाहिए। यह सिफारिश की जाती है कि आप अपने लेन-देन को 
स्वीकृति प्राप्त करने के लिए कम थ्रेशहोल्ड (उदाहरणस्वरूप 0.01 ETH) से प्रारंभ करें। -## सबग्राफ ट्रांसफर टूल ढूँढना +## सबग्राफ ट्रांसफर टूल खोजना -आप सबग्राफ स्टूडियो पर अपने सबग्राफ के पेज को देखते समय L2 ट्रांसफर टूल पा सकते हैं: +आप अपने सबग्राफ के पेज पर सबग्राफ Studio में जाकर L2 Transfer Tool पा सकते हैं: - ![transfer tool](/img/L2-transfer-tool1.png) -यह भी उपलब्ध है एक्सप्लोरर पर अगर आप ऐसे वॉलेट से कनेक्ट हो जाते हैं जिसका सबग्राफ का स्वामित्व है, और उस सबग्राफ के पेज पर एक्सप्लोरर पर: +यह Explorer पर भी उपलब्ध है यदि आप उस वॉलेट से जुड़े हैं जो किसी सबग्राफ का मालिक है और Explorer पर उस सबग्राफ के पेज पर है: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण 1: स्थानांतरण की प्रारंभिक कदम -स्थानांतरण की प्रारंभिक करने से पहले, आपको तय करना होगा कि L2 पर सबग्राफ का स्वामित्व किस पते पर होगा (ऊपर "अपने L2 वॉलेट का चयन करना" देखें), और यह मजबूती से सिफारिश की जाती है कि अर्बिट्रम पर गैस के लिए कुछ ETH ब्रिज कर दिया गया हो (ऊपर "स्थानांतरण की तैयारी: कुछ ETH को ब्रिज करना" देखें)। +इससे पहले कि आप ट्रांसफर शुरू करें, आपको यह तय करना होगा कि L2 पर कौन सा एड्रेस सबग्राफ का स्वामी होगा (देखें "अपना L2 वॉलेट चुनना" ऊपर), और यह अत्यधिक अनुशंसा की जाती है कि आपके पास पहले से ही Arbitrum पर कुछ ETH गैस के लिए ब्रिज किया हुआ हो (देखें "ट्रांसफर की तैयारी: कुछ ETH ब्रिज करना" ऊपर)। -यह भी ध्यान दें कि सबग्राफ को स्थानांतरित करने के लिए सबग्राफ के साथ एक ही खाते में कोई भी सिग्नल की गई राशि होनी चाहिए; अगर आपने सबग्राफ पर सिग्नल नहीं किया है तो आपको थोड़ी सी क्यूरेशन जोड़नी होगी (एक छोटी राशि जैसे 1 GRT जोड़ना काफी होगा)। +सबग्राफ को ट्रांसफर करने के लिए आवश्यक है कि उसी खाते पर सबग्राफ के साथ कुछ न कुछ सिग्नल मौजूद हो जो सबग्राफ का मालिक है; यदि आपने सबग्राफ पर सिग्नल नहीं किया है, तो आपको थोड़ा सा क्यूरेशन जोड़ना होगा (जैसे 1 GRT जोड़ना पर्याप्त होगा)। -स्थानांतरण टूल खोलने के बाद, आपको "प्राप्ति वॉलेट पता" फ़ील्ड में L2 वॉलेट पता दर्ज करने की अनुमति मिलेगी - **सुनिश्चित करें कि आपने यहाँ सही पता डाला है।** 
"सबग्राफ स्थानांतरित करें" पर क्लिक करने से आपको अपने वॉलेट पर लेन-देन कार्यान्वित करने के लिए प्रोम्प्ट किया जाएगा (ध्यान दें कि L2 गैस के भुगतान के लिए कुछ ETH मान शामिल है)। इससे स्थानांतरण प्रारंभ होगा और आपका L1 सबग्राफ विलीन हो जाएगा (इसके पीछे के प्रक्रिया के बारे में अधिक जानकारी के लिए "सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ क्या होता है की समझ" देखें)। +ट्रांसफर टूल खोलने के बाद, आप "Receiving wallet address" फ़ील्ड में L2 वॉलेट पता दर्ज कर सकते हैं - **सुनिश्चित करें कि आपने यहां सही पता दर्ज किया है।** "Transfer सबग्राफ" पर क्लिक करने से आपको अपने वॉलेट में लेन-देन(transaction) निष्पादित करने के लिए संकेत मिलेगा (ध्यान दें कि L2 गैस के भुगतान के लिए इसमें कुछ ETH मूल्य शामिल होता है); इससे ट्रांसफर शुरू होगा और आपका L1 सबग्राफ अप्रचलित हो जाएगा। (इस प्रक्रिया के पीछे क्या होता है, इसे समझने के लिए ऊपर "Understanding what happens with signal, your L1 सबग्राफ and query URLs" अनुभाग देखें)। -इस कदम को कार्यान्वित करते समय, **सुनिश्चित करें कि आप 7 दिन से कम समय में चरण 3 को पूरा करने जाते हैं, अन्यथा सबग्राफ और आपका सिग्नल GRT हानि हो सकते हैं।** यह अर्बिट्रम पर L1-L2 संदेशिकरण कैसे काम करता है के कारण है: ब्रिज के माध्यम से भेजे गए संदेश "पुनः प्रयासनीय टिकट" होते हैं जिन्हें 7 दिन के भीतर कार्यान्वित किया जाना चाहिए, और पहले कार्यान्वयन में अगर अर्बिट्रम पर गैस की मूल्य में वृद्धि होती है तो पुनः प्रयास की आवश्यकता हो सकती है। +यदि आप इस चरण को निष्पादित करते हैं, तो सुनिश्चित करें कि आप 7 दिनों से कम समय में चरण 3 तक पूरा करें, अन्यथा सबग्राफ और आपका signal GRT खो जाएगा। यह Arbitrum पर L1-L2 मैसेजिंग के काम करने के तरीके के कारण है: ब्रिज के माध्यम से भेजे गए संदेश "retry-able tickets" होते हैं, जिन्हें 7 दिनों के भीतर निष्पादित किया जाना आवश्यक होता है, और यदि Arbitrum पर गैस की कीमत में उतार-चढ़ाव होता है, तो प्रारंभिक निष्पादन को पुनः प्रयास करने की आवश्यकता हो सकती है। ![Start the transfer to L2](/img/startTransferL2.png) -## चरण 2: सबग्राफ को L2 तक पहुँचने की प्रतीक्षा करना +## चरण 2: सबग्राफ के L2 तक पहुंचने की प्रतीक्षा करना -जब आप 
स्थानांतरण की प्रारंभिक करते हैं, तो आपके L1 सबग्राफ को L2 भेजने वाले संदेश को अर्बिट्रम ब्रिज के माध्यम से प्रसारित होना चाहिए। यह लगभग 20 मिनट लगता है (ब्रिज मुख्यनेट ब्लॉक को "सुरक्षित" बनाने के लिए प्रत्येक लेनदेन के मुख्यनेट ब्लॉक के लिए प्रतीक्षा करता है, जिसमें संभावित चेन रीआर्ग से बचाया जा सकता है)। +ट्रांसफर शुरू करने के बाद, संदेश जो आपके L1 सबग्राफ को L2 पर भेजता है, उसे Arbitrum ब्रिज के माध्यम से प्रसारित होना चाहिए। इसमें लगभग 20 मिनट लगते हैं (ब्रिज मुख्यनेट ब्लॉक का इंतजार करता है जिसमें लेन-देन "सुरक्षित" हो ताकि संभावित चेन रीऑर्ग से बचा जा सके)। इस प्रतीक्षा काल के बाद, अर्बिट्रम L2 अनुबंधों पर स्थानांतरण को स्वतः कार्यान्वित करने का प्रयास करेगा। @@ -80,7 +80,7 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण 3: स्थानांतरण की पुष्टि करना -अधिकांश मामलों में, यह कदम स्वचालित रूप से क्रियान्वित हो जाएगा क्योंकि स्टेप 1 में शामिल एल2 गैस काफी होता है ताकि आर्बिट्रम कॉन्ट्रैक्ट पर सबग्राफ प्राप्त करने वाले लेनदेन को क्रियान्वित किया जा सके। हालांकि, कुछ मामलों में, यह संभावित है कि आर्बिट्रम पर गैस मूल्यों में एक उछाल के कारण यह स्वचालित क्रियान्वित होने में विफल हो सकता है। इस मामले में, जो "टिकट" आपके सबग्राफ को एल2 पर भेजता है, वह लंबित हो जाएगा और 7 दिनों के भीतर पुनः प्रयास की आवश्यकता होगी। +ज्यादातर मामलों में, यह चरण स्वचालित रूप से निष्पादित हो जाएगा क्योंकि चरण 1 में शामिल L2 गैस आमतौर पर उस लेन-देन को निष्पादित करने के लिए पर्याप्त होती है जो Arbitrum कॉन्ट्रैक्ट्स पर सबग्राफ प्राप्त करता है। हालाँकि, कुछ मामलों में, यह संभव है कि Arbitrum पर गैस की कीमतों में अचानक वृद्धि के कारण यह स्वचालित निष्पादन विफल हो जाए। ऐसे में, जो "टिकट" आपके सबग्राफ को L2 पर भेजता है, वह लंबित रहेगा और इसे 7 दिनों के भीतर पुनः प्रयास करने की आवश्यकता होगी। यदि यह मामला आपके साथ होता है, तो आपको ऐसे L2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसमें आर्बिट्रम पर कुछ ETH हो, अपनी वॉलेट नेटवर्क को आर्बिट्रम पर स्विच करना होगा, और "पुनः प्रायोग की पुष्टि करें" पर क्लिक करके लेन-देन को पुनः प्रयास करने के लिए। @@ -88,33 +88,33 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## चरण 4: L2 पर स्थानांतरण समाप्त करना -इस बिंदु पर, आपका सबग्राफ और GRT आर्बिट्रम पर प्राप्त हो चुके हैं, लेकिन सबग्राफ अबतक प्रकाशित नहीं हुआ है। आपको वह एल2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसे आपने प्राप्ति वॉलेट के रूप में चुना है, अपने वॉलेट नेटवर्क को आर्बिट्रम पर स्विच करना होगा, और "पब्लिश सबग्राफ" पर क्लिक करना होगा। +आपका सबग्राफ और GRT अब Arbitrum पर प्राप्त हो चुका है, लेकिन सबग्राफ अभी प्रकाशित नहीं किया गया है। आपको उस L2 वॉलेट का उपयोग करके कनेक्ट करना होगा जिसे आपने प्राप्त करने वाले वॉलेट के रूप में चुना था, अपने वॉलेट नेटवर्क को Arbitrum में स्विच करना होगा, और "प्रकाशित सबग्राफ" पर क्लिक करना होगा। -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![सबग्राफ प्रकाशित करें](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![सबग्राफ प्रकाशित होने की प्रतीक्षा करें](/img/waitForSubgraphToPublishL2TransferTools.png) -इससे सबग्राफ प्रकाशित हो जाएगा ताकि Arbitrum पर काम करने वाले इंडेक्सर उसकी सेवा करना शुरू कर सकें। यह भी उसी GRT का करेशन सिग्नल मिन्ट करेगा जो L1 से स्थानांतरित हुए थे। +यह सबग्राफ को प्रकाशित करेगा ताकि Arbitrum पर कार्यरत Indexers इसे प्रदान करना शुरू कर सकें। यह L1 से स्थानांतरित किए गए GRT का उपयोग करके क्यूरेशन सिग्नल भी मिंट करेगा। ## चरण 5: क्वेरी URL को अपडेट करना -आपकी सबग्राफ सफलतापूर्वक Arbitrum में स्थानांतरित की गई है! सबग्राफ का प्रश्न करने के लिए, नया URL होगा: +आपका सबग्राफ सफलतापूर्वक Arbitrum पर स्थानांतरित कर दिया गया है!
सबग्राफ को क्वेरी करने के लिए, नया URL होगा: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -ध्यान दें कि आर्बिट्रम पर सबग्राफ आईडी मुख्यनेट पर जितना भिन्न होगा, लेकिन आप हमेशा इसे एक्सप्लोरर या स्टूडियो पर ढूंढ सकते हैं। जैसा कि पहले उल्लिखित किया गया है ("सिग्नल, आपके L1 सबग्राफ और क्वेरी URL के साथ क्या होता है" देखें), पुराना L1 URL कुछ समय तक समर्थित किया जाएगा, लेकिन आपको सबग्राफ को L2 पर सिंक होने के बाद नए पते पर अपने क्वेरी को स्विच कर देना चाहिए। +सबग्राफ आईडी Arbitrum पर आपके मेननेट पर मौजूद आईडी से अलग होगी, लेकिन आप इसे हमेशा Explorer या Studio पर पा सकते हैं। जैसा कि ऊपर उल्लेख किया गया है (देखें "Understanding what happens with signal, your L1 सबग्राफ and query URLs"), पुराना L1 URL थोड़े समय के लिए समर्थित रहेगा, लेकिन आपको जैसे ही सबग्राफ L2 पर सिंक हो जाए, अपने क्वेरीज़ को नए पते पर स्विच कर लेना चाहिए। ## अपने क्यूरेशन को आर्बिट्रम (L2) में कैसे स्थानांतरित करें -## यह समझना कि एल2 में सबग्राफ़ स्थानांतरण पर क्यूरेशन का क्या होता है +## सबग्राफ को L2 पर ट्रांसफर करने पर क्यूरेशन के साथ क्या होता है, इसे समझना -जब कोई सबग्राफ के मालिक सबग्राफ को आर्बिट्रम पर ट्रांसफर करते हैं, तो सबग्राफ की सभी सिग्नल को एक साथ GRT में रूपांतरित किया जाता है। यह "ऑटो-माइग्रेटेड" सिग्नल के लिए भी लागू होता है, अर्थात्, सिग्नल जो सबग्राफ के किसी वर्शन या डिप्लॉयमेंट के लिए विशिष्ट नहीं है, लेकिन जो सबग्राफ के नवीनतम संस्करण का पालन करते हैं। +जब किसी सबग्राफ का मालिक एक सबग्राफ को Arbitrum में ट्रांसफर करता है, तो उस Subgraph का सारा signal एक ही समय में GRT में कन्वर्ट हो जाता है। यह "ऑटो-माइग्रेटेड" signal पर लागू होता है, यानी ऐसा signal जो किसी विशेष सबग्राफ संस्करण या डिप्लॉयमेंट से जुड़ा नहीं होता, बल्कि किसी सबग्राफ के नवीनतम संस्करण का अनुसरण करता है। -सिग्नल से GRT में इस परिवर्तन को वही होता है जो होता है अगर सबग्राफ के मालिक ने L1 में सबग्राफ को विच्छेद किया होता। जब सबग्राफ को विच्छेदित या स्थानांतरित किया जाता है, तो सभी क्यूरेशन सिग्नल को समयानुसार "जलाया" जाता है (क्यूरेशन बॉन्डिंग कर्व का उपयोग करके) और परिणित GRT 
को GNS स्मार्ट कॉन्ट्रैक्ट द्वारा रखा जाता है (जो सबग्राफ अपग्रेड और ऑटो-माइग्रेटेड सिग्नल को संभालता है)। इस प्रकार, उस सबग्राफ के प्रत्येक क्यूरेटर के पास उस GRT का दावा होता है जो उनके लिए उपग्रहानुशासित था। +यह रूपांतरण सिग्नल से GRT में उसी प्रकार होता है जैसे कि अगर Subgraph का मालिक L1 में सबग्राफ को डिप्रिकेट कर दे। जब सबग्राफ को डिप्रिकेट या ट्रांसफर किया जाता है, तो सभी क्यूरेशन सिग्नल एक साथ "बर्न" हो जाते हैं (क्यूरेशन बॉन्डिंग कर्व का उपयोग करके) और उत्पन्न हुआ GRT GNS स्मार्ट contract द्वारा रखा जाता है (जो कि Subgraph अपग्रेड और ऑटो-माइग्रेटेड सिग्नल को हैंडल करता है)। इस प्रकार, उस सबग्राफ के प्रत्येक Curator के पास उस GRT पर दावा करने का अधिकार होता है, जो उनके पास सबग्राफ के लिए उपलब्ध शेयरों के अनुपात में होता है। -इन जीआरटी की एक भाग, जो सबग्राफ के मालिक के संवर्ग के साथ मेल खाते हैं, वह एल2 में भेजे जाते हैं। +इन GRT का एक अंश, जो सबग्राफ के मालिक से संबंधित है, सबग्राफ के साथ L2 पर भेजा जाता है। -इस बिंदु पर, क्यूरेटेड GRT को अब और क्वेरी शुल्क नहीं बढ़ेंगे, इसलिए क्यूरेटर्स अपने GRT को वापस निकालने का चयन कर सकते हैं या उसे L2 पर उसी सबग्राफ में ट्रांसफर कर सकते हैं, जहां उसे नई क्यूरेशन सिग्नल बनाने के लिए उपयोग किया जा सकता है। इसे करने के लिए कोई जल्दी नहीं है क्योंकि GRT को अनिश्चितकाल तक रखा जा सकता है और हर कोई अपने हिस्से के अनुपात में एक निश्चित राशि प्राप्त करता है, चाहे वो जब भी करे। +At this point, the curated GRT अब कोई अतिरिक्त क्वेरी शुल्क नहीं जोड़ेगा, इसलिए Curators अपने GRT को निकालने या इसे उसी सबग्राफ पर L2 में स्थानांतरित करने का विकल्प चुन सकते हैं, जहां इसका उपयोग नए क्यूरेशन सिग्नल को मिंट करने के लिए किया जा सकता है। इसे तुरंत करने की कोई आवश्यकता नहीं है क्योंकि GRT को अनिश्चित काल तक रखा जा सकता है और सभी को उनके शेयरों के अनुपात में राशि मिलेगी, इस बात की परवाह किए बिना कि वे इसे कब करते हैं। ## अपना L2 वॉलेट चुनना @@ -130,9 +130,9 @@ L2 सबग्राफ के क्वेरी को एक विभिन ट्रांसफर शुरू करने से पहले, आपको निर्णय लेना होगा कि L2 पर क्यूरेशन किस पते का स्वामित्व करेगा (ऊपर "अपने L2 वॉलेट का चयन करना" देखें), और संदेश को L2 
पर पुनः क्रियान्वित करने की आवश्यकता पड़ने पर आपके पास गैस के लिए पहले से ही कुछ ETH होने की सिफारिश की जाती है। आप कुछ एक्सचेंजों पर ETH खरीद सकते हैं और उसे सीधे Arbitrum पर निकाल सकते हैं, या आप मुख्यनेट वॉलेट से L2 में ETH भेजने के लिए आर्बिट्रम ब्रिज का उपयोग कर सकते हैं: [bridge.arbitrum.io](http://bridge.arbitrum.io) - क्योंकि आर्बिट्रम पर गैस शुल्क इतने कम होते हैं, तो आपको केवल थोड़ी सी राशि की आवश्यकता होगी, जैसे कि 0.01 ETH शायद पर्याप्त हो। -अगर वह सबग्राफ जिसे आप करेशन कर रहे हैं L2 पर स्थानांतरित किया गया है, तो आपको एक संदेश दिखाई देगा जो आपको एक स्थानांतरित सबग्राफ करेशन की जानकारी देगा। +अगर कोई सबग्राफ जिसे आप क्यूरेट कर रहे हैं, L2 पर ट्रांसफर कर दिया गया है, तो आपको Explorer पर एक संदेश दिखाई देगा जो आपको बताएगा कि आप एक ट्रांसफर किए गए सबग्राफ को क्यूरेट कर रहे हैं। -सबग्राफ पेज को देखते समय, आपको करेशन को वापस लेने या स्थानांतरित करने का चयन करने का विकल्प होता है। "Transfer Signal to Arbitrum" पर क्लिक करने से स्थानांतरण उपकरण खुल जाता है। +जब आप सबग्राफ पेज पर देखते हैं, तो आप क्यूरेशन को वापस लेने या ट्रांसफर करने का विकल्प चुन सकते हैं। "ट्रांसफर सिग्नल टू Arbitrum" पर क्लिक करने से ट्रांसफर टूल खुल जाएगा। ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ L2 सबग्राफ के क्वेरी को एक विभिन ## L1 पर अपना कार्यकाल वापस ले रहा हूँ -अगर आप चाहते हैं कि आप अपने GRT को L2 पर नहीं भेजें, या फिर आप पसंद करते हैं कि GRT को मैन्युअल रूप से ब्रिज करें, तो आप अपने क्यूरेटेड GRT को L1 पर निकाल सकते हैं। सबग्राफ पृष्ठ पर बैनर पर, "सिग्नल निकालें" चुनें और लेनदेन की पुष्टि करें; GRT आपके क्यूरेटर पते पर भेज दिया जाएगा। +यदि आप अपना GRT L2 पर भेजना पसंद नहीं करते हैं, या आप GRT को मैन्युअल रूप से ब्रिज करना चाहते हैं, तो आप L1 पर अपने क्यूरेट किए गए GRT को निकाल सकते हैं। सबग्राफ पेज पर बैनर में, "Withdraw Signal" चुनें और लेन-देन(transaction) की पुष्टि करें; GRT आपके Curator पते पर भेज दिया जाएगा। diff --git a/website/src/pages/hi/archived/sunrise.mdx b/website/src/pages/hi/archived/sunrise.mdx index 64396d2fb998..b129719e6006 
100644 --- a/website/src/pages/hi/archived/sunrise.mdx +++ b/website/src/pages/hi/archived/sunrise.mdx @@ -7,74 +7,74 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## विकेंद्रीकृत डेटा का सूर्योदय क्या था? -"Decentralized Data का उदय" Edge & Node द्वारा आरंभ की गई एक पहल थी। इस पहल ने subgraph डेवलपर्स को The Graph के विकेंद्रीकृत नेटवर्क में सहजता से अपग्रेड करने में सक्षम बनाया। +The Sunrise of Decentralized Data एक पहल थी जिसे Edge & Node द्वारा शुरू किया गया था। इस पहल ने subgraph डेवलपर्स को The Graph के विकेंद्रीकृत नेटवर्क में सहजता से अपग्रेड करने में सक्षम बनाया। -इस योजना ने The Graph इकोसिस्टम के पिछले विकासों पर आधारित किया, जिसमें नए प्रकाशित सबग्राफ पर क्वेरी सर्व करने के लिए एक अपग्रेडेड इंडेक्सर शामिल था। +यह योजना The Graph इकोसिस्टम में पिछले विकासों पर आधारित थी, जिसमें एक अपग्रेड Indexer शामिल था ताकि नए प्रकाशित सबग्राफ पर क्वेरी प्रदान की जा सके। ### Hosted service का क्या होगा? -होस्टेड सेवा के क्वेरी एंडपॉइंट अब उपलब्ध नहीं हैं, और डेवलपर्स होस्टेड सेवा पर नए सबग्राफ्स को तैनात नहीं कर सकते हैं। +होस्टेड सेवा क्वेरी एंडपॉइंट अब उपलब्ध नहीं हैं, और डेवलपर्स होस्टेड सेवा पर नए Subgraph तैनात नहीं कर सकते। -अपग्रेड प्रक्रिया के दौरान, होस्टेड सर्विस सबग्राफ के मालिक अपने सबग्राफ को The Graph Network पर अपग्रेड कर सकते थे। इसके अतिरिक्त, डेवलपर्स ऑटो-अपग्रेड किए गए सबग्राफ को क्लेम करने में सक्षम थे। +होस्टेड सेवा Subgraph के मालिक अपग्रेड प्रक्रिया के दौरान अपने subgraph को The Graph Network में अपग्रेड कर सकते थे। इसके अतिरिक्त, डेवलपर्स स्वचालित रूप से अपग्रेड किए गए Subgraph को क्लेम कर सकते थे। ### क्या इस अपग्रेड से Subgraph Studio प्रभावित हुआ था? नहीं, सबग्राफ स्टूडियो पर Sunrise का कोई प्रभाव नहीं पड़ा। सबग्राफ तुरंत क्वेरी के लिए उपलब्ध थे, जो अपग्रेड किए गए Indexer द्वारा संचालित हैं, जो उसी इंफ्रास्ट्रक्चर का उपयोग करता है जैसा Hosted Service में होता है। -### सबग्राफ्स को Arbitrum पर क्यों प्रकाशित किया गया, क्या इसने एक अलग नेटवर्क को इंडेक्स करना शुरू किया?
+### क्यों subgraph को Arbitrum पर प्रकाशित किया गया, क्या इसने किसी अलग नेटवर्क को इंडेक्स करना शुरू कर दिया? -The Graph Network को पहले Ethereum mainnet पर डिप्लॉय किया गया था, लेकिन गैस लागत को कम करने के लिए इसे बाद में Arbitrum One पर स्थानांतरित कर दिया गया। परिणामस्वरूप, सभी नए सबग्राफ को Arbitrum पर The Graph Network में प्रकाशित किया जाता है ताकि Indexers उन्हें सपोर्ट कर सकें। Arbitrum वह नेटवर्क है जिस पर सबग्राफ को प्रकाशित किया जाता है, लेकिन सबग्राफ [supported networks](/supported-networks/) में से किसी पर भी index कर सकते हैं +The Graph Network को शुरू में Ethereum mainnet पर डिप्लॉय किया गया था, लेकिन बाद में सभी उपयोगकर्ताओं के लिए गैस लागत कम करने के उद्देश्य से इसे Arbitrum One पर स्थानांतरित कर दिया गया। परिणामस्वरूप, सभी नए subgraph अब Arbitrum पर The Graph Network में प्रकाशित किए जाते हैं ताकि Indexers उन्हें सपोर्ट कर सकें। Arbitrum वह नेटवर्क है जहाँ subgraph प्रकाशित किए जाते हैं, लेकिन सबग्राफ किसी भी [supported networks](/supported-networks/) को इंडेक्स कर सकते हैं। ## About the Upgrade Indexer > अपग्रेड Indexer वर्तमान में सक्रिय है। -अपग्रेड Indexer को Hosted Service से The Graph Network में सबग्राफ़्स के अपग्रेड करने के अनुभव को सुधारने और उन मौजूदा सबग्राफ़्स के नए संस्करणों का समर्थन करने के लिए लागू किया गया था जो अभी तक इंडेक्स नहीं किए गए थे। +सुधार Indexer को लागू किया गया था ताकि hosted service से subgraph को The Graph Network में अपग्रेड करने के अनुभव को बेहतर बनाया जा सके और उन नए संस्करणों का समर्थन किया जा सके जो अभी तक इंडेक्स नहीं किए गए थे। ### अपग्रेड Indexer क्या करता है? 
-- यह उन चेन को बूटस्ट्रैप करता है जिन्हें अभी तक The Graph Network पर इंडेक्सिंग पुरस्कार नहीं मिले हैं और यह सुनिश्चित करता है कि एक Indexer उपलब्ध हो ताकि एक Subgraph प्रकाशित होने के तुरंत बाद क्वेरी को यथाशीघ्र सेवा दी जा सके। +- यह उन चेन को बूटस्ट्रैप करता है जिन्होंने अभी तक The Graph Network पर indexing रिवार्ड्स प्राप्त नहीं किए हैं और यह सुनिश्चित करता है कि एक Indexer उपलब्ध हो ताकि किसी subgraph के प्रकाशित होने के बाद यथासंभव शीघ्र क्वेरीज़ को सर्व किया जा सके। - यह उन chain को भी सपोर्ट करता है जो पहले केवल Hosted Service पर उपलब्ध थीं। सपोर्टेड chain की व्यापक सूची [यहां](/supported-networks/) देखें। -- जो Indexer अपग्रेड इंडेक्सर का संचालन करते हैं, वे नए सबग्राफ़ और अतिरिक्त चेन का समर्थन करने के लिए एक सार्वजनिक सेवा के रूप में ऐसा करते हैं जो इंडेक्सिंग पुरस्कारों की कमी का सामना कर रहे हैं, जब तक कि The Graph काउंसिल उन्हें मंजूरी नहीं देती। +- Indexers जो एक upgrade Indexer को संचालित करते हैं, वे नए subgraph और अतिरिक्त चेन का समर्थन करने के लिए एक सार्वजनिक सेवा के रूप में ऐसा करते हैं, जिन्हें The Graph Council द्वारा अनुमोदित किए जाने से पहले Indexing पुरस्कारों की कमी होती है। -### Why is Edge & Node running the upgrade Indexer? +### Edge & Node upgrade Indexer क्यों चला रहे हैं? -Edge & Node ने ऐतिहासिक रूप से होस्टेड सेवा का प्रबंधन किया है और, परिणामस्वरूप, उनके पास होस्टेड सेवा के सबग्राफ के लिए पहले से ही समन्वयित डेटा है। +Edge & Node ऐतिहासिक रूप से होस्टेड सेवा को बनाए रखते थे और परिणामस्वरूप, उनके पास पहले से ही होस्टेड सेवा के subgraph के लिए सिंक किया हुआ डेटा है। -### What does the upgrade indexer mean for existing Indexers? +### Existing Indexers के लिए upgrade Indexer का क्या मतलब है? पहले केवल होस्टेड सेवा पर समर्थित चेन अब बिना indexing पुरस्कार के डेवलपर्स के लिए The Graph Network पर उपलब्ध कराई गईं। -हालांकि, इस कार्रवाई ने किसी भी इच्छुक Indexer के लिए क्वेरी शुल्क को अनलॉक कर दिया और The Graph Network पर प्रकाशित सबग्राफ की संख्या बढ़ा दी। परिणामस्वरूप, Indexers के पास इन सबग्राफ को इंडेक्स करने और सेवा देने के लिए अधिक अवसर हैं, जो कि क्वेरी शुल्क के बदले में हैं, यहां तक कि जब तक किसी चेन के लिए इंडेक्सिंग इनाम सक्षम नहीं होते। +हालांकि, इस कार्रवाई से किसी भी इच्छुक Indexer के लिए क्वेरी शुल्क अनलॉक हो गया और The Graph Network पर प्रकाशित सबग्राफ की संख्या बढ़ गई। परिणामस्वरूप, Indexer को इन सबग्राफ को इंडेक्स करने और क्वेरी शुल्क के बदले सर्व करने के अधिक अवसर मिले, भले ही किसी चेन के लिए indexing रिवॉर्ड सक्षम न किए गए हों। -अपग्रेड इंडेक्सर Indexer समुदाय को The Graph Network पर सबग्राफ और नए चेन की संभावित मांग के बारे में जानकारी भी प्रदान करता है। +अपग्रेड Indexer समुदाय को यह जानकारी भी प्रदान करता है कि The Graph Network पर subgraph और नई चेन की संभावित मांग क्या हो सकती है। -### What does this mean for Delegators? +### Delegators के लिए यह क्या अर्थ है? -अपग्रेड Indexer डेलीगेटर्स के लिए एक शक्तिशाली अवसर प्रदान करता है। क्योंकि इससे अधिक सबग्राफ को होस्टेड सेवा से The Graph Network में अपग्रेड करने की अनुमति मिली, डेलीगेटर्स को बढ़ी हुई नेटवर्क गतिविधि का लाभ मिलता है। +अपग्रेड Indexer Delegators के लिए एक शक्तिशाली अवसर प्रदान करता है। जैसे ही अधिक subgraph को होस्टेड सेवा से The Graph Network में अपग्रेड किया गया, Delegators को नेटवर्क गतिविधि में वृद्धि से लाभ मिलता है। ### क्या अपग्रेड किया गया Indexer मौजूदा Indexer के साथ पुरस्कारों के लिए प्रतिस्पर्धा करता था?
-नहीं, अपग्रेड किया गया Indexer केवल प्रति Subgraph न्यूनतम राशि आवंटित करता है और indexing पुरस्कार एकत्र नहीं करता है। +नहीं, upgrade Indexer केवल प्रत्येक subgraph के लिए न्यूनतम राशि आवंटित करता है और indexing पुरस्कार एकत्र नहीं करता। -यह "आवश्यकता अनुसार" आधार पर काम करता है, एक बैकअप के रूप में कार्य करता है जब तक कि नेटवर्क में संबंधित चेन और सबग्राफ के लिए कम से कम तीन अन्य Indexer द्वारा पर्याप्त सेवा गुणवत्ता प्राप्त नहीं की जाती। +यह "जैसा आवश्यक हो" के आधार पर कार्य करता है, जब तक कि संबंधित चेन और subgraph के लिए नेटवर्क में कम से कम तीन अन्य Indexers द्वारा पर्याप्त सेवा गुणवत्ता प्राप्त नहीं की जाती, तब तक यह एक बैकअप के रूप में कार्य करता है। -### यह Subgraph डेवलपर्स को कैसे प्रभावित करता है? +### यह subgraph डेवलपर्स को कैसे प्रभावित करता है? -सबग्राफ डेवलपर्स अपने सबग्राफ को The Graph Network पर लगभग तुरंत क्वेरी कर सकते हैं, जब वे होस्टेड सेवा से या Subgraph Studio()/subgraphs/developing/publishing/publishing-a-subgraph/ से प्रकाशित करते हैं, क्योंकि इंडेक्सिंग के लिए कोई लीड टाइम आवश्यक नहीं है। कृपया ध्यान दें कि सबग्राफ बनाना(/developing/creating-a-subgraph/) इस अपग्रेड से प्रभावित नहीं हुआ था। +subgraph डेवलपर्स अपने subgraph को The Graph Network पर लगभग तुरंत क्वेरी कर सकते हैं, जब वे होस्टेड सर्विस से अपग्रेड करने के बाद या [subgraph Studio से पब्लिश](/subgraphs/developing/publishing/publishing-a-subgraph/) करने के बाद अपग्रेड करते हैं, क्योंकि indexing के लिए कोई लीड टाइम आवश्यक नहीं था। कृपया ध्यान दें कि [subgraph बनाना](/developing/creating-a-subgraph/) इस अपग्रेड से प्रभावित नहीं हुआ था। ### अपग्रेड Indexer डेटा उपभोक्ताओं को कैसे लाभ पहुंचाता है? -The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. 
+The upgrade Indexer network पर उन chains को सक्षम बनाता है जो पहले केवल hosted service पर समर्थित थीं। इसलिए, यह उस data के दायरे और उपलब्धता को बढ़ाता है जिसे network पर queried किया जा सकता है। ### अपग्रेड Indexer क्वेरीज़ की कीमत कैसे तय करता है? अपग्रेड Indexer बाज़ार दर पर क्वेरीज़ की कीमत तय करता है ताकि क्वेरी शुल्क बाज़ार पर कोई प्रभाव न पड़े। -### अपग्रेड Indexer कब एक Subgraph का समर्थन करना बंद करेगा? +### अपग्रेड Indexer कब किसी subgraph को सपोर्ट करना बंद कर देगा? -अपग्रेड Indexer एक Subgraph का समर्थन करता है जब तक कि कम से कम 3 अन्य Indexers सफलतापूर्वक और लगातार किए गए प्रश्नों का उत्तर नहीं देते। +The upgrade Indexer तब तक एक subgraph का समर्थन करता है जब तक कि कम से कम 3 अन्य Indexer सफलतापूर्वक और लगातार इसे किए गए क्वेरीज़ की सेवा नहीं देते। -इसके अतिरिक्त, अपग्रेड Indexer एक Subgraph का समर्थन करना बंद कर देता है यदि उसे पिछले 30 दिनों में क्वेरी नहीं किया गया है। +इसके अलावा, यदि किसी subgraph को पिछले 30 दिनों में क्वेरी नहीं किया गया है, तो अपग्रेड Indexer उसका समर्थन बंद कर देता है। -अन्य Indexer को उन सबग्राफ का समर्थन करने के लिए प्रोत्साहित किया जाता है जिनमें निरंतर क्वेरी वॉल्यूम होता है। अपग्रेड Indexer के लिए क्वेरी वॉल्यूम शून्य की ओर बढ़ना चाहिए, क्योंकि इसका आवंटन आकार छोटा होता है, और क्वेरी के लिए अन्य Indexer को प्राथमिकता दी जानी चाहिए। +अन्य Indexers को उन subgraph का समर्थन करने के लिए प्रोत्साहित किया जाता है जिनमें निरंतर क्वेरी वॉल्यूम होता है। अपग्रेड Indexer के लिए क्वेरी वॉल्यूम शून्य की ओर बढ़ना चाहिए, क्योंकि इसका आवंटन आकार छोटा है, और अन्य Indexers को इससे पहले क्वेरी के लिए चुना जाना चाहिए। diff --git a/website/src/pages/hi/contracts.mdx b/website/src/pages/hi/contracts.mdx index 0a57ae81839b..aae4d2906e17 100644 --- a/website/src/pages/hi/contracts.mdx +++ b/website/src/pages/hi/contracts.mdx @@ -4,7 +4,7 @@ title: Protocol Contracts import { ProtocolContractsTable } from '@/contracts' -Below are the deployed contracts which power The Graph Network.
Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more. +नीचे deployed contracts हैं जो The Graph Network को शक्ति प्रदान करते हैं। अधिक जानने के लिए official [contracts repository](https://github.com/graphprotocol/contracts) पर जाएँ। ## Arbitrum @@ -20,7 +20,7 @@ This is the principal deployment of The Graph Network. ## Arbitrum Sepolia -This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets. +यह The Graph Network का प्रमुख testnet है। Testnet का मुख्य रूप से core developers और ecosystem के participants द्वारा परीक्षण उद्देश्यों के लिए उपयोग किया जाता है। The Graph के testnets पर सेवा या उपलब्धता की कोई guarantee नहीं है। diff --git a/website/src/pages/hi/global.json b/website/src/pages/hi/global.json index 5b5292d8b096..08b190f4facc 100644 --- a/website/src/pages/hi/global.json +++ b/website/src/pages/hi/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "मुख्य नेविगेशन", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "नेविगेशन दिखाएं", + "hide": "नेविगेशन छिपाएँ", "subgraphs": "सबग्राफ", "substreams": "सबस्ट्रीम", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "सबस्ट्रीम पावर्ड सबग्राफ", + "tokenApi": "टोकन API", + "indexing": "indexing", + "resources": "संसाधन", + "archived": "संग्रहीत" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "अंतिम अपडेट", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "पढ़ने का समय", + "minutes": "मिनट" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "पिछला पृष्ठ", + "next": "अगला पृष्ठ", + "edit":
"GitHub पर संपादित करें", + "onThisPage": "इस पृष्ठ पर", + "tableOfContents": "विषय-सूची", + "linkToThisSection": "इस अनुभाग का लिंक" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "नोट", + "tip": "सलाह", + "important": "जरूरी", + "warning": "चेतावनी", + "caution": "सावधानी" + }, + "video": "वीडियो" + }, + "openApi": { + "parameters": { + "pathParameters": "पथ पैरामीटर", + "queryParameters": "क्वेरी पैरामीटर", + "headerParameters": "हैडर पैरामीटर", + "cookieParameters": "कुकी पैरामीटर", + "parameter": "पैरामीटर", + "description": "Description", + "value": "मान", + "required": "आवश्यक", + "deprecated": "अवकाशप्राप्त", + "defaultValue": "डिफ़ॉल्ट मान", + "minimumValue": "न्यूनतम मान", + "maximumValue": "अधिकतम मान ", + "acceptedValues": "स्वीकृत मान", + "acceptedPattern": "स्वीकृत पैटर्न", + "format": "प्रारूप", + "serializationFormat": "सिरीयलाइज़ेशन प्रारूप" + }, + "request": { + "label": "इस एंडपॉइंट का परीक्षण करें", + "noCredentialsRequired": "कोई प्रमाण-पत्र आवश्यक नहीं", + "send": "अनुरोध भेजें" + }, + "responses": { + "potentialResponses": "संभावित प्रतिक्रियाएँ", + "status": "स्थिति", + "description": "Description", + "liveResponse": "लाइव प्रतिक्रिया", + "example": "उदाहरण" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "ओह! 
यह पृष्ठ अंतरिक्ष में खो गया...", + "subtitle": "पता सही है या नहीं, इसकी जाँच करें या नीचे दिए गए लिंक पर क्लिक करके हमारी वेबसाइट एक्सप्लोर करें।", + "back": "घर जाओ" } } diff --git a/website/src/pages/hi/index.json b/website/src/pages/hi/index.json index f50c21715290..006af907dc33 100644 --- a/website/src/pages/hi/index.json +++ b/website/src/pages/hi/index.json @@ -1,99 +1,175 @@ { "title": "Home", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graph डॉक्स", + "description": "अपनी वेब3 परियोजना को शुरू करें उन उपकरणों के साथ जो ब्लॉकचेन डेटा को निकालने, बदलने और लोड करने में सहायता करते हैं।", + "cta1": "The Graph कैसे काम करता है", + "cta2": "अपना पहला Subgraph बनाएं" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "अपनी जरूरतों के अनुसार समाधान चुनें—ब्लॉकचेन डेटा के साथ अपने तरीके से इंटरैक्ट करें।", "subgraphs": { "title": "सबग्राफ", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "ब्लॉकचेन डेटा को निकालें, प्रोसेस करें और ओपन APIs के साथ क्वेरी करें।", + "cta": "सबग्राफ विकसित करें" }, "substreams": { "title": "सबस्ट्रीम", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "ब्लॉकचेन डेटा प्राप्त करें और समानांतर निष्पादन के साथ उपयोग करें।", + "cta": "सबस्ट्रीम के साथ विकसित करें" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "सबस्ट्रीम पावर्ड सबग्राफ", + "description": "Boost your subgraph's efficiency and
scalability by using Substreams.", + "cta": "सबस्ट्रीम-संचालित सबग्राफ सेट करें" }, "graphNode": { - "title": "ग्राफ-नोड", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "title": "Graph Node", + "description": "ब्लॉकचेन डेटा को इंडेक्स करें और इसे GraphQL क्वेरीज़ के माध्यम से सर्व करें।", + "cta": "स्थानीय Graph Node सेट करें" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "ब्लॉकचेन डेटा को फ्लैट फ़ाइलों में निकालें ताकि सिंक समय और स्ट्रीमिंग क्षमताओं में सुधार किया जा सके।", + "cta": "Firehose के साथ शुरुआत करें" } }, "supportedNetworks": { "title": "समर्थित नेटवर्क", + "details": "Network Details", + "services": "Services", + "type": "प्रकार", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "दस्तावेज़", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. 
To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph {0} का समर्थन करता है। एक नया नेटवर्क जोड़ने के लिए, {1}", + "networks": "नेटवर्क्स ", + "completeThisForm": "इस फ़ॉर्म को पूरा करें " + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "नाम", + "id": "ID", + "subgraphs": "सबग्राफ", + "substreams": "सबस्ट्रीम", + "firehose": "Firehose", + "tokenapi": "टोकन API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "सबस्ट्रीम", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "बिलिंग", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." 
+ }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." + } } }, "guides": { "title": "Guides", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "ग्राफ एक्सप्लोरर में डेटा खोजें", + "description": "सैकड़ों सार्वजनिक सबग्राफ का उपयोग करके मौजूदा ब्लॉकचेन डेटा प्राप्त करें।" }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Subgraph प्रकाशित करें", + "description": "अपने Subgraph को विकेंद्रीकृत नेटवर्क में जोड़ें।" }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "सबस्ट्रीम प्रकाशित करें", + "description": "अपनी सबस्ट्रीम पैकेज को सबस्ट्रीम रजिस्ट्री पर लॉन्च करें।" }, "queryingBestPractices": { - "title": "सर्वोत्तम प्रथाओं को क्वेरी करना", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Querying Best Practices", + "description": "अपने Subgraph क्वेरीज़ को तेज़ और बेहतर परिणामों के लिए ऑप्टिमाइज़ करें।" }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "अनुकूलित टाइमसीरीज और एग्रीगेशन", + "description": "अपने Subgraph को कुशलता के लिए सरल बनाएं।" }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "API Key प्रबंधन", + "description": "आसानी से API कुंजियों को बनाएँ, प्रबंधित करें और सुरक्षित करें अपने Subgraph के लिए।" }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." 
+ "title": "The Graph पर स्थानांतरण", + "description": "किसी भी प्लेटफ़ॉर्म से आसानी से अपने Subgraph को अपग्रेड करें।" } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "वीडियो ट्यूटोरियल्स", + "watchOnYouTube": "YouTube पर देखें", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph को 1 मिनट में समझाया गया", + "description": "इस छोटे, गैर-तकनीकी वीडियो में जानें कि The Graph Web3 की रीढ़ (backbone) क्यों और कैसे है।" }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "Delegating का क्या अर्थ है?", + "description": "यह वीडियो उन मुख्य अवधारणाओं को समझाने में मदद करता है जो delegating, जो कि staking का एक रूप है और The Graph को सुरक्षित करने में सहायता करता है, से पहले समझनी आवश्यक हैं।" }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Solana को सबस्ट्रीम-संचालित Subgraph के साथ इंडेक्स कैसे करें", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", - "minutes": "min" + "reading": "पढ़ने का समय", + "duration": "अवधि", + "minutes": "मिनट" } } diff --git a/website/src/pages/hi/indexing/_meta-titles.json b/website/src/pages/hi/indexing/_meta-titles.json index 42f4de188fd4..52f24f7e7d81 100644 --- a/website/src/pages/hi/indexing/_meta-titles.json +++ b/website/src/pages/hi/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Indexer टूलिंग" } diff --git a/website/src/pages/hi/indexing/chain-integration-overview.mdx b/website/src/pages/hi/indexing/chain-integration-overview.mdx index 6a7c06a71a07..28458ea16d09 100644 --- a/website/src/pages/hi/indexing/chain-integration-overview.mdx +++ b/website/src/pages/hi/indexing/chain-integration-overview.mdx @@ -2,12 +2,12 @@ title: Chain Integration Process Overview --- -A transparent and governance-based integration process was designed for blockchain teams seeking [integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468). It is a 3-phase process, as summarised below. +[integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468) चाहने वाली blockchain teams के लिए एक transparent और governance-based integration प्रक्रिया designed की गई थी। यह 3-phase वाली प्रक्रिया है, जैसा कि नीचे संक्षेप में बताया गया है। ## Stage 1. Technical Integration - कृपया `ग्राफ-नोड` द्वारा नए chain समर्थन के लिए [New Chain इंटीग्रेशन](/indexing/new-chain-integration/) पर जाएं। -- Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory.
+- Teams एक Forum thread बनाकर protocol integration प्रक्रिया शुरू करती हैं [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Default Forum template का उपयोग करना अनिवार्य है। ## Stage 2. Integration Validation @@ -17,12 +17,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 3. Mainnet Integration -- Teams propose mainnet integration by submitting a Graph Improvement Proposal (GIP) and initiating a pull request (PR) on the [feature support matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (more details on the link). -- The Graph Council reviews the request and approves mainnet support, providing a successful Stage 2 and positive community feedback. +- Teams Graph Improvement Proposal (GIP) submit करके और [feature support matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) पर एक pull request (PR) शुरू करके mainnet integration का प्रस्ताव करती हैं (अधिक जानकारी link पर)। +- Graph Council request की समीक्षा करती है और successful Stage 2 और positive community feedback प्रदान करते हुए mainnet समर्थन को मंजूरी देती है। --- -If the process looks daunting, don't worry! The Graph Foundation is committed to supporting integrators by fostering collaboration, offering essential information, and guiding them through various stages, including navigating governance processes such as Graph Improvement Proposals (GIPs) and pull requests. If you have questions, please reach out to [info@thegraph.foundation](mailto:info@thegraph.foundation) or through Discord (either Pedro, The Graph Foundation member, IndexerDAO, or other core developers). +यदि प्रक्रिया कठिन लगती है, तो चिंता न करें!
Graph Foundation सहयोग को बढ़ावा देने, आवश्यक जानकारी प्रदान करने और Graph Improvement Proposals (GIPs) और pull अनुरोध जैसी शासन प्रक्रियाओं को navigate करने सहित विभिन्न stages के माध्यम से उनका मार्गदर्शन करके integrators का समर्थन करने के लिए प्रतिबद्ध है। यदि आपके कोई प्रश्न हैं, तो कृपया [info@thegraph.foundation](mailto:info@thegraph.foundation) या Discord (either Pedro, The Graph Foundation member, IndexerDAO, or other core developers) के माध्यम से संपर्क करें। Ready to shape the future of The Graph Network? [Start your proposal](https://github.com/graphprotocol/graph-improvement-proposals/blob/main/gips/0057-chain-integration-process.md) now and be a part of the web3 revolution! @@ -30,20 +30,20 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi ## Frequently Asked Questions -### 1. How does this relate to the [World of Data Services GIP](https://forum.thegraph.com/t/gip-0042-a-world-of-data-services/3761)? +### 1. इसका [World of Data Services GIP](https://forum.thegraph.com/t/gip-0042-a-world-of-data-services/3761) से क्या संबंध है? -This process is related to the Subgraph Data Service, applicable only to new Subgraph `Data Sources`. +यह प्रक्रिया Subgraph Data Service से संबंधित है, जो केवल नए Subgraph `Data Sources` पर लागू होती है। -### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? +### 2. यदि mainnet पर network समर्थित होने के बाद Firehose और Substreams समर्थन आता है तो क्या होगा? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP.
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +यह केवल सबस्ट्रीम-powered सबग्राफ पर indexing rewards के समर्थन को प्रभावित करेगा। नए Firehose कार्यान्वयन को testnet पर परीक्षण की आवश्यकता होगी, जिसे इस GIP के Stage 2 में उल्लिखित पद्धति का पालन करते हुए किया जाएगा। इसी तरह, यदि कार्यान्वयन प्रभावी और विश्वसनीय साबित होता है, तो [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) पर (`Substreams data sources` सबग्राफ Feature) के लिए एक PR आवश्यक होगा, साथ ही indexing rewards के समर्थन के लिए एक नया GIP भी तैयार करना होगा। कोई भी इस PR और GIP को बना सकता है; Foundation इस प्रक्रिया में Council अनुमोदन के लिए सहायता करेगा। ### 3. पूर्ण प्रोटोकॉल समर्थन तक पहुंचने की प्रक्रिया में कितना समय लगेगा? -The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. +Mainnet तक पहुँचने में कई weeks लगने की उम्मीद है, जो integration development के समय, अतिरिक्त शोध की आवश्यकता, testing और bug fixes, और, हमेशा की तरह, community feedback की आवश्यकता वाली governance process के समय के आधार पर अलग-अलग होगा। -Protocol support for indexing rewards depends on the stakeholders' bandwidth to proceed with testing, feedback gathering, and handling contributions to the core codebase, if applicable. This is directly tied to the integration's maturity and how responsive the integration team is (who may or may not be the team behind the RPC/Firehose implementation).
The Foundation is here to help support throughout the whole process. +Indexing rewards के लिए protocol समर्थन, यदि लागू हो, तो परीक्षण, feedback एकत्र करने और core codebase में योगदान को संभालने के लिए stakeholders की bandwidth पर निर्भर करता है। यह सीधे तौर पर integration की परिपक्वता और integration team कितनी उत्तरदायी है (who may or may not be the team behind the RPC/Firehose implementation) से जुड़ा है। Foundation पूरी प्रक्रिया में सहायता के लिए यहां मौजूद है। -### 4. How will priorities be handled? +### 4. Priorities कैसे संभाली जाएंगी? \#3 के समान, यह समग्र तत्परता और शामिल हितधारकों की क्षमता पर निर्भर करेगा। उदाहरण के लिए, एक नए चेन के साथ एक नई Firehose कार्यान्वयन को उन एकीकरणों की तुलना में अधिक समय लग सकता है जो पहले से ही परीक्षण किए जा चुके हैं या जो शासन प्रक्रिया में आगे बढ़ चुके हैं। diff --git a/website/src/pages/hi/indexing/new-chain-integration.mdx b/website/src/pages/hi/indexing/new-chain-integration.mdx index 0cb393914982..f9fa6a3e209d 100644 --- a/website/src/pages/hi/indexing/new-chain-integration.mdx +++ b/website/src/pages/hi/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: नई श्रृंखला एकीकरण --- -चेन अपने पारिस्थितिकी तंत्र में सबग्राफ़ समर्थन लाने के लिए एक नया `graph-node` एकीकरण शुरू कर सकती हैं। सबग्राफ़ एक शक्तिशाली इंडेक्सिंग उपकरण हैं, जो डेवलपर्स के लिए संभावनाओं की एक नई दुनिया खोलते हैं। ग्राफ़ नोड पहले से ही यहाँ सूचीबद्ध चेन से डेटा को इंडेक्स करता है। यदि आप नए एकीकरण में रुचि रखते हैं, तो दो एकीकरण रणनीतियाँ हैं: +चेनें अपने इकोसिस्टम में Subgraph सपोर्ट लाने के लिए एक नया `graph-node` इंटीग्रेशन शुरू कर सकती हैं। Subgraph एक शक्तिशाली इंडेक्सिंग टूल हैं जो डेवलपर्स के लिए संभावनाओं की दुनिया खोलते हैं। `Graph Node` पहले से ही यहाँ सूचीबद्ध चेन से डेटा इंडेक्स करता है। यदि आप एक नए इंटीग्रेशन में रुचि रखते हैं, तो इसके लिए 2 इंटीग्रेशन रणनीतियाँ हैं: 1. EVM JSON-RPC 2.
Firehose: सभी Firehose एकीकरण समाधान में Substreams शामिल हैं, जो Firehose पर आधारित एक बड़े पैमाने पर स्ट्रीमिंग इंजन है, जिसमें नेटिव `graph-node` समर्थन है, जो समानांतर रूपांतरण की अनुमति देता है। @@ -15,7 +15,7 @@ title: नई श्रृंखला एकीकरण यदि ब्लॉकचेन EVM समान है और क्लाइंट/नोड मानक EVM JSON-RPC API को एक्सपोज़ करता है, तो Graph Node को नए चेन को इंडेक्स करने में सक्षम होना चाहिए। -#### Testing an EVM JSON-RPC +#### एक EVM JSON-RPC का परीक्षण Graph Node को EVM चेन से डेटा इन्गेस्ट करने के लिए, RPC नोड को निम्नलिखित EVM JSON-RPC विधियों को एक्सपोज़ करना होगा: @@ -25,7 +25,7 @@ Graph Node को EVM चेन से डेटा इन्गेस्ट क - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -`trace_filter` *(सीमित ट्रेसिंग और विकल्पतः Graph Node के लिए आवश्यक)* +`trace_filter` _(सीमित ट्रेसिंग और विकल्पतः Graph Node के लिए आवश्यक)_ ### 2. Firehose एकीकरण @@ -33,11 +33,11 @@ > नोट: StreamingFast टीम द्वारा की गई सभी एकीकरणों में श्रृंखला के कोडबेस में Firehose प्रतिकृति प्रोटोकॉल के लिए रखरखाव शामिल है। StreamingFast किसी भी परिवर्तन को ट्रैक करता है और जब आप कोड बदलते हैं और जब StreamingFast कोड बदलता है, तो बाइनरी जारी करता है। इसमें प्रोटोकॉल के लिए Firehose/Substreams बाइनरी जारी करना, श्रृंखला के ब्लॉक मॉडल के लिए Substreams मॉड्यूल को बनाए रखना, और आवश्यकता होने पर ब्लॉकचेन नोड के लिए इंस्ट्रुमेंटेशन के साथ बाइनरी जारी करना शामिल है। -#### Integration for Non-EVM chains +#### Non-EVM चेन के लिए इंटीग्रेशन फायरहोज़ को चेन में एकीकृत करने का प्राथमिक तरीका RPC पॉलिंग रणनीति का उपयोग करना है। हमारी पॉलिंग एल्गोरिदम नए ब्लॉक के आने का पूर्वानुमान लगाएगी और उस समय के करीब नए ब्लॉक के लिए जाँच करने की दर बढ़ा देगी, जिससे यह एक बहुत कम लेटेंसी और प्रभावी समाधान बन जाता है। फायरहोज़ के एकीकरण और रखरखाव में मदद के लिए, [स्ट्रीमिंगफास्ट टीम](https://www.streamingfast.io/firehose-integration-program) से संपर्क करें। नए चेन और उनके एकीकृतकर्ताओं को फायरहोज़ और सबस्ट्रीम द्वारा उनके पारिस्थितिकी तंत्र
में लाए गए [फोर्क जागरूकता](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) और विशाल समानांतर इंडेक्सिंग क्षमताओं की सराहना होगी। -#### Specific Instrumentation for EVM (`geth`) chains +#### EVM (`geth`) चेन के लिए विशिष्ट इंस्ट्रूमेंटेशन EVM चेन के लिए, एक गहरे स्तर के डेटा को प्राप्त करने के लिए `geth` [लाइव-ट्रेसर](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0) का उपयोग किया जाता है, जो गो-एथेरियम और स्ट्रीमिंगफास्ट के बीच सहयोग है, जो उच्च थ्रूपुट और समृद्ध लेनदेन ट्रेसिंग प्रणाली बनाने के लिए है। लाइव ट्रेसर सबसे व्यापक समाधान है, जो [विस्तारित](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) ब्लॉक विवरण का परिणाम है। यह नए इंडेक्सिंग पैरेडाइम्स की अनुमति देता है, जैसे राज्य परिवर्तनों, कॉल्स, पैरेंट कॉल ट्रीज़ के आधार पर घटनाओं का पैटर्न मिलाना, या स्मार्ट कॉन्ट्रैक्ट में वास्तविक वेरिएबल्स में बदलाव के आधार पर घटनाओं को ट्रिगर करना। @@ -47,19 +47,19 @@ EVM चेन के लिए, एक गहरे स्तर के डे ## EVM विचार - JSON-RPC और Firehose के बीच का अंतर -JSON-RPC और Firehose दोनों ही सबग्राफ के लिए उपयुक्त हैं, लेकिन एक Firehose हमेशा आवश्यक होता है यदि डेवलपर्स [सबस्ट्रीम](https://substreams.streamingfast.io) के साथ निर्माण करना चाहते हैं। सबस्ट्रीम का समर्थन करने से डेवलपर्स को नए chain के लिए [सबस्ट्रीम-powered सबग्राफ](/subgraphs/cookbook/substreams-powered-subgraphs/) बनाने की अनुमति मिलती है, और इसके परिणामस्वरूप आपके सबग्राफ की प्रदर्शन क्षमता में सुधार हो सकता है। इसके अतिरिक्त, Firehose — जो कि `ग्राफ-नोड` के JSON-RPC extraction layer का एक drop-in replacement है — सामान्य indexing के लिए आवश्यक RPC कॉल्स की संख्या को 90% तक घटा देता है। +JSON-RPC और Firehose दोनों ही सबग्राफ के लिए उपयुक्त हैं, लेकिन उन डेवलपर्स के लिए Firehose हमेशा आवश्यक होता है जो [सबस्ट्रीम](https://substreams.streamingfast.io) के साथ निर्माण करना चाहते हैं। सबस्ट्रीम का समर्थन करने से डेवलपर्स को नए चेन के लिए [सबस्ट्रीम-powered सबग्राफ](/subgraphs/cookbook/substreams-powered-subgraphs/) बनाने
में मदद मिलती है और यह आपके सबग्राफ के प्रदर्शन को बेहतर बनाने की क्षमता रखता है। इसके अतिरिक्त, Firehose — `graph-node` की JSON-RPC एक्सट्रैक्शन लेयर के ड्रॉप-इन रिप्लेसमेंट के रूप में — सामान्य indexing के लिए आवश्यक RPC कॉल्स की संख्या को 90% तक कम कर देता है। -- सभी `getLogs` कॉल्स और राउंडट्रिप्स को एकल स्ट्रीम द्वारा प्रतिस्थापित किया जाता है, जो सीधे `graph-node` के केंद्र में पहुंचती है; यह एकल ब्लॉक मॉडल सभी सबग्राफ्स के लिए काम करता है जिन्हें यह प्रोसेस करता है। +- सभी `getLogs` कॉल और राउंडट्रिप्स को एकल स्ट्रीम द्वारा बदल दिया जाता है, जो सीधे `graph-node` के केंद्र में पहुँचती है; यह उन सभी Subgraph के लिए एक एकल ब्लॉक मॉडल प्रदान करता है जिन्हें यह प्रोसेस करता है। -> **NOTE**: EVM chains के लिए Firehose-based integration के लिए अभी भी Indexers को chain के संग्रह RPC node को subgraph को ठीक से index करने के लिए चलाने की आवश्यकता होगी। यह `eth_call` RPC विधि द्वारा आम तौर पर पहुंच योग्य smart contract स्थिति प्रदान करने में Firehoses की असमर्थता के कारण है। (It's worth reminding that eth_calls are [not a good practice for developers](/)) +> नोट: Firehose-आधारित एकीकरण के लिए EVM चेन पर अभी भी Indexers को चेन का आर्काइव RPC नोड चलाने की आवश्यकता होगी ताकि सबग्राफ को सही तरीके से Index किया जा सके। इसका कारण यह है कि Firehose आमतौर पर `eth_call` RPC मेथड द्वारा एक्सेस किए जाने वाली स्मार्ट contract स्थिति प्रदान नहीं कर सकता। (यह याद दिलाना महत्वपूर्ण है कि `eth_calls` डेवलपर्स के लिए एक अच्छी प्रैक्टिस नहीं है)। ## Graph Node Configuration -ग्राफ नोड को कॉन्फ़िगर करना आपके स्थानीय वातावरण को तैयार करने के समान आसान है। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप एक उपग्राफ को स्थानीय रूप से डिप्लॉय करके एकीकरण का परीक्षण कर सकते हैं। +ग्राफ-नोड को कॉन्फ़िगर करना उतना ही आसान है जितना कि अपने स्थानीय वातावरण को तैयार करना। एक बार जब आपका स्थानीय वातावरण सेट हो जाता है, तो आप स्थानीय रूप से एक सबग्राफ को तैनात करके एकीकरण का परीक्षण कर सकते हैं। 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2.
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose compliant URL +2. [इस पंक्ति](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) को नए नेटवर्क नाम और EVM JSON-RPC या Firehose संगत URL को शामिल करने के लिए संशोधित करें। > कृपया environment variable का नाम स्वयं न बदलें। network का नाम भिन्न होने पर भी यह `ethereum` ही रहना चाहिए। @@ -67,4 +67,4 @@ JSON-RPC और Firehose दोनों ही सबग्राफ के ल ## सबस्ट्रीम-संचालित सबग्राफ की सेवा -StreamingFast द्वारा संचालित Firehose/सबस्ट्रीम इंटीग्रेशन के लिए, बुनियादी सबस्ट्रीम मॉड्यूल (जैसे डिकोड किए गए लेनदेन, log और स्मार्ट-contract आयोजन) और सबस्ट्रीम कोडजेन टूल्स का बेसिक सपोर्ट शामिल है। ये टूल्स [सबस्ट्रीम-powered सबग्राफ](/substreams/sps/introduction/) को सक्षम बनाने की क्षमता प्रदान करते हैं। [ मार्गदर्शक](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) का अनुसरण करें और `सबस्ट्रीम codegen सबग्राफ` चलाकर कोडजेन टूल्स का अनुभव लें। +StreamingFast के नेतृत्व वाले Firehose/Substreams एकीकरणों के लिए, मूलभूत सबस्ट्रीम मॉड्यूल (जैसे कि डिकोड किए गए लेन-देन, लॉग्स और स्मार्ट-कॉन्ट्रैक्ट इवेंट्स) और सबस्ट्रीम codegen टूल्स के लिए बुनियादी समर्थन शामिल है। ये टूल्स [सबस्ट्रीम-powered सबग्राफ](/substreams/sps/introduction/) को सक्षम करने की क्षमता प्रदान करते हैं। [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) का पालन करें और `substreams codegen subgraph` कमांड चलाकर स्वयं codegen टूल्स का अनुभव करें।
that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn query fees that are rebated according to an exponential rebate function. +Indexers, The Graph Network में node operators होते हैं जो Graph Tokens (GRT) stake करके indexing और query processing services प्रदान करते हैं। वे अपनी सेवाओं के लिए query fees और indexing rewards अर्जित करते हैं। इसके अलावा, उन्हें query fees भी मिलती हैं, जो एक exponential rebate function के अनुसार rebate की जाती हैं। जीआरटी जो प्रोटोकॉल में दांव पर लगा है, विगलन अवधि के अधीन है और यदि अनुक्रमणिका दुर्भावनापूर्ण हैं और अनुप्रयोगों को गलत डेटा प्रदान करते हैं या यदि वे गलत तरीके से अनुक्रमणित करते हैं तो इसे घटाया जा सकता है। इंडेक्सर्स नेटवर्क में योगदान करने के लिए डेलीगेटर्स से प्रत्यायोजित हिस्सेदारी के लिए पुरस्कार भी अर्जित करते हैं। -इंडेक्सर्स सबग्राफ के क्यूरेशन सिग्नल के आधार पर इंडेक्स के लिए सबग्राफ का चयन करते हैं, जहां क्यूरेटर GRT को यह इंगित करने के लिए दांव पर लगाते हैं कि कौन से सबग्राफ उच्च-गुणवत्ता वाले हैं और उन्हें प्राथमिकता दी जानी चाहिए। उपभोक्ता (उदाहरण के लिए अनुप्रयोग) पैरामीटर भी सेट कर सकते हैं जिसके लिए इंडेक्सर्स अपने सबग्राफ के लिए प्रश्नों को प्रोसेस करते हैं और क्वेरी शुल्क मूल्य निर्धारण के लिए वरीयताएँ निर्धारित करते हैं। +Indexers किसी सबग्राफ के curation signal के आधार पर उसे चुनते हैं, जहाँ Curators GRT को स्टेक करते हैं ताकि यह संकेत दिया जा सके कि कौन से Subgraph उच्च-गुणवत्ता वाले हैं और प्राथमिकता दी जानी चाहिए। Consumers (जैसे कि applications) यह भी निर्धारित कर सकते हैं कि कौन से Indexers उनके सबग्राफ के लिए queries को प्रोसेस करें और query fee pricing के लिए अपनी प्राथमिकताएँ सेट कर सकते हैं। ## FAQ -### What is the minimum stake required to be an Indexer on the network? +### नेटवर्क पर Indexer बनने के लिए न्यूनतम स्टेक कितना आवश्यक है? -The minimum stake for an Indexer is currently set to 100K GRT. 
+Indexer के लिए न्यूनतम स्टेक वर्तमान में 100K GRT निर्धारित है। -### What are the revenue streams for an Indexer? +### एक Indexer के लिए राजस्व स्रोत क्या हैं? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**पूछताछ शुल्क rebates** - नेटवर्क पर क्वेरी सर्व करने के लिए किए गए भुगतान। ये भुगतान एक Indexer और एक गेटवे के बीच स्टेट चैनलों के माध्यम से संचालित होते हैं। गेटवे से प्रत्येक क्वेरी अनुरोध में एक भुगतान शामिल होता है और संबंधित प्रतिक्रिया में क्वेरी परिणाम की वैधता का प्रमाण होता है। -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**indexing रिवार्ड्स** - 3% वार्षिक प्रोटोकॉल-वाइड मुद्रास्फीति के माध्यम से उत्पन्न, indexing रिवार्ड्स उन Indexers को वितरित किए जाते हैं जो नेटवर्क के लिए सबग्राफ डिप्लॉयमेंट को इंडेक्स कर रहे हैं। -### How are indexing rewards distributed? +### Indexing इनाम कैसे वितरित किए जाते हैं? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards प्रोटोकॉल मुद्रास्फीति से आते हैं, जो 3% वार्षिक जारी करने के लिए सेट किया गया है। इन्हें सभी सबग्राफ पर कुल क्यूरेशन सिग्नल के अनुपात के आधार पर वितरित किया जाता है, और फिर Indexers को उनके द्वारा उस सबग्राफ पर आवंटित स्टेक के अनुपात में वितरित किया जाता है। **एक आवंटन को मान्य प्रूफ ऑफ Indexing (POI) के साथ बंद किया जाना चाहिए, जो मध्यस्थता चार्टर द्वारा निर्धारित मानकों को पूरा करता हो, ताकि इसे पुरस्कारों के लिए योग्य माना जा सके।** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +समुदाय द्वारा कई उपकरण बनाए गए हैं जो इनाम की गणना करने में मदद करते हैं; आपको इनका संग्रह [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c) में संगठित रूप में मिलेगा। आप #Delegators और #Indexers चैनलों में भी उपकरणों की एक अद्यतन सूची [Discord server](https://discord.gg/graphprotocol) पर पा सकते हैं। यहाँ हम एक [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) को लिंक कर रहे हैं जो indexer software stack के साथ एकीकृत है। -### What is a proof of indexing (POI)? +### Indexing का प्रमाण (POI) क्या है? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. 
A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs का उपयोग नेटवर्क में यह सत्यापित करने के लिए किया जाता है कि कोई Indexer उन सबग्राफ को Indexing कर रहा है जिन पर उन्होंने आवंटन किया है। जब किसी आवंटन को बंद किया जाता है, तो वर्तमान युग के पहले ब्लॉक के लिए एक POI प्रस्तुत करना आवश्यक होता है ताकि वह आवंटन Indexing पुरस्कारों के लिए पात्र हो सके। किसी ब्लॉक के लिए POI उस ब्लॉक तक और उसमें शामिल सभी entity store लेनदेन के लिए एक डाइजेस्ट होता है, जो एक विशिष्ट Subgraph परिनियोजन के लिए होता है। -### When are indexing rewards distributed? +### indexing पुरस्कार कब वितरित किए जाते हैं? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +आवंटन सक्रिय रहते हुए और 28 युगों के भीतर आवंटित होने पर लगातार इनाम अर्जित करते रहते हैं। इनाम Indexers द्वारा एकत्र किए जाते हैं और तब वितरित किए जाते हैं जब उनके आवंटन बंद हो जाते हैं। यह या तो मैन्युअल रूप से होता है, जब भी Indexer उन्हें बलपूर्वक बंद करना चाहता है, या 28 युगों के बाद एक Delegator Indexer के लिए आवंटन बंद कर सकता है, लेकिन इससे कोई इनाम नहीं मिलता। 28 युग अधिकतम आवंटन अवधि है (फिलहाल, एक युग लगभग ~24 घंटे तक चलता है)। -### Can pending indexing rewards be monitored? +### क्या लंबित indexing पुरस्कारों की निगरानी की जा सकती है? 
-The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +RewardsManager contract में एक केवल-पढ़ने योग्य [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) फ़ंक्शन है, जिसका उपयोग किसी विशिष्ट आवंटन के लिए लंबित इनाम की जाँच करने के लिए किया जा सकता है। -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +कई समुदाय द्वारा बनाए गए डैशबोर्ड में पेंडिंग रिवॉर्ड्स के मान होते हैं और इन्हें मैन्युअल रूप से निम्नलिखित कदमों का पालन करके आसानी से चेक किया जा सकता है: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. [mainnet सबग्राफ](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) को क्वेरी करें ताकि सभी सक्रिय आवंटनों के लिए ID प्राप्त की जा सके। ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Etherscan का उपयोग करके `getRewards()` कॉल करें: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- [ईथरस्कैन इंटरफेस पर रिवॉर्ड्स contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) पर जाएं। +- `getRewards()` को कॉल करने के लिए: + - **9. 
getRewards** ड्रॉपडाउन का विस्तार करें। + - इनपुट में **allocationID** दर्ज करें। + - **Query** बटन पर क्लिक करें। -### What are disputes and where can I view them? +### विवाद क्या होते हैं और मैं उन्हें कहाँ देख सकता हूँ? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Indexers की queries और आवंटन दोनों को The Graph में विवाद अवधि के दौरान विवादित किया जा सकता है। विवाद अवधि विवाद के प्रकार के अनुसार भिन्न होती है। Queries/अभिप्रमाणन के लिए 7 युगों की विवाद विंडो होती है, जबकि आवंटन के लिए 56 युगों की अवधि होती है। इन अवधियों के बीतने के बाद, आवंटन या queries के खिलाफ कोई विवाद नहीं खोला जा सकता। जब कोई विवाद खोला जाता है, तो Fishermen को न्यूनतम 10,000 GRT की जमा राशि की आवश्यकता होती है, जिसे विवाद के अंतिम निर्णय और समाधान दिए जाने तक लॉक कर दिया जाता है। Fishermen वे नेटवर्क प्रतिभागी होते हैं जो विवाद खोलते हैं। -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +विवादों के **तीन** संभावित परिणाम होते हैं, और यही मछुआरों की जमा राशि पर भी लागू होता है। -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT.
+- यदि विवाद अस्वीकार कर दिया जाता है, तो फ़िशरमैन द्वारा जमा किया गया GRT नष्ट कर दिया जाएगा, और विवादित Indexer पर कोई दंड नहीं लगाया जाएगा। +- यदि विवाद ड्रा के रूप में निपटाया जाता है, तो मछुआरों की जमा राशि वापस कर दी जाएगी, और विवादित Indexer पर कोई दंड नहीं लगाया जाएगा। +- यदि विवाद स्वीकार कर लिया जाता है, तो मछुआरों द्वारा जमा किया गया GRT वापस कर दिया जाएगा, विवादित Indexer को दंडित किया जाएगा, और मछुआरों को दंडित किए गए GRT का 50% मिलेगा। -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +विवादों को UI में Indexer की प्रोफ़ाइल पृष्ठ पर `Disputes` टैब के अंतर्गत देखा जा सकता है। -### What are query fee rebates and when are they distributed? +### पूछताछ शुल्क रिबेट्स क्या हैं और वे कब वितरित किए जाते हैं? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +पूछताछ शुल्क गेटवे द्वारा एकत्र किए जाते हैं और Indexers को घातांकीय छूट फ़ंक्शन के अनुसार वितरित किए जाते हैं (देखें GIP [यहाँ](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162))। घातांकीय छूट फ़ंक्शन को यह सुनिश्चित करने के तरीके के रूप में प्रस्तावित किया गया है कि Indexers queries की सही सेवा करके सर्वोत्तम परिणाम प्राप्त करें। यह Indexers को एक बड़ी मात्रा में स्टेक आवंटित करने के लिए प्रोत्साहित करके काम करता है (जो किसी query की सेवा करते समय गलती करने पर स्लैश किया जा सकता है) जो वे एकत्र कर सकने वाली पूछताछ शुल्क की मात्रा के सापेक्ष होती है। -Once an allocation has been closed the rebates are available to be claimed by the Indexer.
Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +एक बार आवंटन बंद हो जाने के बाद, रिबेट्स को Indexer द्वारा क्लेम किया जा सकता है। क्लेम करने पर, पूछताछ शुल्क रिबेट्स को Indexer और उनके Delegators के बीच पूछताछ शुल्क कट और घातीय रिबेट फ़ंक्शन के आधार पर वितरित किया जाता है। -### What is query fee cut and indexing reward cut? +### पूछताछ शुल्क कटौती और indexing पुरस्कार कटौती क्या हैं? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +`queryFeeCut` और `indexingRewardCut` मान delegation पैरामीटर हैं, जिन्हें Indexer cooldownBlocks के साथ सेट कर सकता है ताकि Indexer और उनके Delegators के बीच GRT के वितरण को नियंत्रित किया जा सके। Delegation पैरामीटर सेट करने के निर्देशों के लिए [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) के अंतिम चरण देखें। -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** - वह % जो पूछताछ शुल्क रिबेट्स में से Indexer को वितरित किया जाएगा। यदि इसे 95% पर सेट किया गया है, तो जब एक एलोकेशन बंद होगी, तो Indexer को अर्जित किए गए पूछताछ शुल्क का 95% प्राप्त होगा, और शेष 5% Delegators को जाएगा। -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. 
+- **indexingRewardCut** - वह % जो Indexing पुरस्कारों में से Indexer को वितरित किया जाएगा। यदि इसे 95% पर सेट किया जाता है, तो जब कोई आवंटन बंद होता है, तो Indexer को Indexing पुरस्कारों का 95% प्राप्त होगा और Delegators शेष 5% को साझा करेंगे। -### How do Indexers know which subgraphs to index? +### Indexers को कैसे पता चलता है कि कौन से सबग्राफ को index करना है? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers उन्नत तकनीकों को लागू करके सबग्राफ indexing निर्णय लेने में खुद को अलग कर सकते हैं, लेकिन सामान्य विचार देने के लिए, हम नेटवर्क में सबग्राफ का मूल्यांकन करने के लिए उपयोग की जाने वाली कुछ प्रमुख मीट्रिक्स पर चर्चा करेंगे: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - किसी विशेष Subgraph पर लागू किए गए नेटवर्क curation signal का अनुपात उस Subgraph में रुचि का एक अच्छा संकेतक होता है, विशेष रूप से प्रारंभिक चरण में जब क्वेरी वॉल्यूम बढ़ रहा होता है। -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **क्वेरी फीस संग्रहित** - किसी विशेष सबग्राफ के लिए संग्रहित क्वेरी फीस का ऐतिहासिक डेटा भविष्य की मांग का एक अच्छा संकेतक है। -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **राशि दांव पर लगी हुई** - अन्य Indexers के व्यवहार की निगरानी करना या कुल दांव का विशिष्ट सबग्राफ की ओर आवंटित अनुपात देखना, एक Indexer को सबग्राफ क्वेरी के लिए आपूर्ति पक्ष की निगरानी करने में मदद कर सकता है। इससे वे उन सबग्राफ की पहचान कर सकते हैं जिनमें नेटवर्क आत्मविश्वास दिखा रहा है या ऐसे सबग्राफ जिनमें अधिक आपूर्ति की आवश्यकता हो सकती है। -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraph जिनके लिए कोई indexing रिवार्ड नहीं है** - कुछ सबग्राफ को indexing इनाम नहीं मिलते हैं, मुख्य रूप से इसलिए क्योंकि वे असमर्थित सुविधाओं जैसे कि IPFS का उपयोग कर रहे हैं या वे मुख्य नेटवर्क के बाहर किसी अन्य नेटवर्क से क्वेरी कर रहे हैं। यदि कोई सबग्राफ indexing इनाम उत्पन्न नहीं कर रहा है, तो आपको उस पर एक संदेश दिखाई देगा। -### What are the hardware requirements? +### हार्डवेयर आवश्यकताएँ क्या हैं? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **छोटा** - शुरुआत में कुछ सबग्राफ को index करने के लिए पर्याप्त, लेकिन संभवतः विस्तार करने की आवश्यकता होगी। +- **स्टैंडर्ड** - डिफ़ॉल्ट सेटअप, यह वही है जो उदाहरण k8s/terraform परिनियोजन मैनिफेस्ट में उपयोग किया जाता है। +- **मध्यम** - एक प्रोडक्शन Indexer जो 100 सबग्राफ को सपोर्ट करता है और 200-500 अनुरोध प्रति सेकंड प्रोसेस करता है। +- **बड़ा** - वर्तमान में उपयोग किए जा रहे सभी सबग्राफ को इंडेक्स करने और संबंधित ट्रैफ़िक के लिए अनुरोधों को सर्व करने के लिए तैयार। -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| सेटअप | Postgres
(CPUs) | Postgres
(मेमोरी in GBs) | Postgres
(डिस्क in TBs) | VMs
(CPUs) | VMs
(मेमोरी in GBs) | +| ----- | :------------------: | :---------------------------: | :--------------------------: | :-------------: | :----------------------: | +| छोटा | 4 | 8 | 1 | 4 | 16 | +| मानक | 8 | 30 | 1 | 12 | 48 | +| मध्यम | 16 | 64 | 2 | 32 | 64 | +| बड़ा | 72 | 468 | 3.5 | 48 | 184 | -### What are some basic security precautions an Indexer should take? +### एक Indexer को कौन-कौन सी बुनियादी सुरक्षा सावधानियाँ बरतनी चाहिए? -- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. +- **ऑपरेटर वॉलेट** - एक ऑपरेटर वॉलेट सेट अप करना एक महत्वपूर्ण एहतियात है क्योंकि यह एक Indexer को अपनी उन कुंजियों के बीच अलगाव बनाए रखने की अनुमति देता है जो स्टेक को नियंत्रित करती हैं और वे जो दिन-प्रतिदिन के संचालन के नियंत्रण में होती हैं। निर्देशों के लिए [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) देखें। -- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
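As a concrete illustration of the lockdown described in the firewall precaution above, a host firewall could be configured along these lines. This is a hypothetical sketch assuming `ufw` on a Linux host; the port numbers are the defaults quoted on this page, and the rules should be adapted to your own firewall tooling.

```sh
# Hypothetical ufw sketch: expose only the Indexer service port publicly.
sudo ufw default deny incoming
sudo ufw allow 7600/tcp   # indexer-service (public query traffic)
sudo ufw deny 8030/tcp    # Graph Node JSON-RPC (admin - keep closed)
sudo ufw deny 18000/tcp   # Indexer management API (admin - keep closed)
sudo ufw deny 5432/tcp    # Postgres (keep closed)
sudo ufw enable
```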
+- **Firewall** - केवल Indexer सेवा को सार्वजनिक रूप से एक्सपोज़ किया जाना चाहिए और विशेष ध्यान एडमिन पोर्ट्स और डेटाबेस एक्सेस को लॉक करने पर दिया जाना चाहिए: Graph Node JSON-RPC एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 8030), Indexer प्रबंधन API एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 18000), और Postgres डेटाबेस एंडपॉइंट (डिफ़ॉल्ट पोर्ट: 5432) को एक्सपोज़ नहीं किया जाना चाहिए। -## Infrastructure +## इंफ्रास्ट्रक्चर -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +Indexer के इंफ्रास्ट्रक्चर के केंद्र में Graph Node होता है, जो इंडेक्स किए गए नेटवर्क की निगरानी करता है, डेटा को सबग्राफ परिभाषा के अनुसार निकालता और लोड करता है, और इसे एक [GraphQL API](/about/#how-the-graph-works) के रूप में सर्व करता है। Graph Node को प्रत्येक इंडेक्स किए गए नेटवर्क से डेटा एक्सपोज़ करने वाले एक एंडपॉइंट से कनेक्ट करने की आवश्यकता होती है; डेटा स्रोत करने के लिए एक IPFS नोड; अपने स्टोर के लिए एक PostgreSQL डेटाबेस; और Indexer घटक, जो इसे नेटवर्क के साथ इंटरैक्शन की सुविधा प्रदान करते हैं। -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL डेटाबेस** - यह Graph Node के लिए मुख्य स्टोर है, जहाँ Subgraph डेटा संग्रहीत किया जाता है। Indexer सेवा और एजेंट भी इस डेटाबेस का उपयोग state channel डेटा, cost models, Indexing नियमों और allocation क्रियाओं को संग्रहीत करने के लिए करते हैं। -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **डेटा एंडपॉइंट** - EVM-संगत नेटवर्क्स के लिए, Graph Node को एक ऐसे एंडपॉइंट से कनेक्ट करने की आवश्यकता होती है जो EVM-संगत JSON-RPC API को एक्सपोज़ करता हो। यह एक सिंगल क्लाइंट के रूप में हो सकता है या यह एक अधिक जटिल सेटअप हो सकता है जो मल्टीपल क्लाइंट्स के बीच लोड बैलेंस करता हो। यह जानना महत्वपूर्ण है कि कुछ सबग्राफ को विशेष क्लाइंट क्षमताओं की आवश्यकता हो सकती है, जैसे कि आर्काइव मोड और/या पैरिटी ट्रेसिंग API। -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (संस्करण 5 से कम)** - सबग्राफ डिप्लॉयमेंट मेटाडेटा IPFS नेटवर्क पर स्टोर किया जाता है। The Graph Node मुख्य रूप से सबग्राफ डिप्लॉयमेंट के दौरान IPFS node तक पहुंचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक की गई फ़ाइलों को प्राप्त किया जा सके। नेटवर्क Indexers को अपना स्वयं का IPFS node होस्ट करने की आवश्यकता नहीं है, नेटवर्क के लिए एक IPFS node होस्ट किया गया है: https://ipfs.network.thegraph.com. -- **Indexer service** - Handles all required external communications with the network. 
Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Indexer सेवा** - आवश्यक बाहरी संचार को नेटवर्क के साथ संभालती है। लागत मॉडल और इंडेक्सिंग स्थितियों को साझा करती है, गेटवे से आने वाले क्वेरी अनुरोधों को एक Graph Node तक पहुंचाती है, और गेटवे के साथ स्टेट चैनलों के माध्यम से क्वेरी भुगतान को प्रबंधित करती है। -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. +- **Indexer agent** - ऑनचेन पर Indexers की इंटरैक्शन को सुविधाजनक बनाता है, जिसमें नेटवर्क पर पंजीकरण करना, अपने Graph Node पर सबग्राफ परिनियोजन का प्रबंधन करना और आवंटनों का प्रबंधन करना शामिल है। -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Prometheus मेट्रिक्स सर्वर** - Graph Node और Indexer घटक अपने मेट्रिक्स को मेट्रिक्स सर्वर में लॉग करते हैं। -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +कृपया ध्यान दें: चुस्त स्केलिंग का समर्थन करने के लिए, यह अनुशंसा की जाती है कि क्वेरी और indexing संबंधी चिंताओं को विभिन्न सेट के नोड्स के बीच विभाजित किया जाए: क्वेरी नोड्स और इंडेक्स नोड्स। -### Ports overview +### पोर्ट्स का अवलोकन -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **महत्वपूर्ण**: पोर्ट्स को सार्वजनिक रूप से एक्सपोज़ करने में सावधानी बरतें - **प्रशासनिक पोर्ट्स** को सुरक्षित रखा जाना चाहिए। इसमें नीचे दिए गए Graph Node JSON-RPC और Indexer प्रबंधन एंडपॉइंट्स शामिल हैं। -#### ग्राफ-नोड +#### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स | +| ----- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ------------------ | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | -#### Indexer Service +#### Indexer सेवा -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable | +| ----- | ---------------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(भुगतान किए गए सबग्राफ क्वेरीज़ के लिए) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Indexer एजेंट -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| पोर्ट | उद्देश्य | Routes | CLI Argument | Environment Variable | +| ----- | ----------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | Indexer प्रबंधन API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Google Cloud पर Terraform का उपयोग करके सर्वर अवसंरचना सेटअप करें -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Indexers वैकल्पिक रूप से AWS, Microsoft Azure, या Alibaba का उपयोग कर सकते हैं। -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - Google Cloud SDK -- Kubectl command line tool +- Kubectl कमांड लाइन टूल - Terraform -#### Create a Google Cloud Project +#### Google Cloud प्रोजेक्ट बनाएं -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clone करें या [Indexer repository](https://github.com/graphprotocol/indexer) पर जाएं। -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- `./terraform` डायरेक्टरी पर जाएं, यही वह स्थान है जहां सभी कमांड निष्पादित की जानी चाहिए। ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Google Cloud के साथ प्रमाणीकृत करें और एक नया प्रोजेक्ट बनाएं। ```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. 
+- Google Cloud Console के बिलिंग पेज का उपयोग करके नए प्रोजेक्ट के लिए बिलिंग सक्षम करें। -- Create a Google Cloud configuration. +- Google Cloud कॉन्फ़िगरेशन बनाएँ। ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- आवश्यक Google Cloud APIs सक्षम करें। ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- सर्विस अकाउंट बनाएं। ```sh svc_name= @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- डाटाबेस और Kubernetes क्लस्टर के बीच peering सक्षम करें, जो अगले चरण में बनाया जाएगा। ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- न्यूनतम Terraform कॉन्फ़िगरेशन फ़ाइल बनाएँ (आवश्यकतानुसार अपडेट करें)। ```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### टेराफॉर्म का उपयोग करके इंफ्रास्ट्रक्चर बनाएं -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. 
+कोई भी कमांड चलाने से पहले, [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) को पढ़ें और इस डायरेक्टरी में `terraform.tfvars` नाम की एक फ़ाइल बनाएँ (या पिछले चरण में बनाई गई फ़ाइल को संशोधित करें)। प्रत्येक वेरिएबल के लिए, जहाँ आप डिफ़ॉल्ट मान को ओवरराइड करना चाहते हैं या जहाँ आपको कोई मान सेट करने की आवश्यकता है, `terraform.tfvars` में एक सेटिंग दर्ज करें। -- Run the following commands to create the infrastructure. +- इन्फ्रास्ट्रक्चर बनाने के लिए निम्नलिखित कमांड चलाएँ। ```sh -# Install required plugins +# आवश्यक प्लगइन इंस्टॉल करें terraform init -# View plan for resources to be created +# बनने वाले संसाधनों की योजना देखें terraform plan -# Create the resources (expect it to take up to 30 minutes) +# संसाधनों का निर्माण करें (इसे पूरा होने में 30 मिनट तक लग सकते हैं) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +नए क्लस्टर के लिए क्रेडेंशियल्स को `~/.kube/config` में डाउनलोड करें और इसे अपने डिफ़ॉल्ट संदर्भ के रूप में सेट करें। ```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### Indexer के लिए Kubernetes घटकों का निर्माण -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- डायरेक्टरी `k8s/overlays` को एक नई डायरेक्टरी `$dir` में कॉपी करें, और `$dir/kustomization.yaml` में `bases` एंट्री को इस तरह समायोजित करें कि यह `k8s/base` डायरेक्टरी की ओर इशारा करे। -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- सभी फ़ाइलों को `$dir` में पढ़ें और टिप्पणियों में दिए गए निर्देशों के अनुसार किसी भी मान को समायोजित करें। -Deploy all resources with `kubectl apply -k $dir`.
+सभी संसाधनों को `kubectl apply -k $dir` के साथ परिनियोजित करें। -### ग्राफ-नोड +### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) एक ओपन सोर्स Rust इम्प्लीमेंटेशन है जो Ethereum ब्लॉकचेन को इवेंट सोर्स करके एक डेटा स्टोर को डिटर्मिनिस्टिक तरीके से अपडेट करता है, जिसे GraphQL एंडपॉइंट के जरिए क्वेरी किया जा सकता है। डेवलपर्स सबग्राफ का उपयोग करके अपनी स्कीमा को परिभाषित करते हैं और ब्लॉकचेन से सोर्स किए गए डेटा को ट्रांसफॉर्म करने के लिए एक सेट ऑफ मैपिंग्स बनाते हैं, और Graph Node पूरी चेन को सिंक करने, नए ब्लॉक्स की मॉनिटरिंग करने और इसे एक GraphQL एंडपॉइंट के जरिए सर्व करने का काम संभालता है। -#### Getting started from source +#### सोर्स से शुरू करना -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **उबंटू उपयोगकर्ताओं के लिए अतिरिक्त आवश्यकताएँ** - उबंटू पर एक ग्राफ-नोड चलाने के लिए कुछ अतिरिक्त पैकेजों की आवश्यकता हो सकती है। ```sh sudo apt-get install -y clang libpg-dev libssl-dev pkg-config ``` -#### Setup +#### सेटअप -1. Start a PostgreSQL database server +1. PostgreSQL डेटाबेस सर्वर शुरू करें ```sh initdb -D .postgres @@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. 
[Graph Node](https://github.com/graphprotocol/graph-node) रिपॉजिटरी को क्लोन करें और सोर्स को बिल्ड करने के लिए `cargo build` कमांड चलाएँ। -3. Now that all the dependencies are setup, start the Graph Node: +3. अब जब सभी dependencies सेटअप हो गई हैं, तो Graph Node शुरू करें: ```sh cargo run -p graph-node --release -- \ @@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Docker का उपयोग शुरू करना -#### Prerequisites +#### आवश्यक शर्तें -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Ethereum नोड** - डिफ़ॉल्ट रूप से, Docker Compose सेटअप मुख्य नेटवर्क (mainnet) का उपयोग करेगा: [http://host.docker.internal:8545](http://host.docker.internal:8545) आपके होस्ट मशीन पर Ethereum node से कनेक्ट करने के लिए। आप `docker-compose.yaml` को अपडेट करके इस नेटवर्क नाम और URL को बदल सकते हैं। -#### Setup +#### सेटअप -1. Clone Graph Node and navigate to the Docker directory: +1. Graph Node को क्लोन करें और Docker डायरेक्टरी पर नेविगेट करें: ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. सिर्फ़ Linux उपयोगकर्ताओं के लिए - `docker-compose.yaml` में `host.docker.internal` की जगह होस्ट IP एड्रेस का उपयोग करें, दिए गए स्क्रिप्ट का उपयोग करके: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. 
एक लोकल Graph Node शुरू करें जो आपके Ethereum endpoint से कनेक्ट होगा: ```sh docker-compose up ``` -### Indexer components +### Indexer घटक -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +नेटवर्क में सफलतापूर्वक भाग लेने के लिए लगभग निरंतर निगरानी और इंटरैक्शन की आवश्यकता होती है, इसलिए हमने TypeScript एप्लिकेशनों का एक सूट बनाया है जो Indexer की नेटवर्क भागीदारी को सुगम बनाता है। तीन Indexer घटक हैं: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - यह एजेंट नेटवर्क और Indexer के स्वयं के बुनियादी ढांचे की निगरानी करता है और ऑनचेन पर कौन-कौन से सबग्राफ डिप्लॉयमेंट को इंडेक्स और आवंटित किया जाएगा, तथा प्रत्येक के लिए कितना आवंटित किया जाएगा, इसका प्रबंधन करता है। -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer सेवा** - यह एकमात्र घटक है जिसे बाहरी रूप से एक्सपोज़ करने की आवश्यकता होती है। यह सेवा सबग्राफ क्वेरीज़ को Graph Node तक पहुंचाती है, क्वेरी भुगतान के लिए स्टेट चैनल प्रबंधित करती है, और गेटवे जैसे क्लाइंट्स को महत्वपूर्ण निर्णय लेने की जानकारी साझा करती है। -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
+- **Indexer CLI** - कमांड लाइन इंटरफ़ेस जो Indexer एजेंट को प्रबंधित करने के लिए उपयोग किया जाता है। यह Indexers को लागत मॉडल, मैनुअल अलोकेशन, एक्शन कतार, और Indexing नियमों को प्रबंधित करने की अनुमति देता है। -#### Getting started +#### शुरू करना -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +Indexer agent और Indexer service को आपके Graph Node इंफ्रास्ट्रक्चर के साथ ही रखना चाहिए। आपके Indexer components के लिए वर्चुअल execution environments सेटअप करने के कई तरीके हैं; यहाँ हम बताएंगे कि उन्हें baremetal पर NPM पैकेज या source से कैसे चलाया जाए, या फिर Kubernetes और Docker के ज़रिए Google Cloud Kubernetes Engine पर कैसे रन किया जाए। अगर ये सेटअप उदाहरण आपके इंफ्रास्ट्रक्चर के लिए उपयुक्त नहीं हैं, तो संभवतः कोई कम्युनिटी गाइड उपलब्ध होगी, हमें [Discord](https://discord.gg/graphprotocol) पर आकर हैलो कहें! अपने Indexer घटकों को शुरू करने से पहले [protocol में stake](/indexing/overview/#stake-in-the-protocol) करना न भूलें! -#### From NPM packages +#### NPM पैकेजों से - ```sh npm install -g @graphprotocol/indexer-service @@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### स्रोत से ```sh # From Repo root directory @@ -418,16 +418,16 @@ cd packages/indexer-cli ./bin/graph-indexer-cli indexer ...
``` -#### Using docker +#### Docker का उपयोग -- Pull images from the registry +- रजिस्ट्र्री से इमेजेस प्राप्त करें ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +या स्रोत से स्थानीय रूप से छवियाँ बनाएं ```sh # Indexer service @@ -442,24 +442,24 @@ docker build \ -t indexer-agent:latest \ ``` -- Run the components +- कंपोनेंट्स चलाएं ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**नोट**: कंटेनर शुरू करने के बाद, Indexer सेवा [http://localhost:7600](http://localhost:7600) पर उपलब्ध होगी और Indexer एजेंट [http://localhost:18000/](http://localhost:18000/) पर Indexer प्रबंधन API को एक्सपोज़ करेगा। -#### Using K8s and Terraform +#### कुबेरनेट्स (K8s) और टेराफॉर्म का उपयोग -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +देखें [Google Cloud पर Terraform का उपयोग करके सर्वर इंफ्रास्ट्रक्चर सेटअप करें अनुभाग](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### उपयोग -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). 
+> **नोट**: सभी रनटाइम कॉन्फ़िगरेशन वेरिएबल्स या तो कमांड पर स्टार्टअप के समय पैरामीटर्स के रूप में लागू किए जा सकते हैं या फिर `COMPONENT_NAME_VARIABLE_NAME` प्रारूप में एनवायरनमेंट वेरिएबल्स के रूप में उपयोग किए जा सकते हैं (उदाहरण: `INDEXER_AGENT_ETHEREUM`)। -#### Indexer agent +#### Indexer एजेंट ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Indexer सेवा ```sh SERVER_HOST=localhost \ @@ -516,56 +516,58 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +Indexer CLI, [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) के लिए एक प्लगइन है, जिसे टर्मिनल में `graph indexer` कमांड के माध्यम से एक्सेस किया जा सकता है। ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Indexer CLI का उपयोग करके Indexer प्रबंधन -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+**Indexer Management API** के साथ इंटरैक्ट करने के लिए सुझाया गया टूल **Indexer CLI** है, जो कि **Graph CLI** का एक एक्सटेंशन है। Indexer agent को एक Indexer से इनपुट की आवश्यकता होती है ताकि वह Indexer की ओर से नेटवर्क के साथ स्वायत्त रूप से इंटरैक्ट कर सके। +Indexer agent व्यवहार को परिभाषित करने के लिए **allocation management** मोड और **indexing rules** का उपयोग किया जाता है। Auto mode में, एक Indexer **indexing rules** का उपयोग करके यह तय कर सकता है कि वह किन सबग्राफ को इंडेक्स और क्वेरी के लिए सर्व करेगा। इन नियमों को GraphQL API के माध्यम से प्रबंधित किया जाता है, जिसे agent द्वारा सर्व किया जाता है और यह Indexer Management API के रूप में जाना जाता है। +Manual mode में, एक Indexer **actions queue** का उपयोग करके allocation actions बना सकता है और उन्हें निष्पादित करने से पहले स्पष्ट रूप से अनुमोदित कर सकता है। Oversight mode में, **indexing rules** का उपयोग **actions queue** को भरने के लिए किया जाता है और इन्हें निष्पादित करने से पहले भी स्पष्ट अनुमोदन की आवश्यकता होती है। -#### Usage +#### उपयोग -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**Indexer CLI** Indexer agent से कनेक्ट होता है, आमतौर पर पोर्ट-फॉरवर्डिंग के माध्यम से, जिससे CLI को उसी सर्वर या क्लस्टर पर चलाने की जरूरत नहीं होती है। शुरुआत करने के लिए और कुछ संदर्भ देने के लिए, यहां CLI का संक्षिप्त विवरण दिया जाएगा। -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely.
(Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - Indexer प्रबंधन API से कनेक्ट करें। आमतौर पर, सर्वर से कनेक्शन पोर्ट फॉरवर्डिंग के माध्यम से खोला जाता है, जिससे CLI को आसानी से रिमोटली ऑपरेट किया जा सकता है। (उदाहरण: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - एक या अधिक इंडेक्सिंग नियम प्राप्त करें, `` के रूप में `all` का उपयोग करके सभी नियम प्राप्त करें, या `global` का उपयोग करके वैश्विक डिफॉल्ट प्राप्त करें। एक अतिरिक्त आर्ग्यूमेंट `--merged` का उपयोग किया जा सकता है, जो यह निर्दिष्ट करता है कि डिप्लॉयमेंट-विशिष्ट नियम वैश्विक नियम के साथ मर्ज किए गए हैं। यह उसी तरह लागू होते हैं जैसे वे Indexer agent में होते हैं। -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - एक या अधिक indexing नियम सेट करें। -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - यदि उपलब्ध हो तो किसी सबग्राफ डिप्लॉयमेंट का Indexing शुरू करें और इसका `decisionBasis` को `always` पर सेट करें, ताकि Indexer एजेंट इसे हमेशा Index करने के लिए चुने। यदि ग्लोबल नियम `always` पर सेट है, तो नेटवर्क पर उपलब्ध सभी सबग्राफ को Index किया जाएगा। -- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
+- `graph indexer rules stop [options] ` - किसी डिप्लॉयमेंट की इंडेक्सिंग को रोकें और इसका `decisionBasis` को `never` पर सेट करें, जिससे यह डिप्लॉयमेंट को इंडेक्स करने के निर्णय में छोड़ देगा। -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — किसी deployment के लिए `decisionBasis` को `rules` पर सेट करें, ताकि Indexer agent यह तय करने के लिए indexing rules का उपयोग करे कि इस deployment को index करना है या नहीं। -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - एक या अधिक कार्यों को प्राप्त करें `all` का उपयोग करके या सभी कार्य प्राप्त करने के लिए `action-id` को खाली छोड़ दें। एक अतिरिक्त आर्गुमेंट `--status` का उपयोग एक निश्चित स्थिति वाले सभी कार्यों को प्रदर्शित करने के लिए किया जा सकता है। -- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` - आवंटन क्रिया को कतारबद्ध करें -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` - पुनः आवंटन क्रिया को कतारबद्ध करें -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` - आवंटन रद्द करने की क्रिया को कतारबद्ध करें -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - यदि ID निर्दिष्ट नहीं है, तो कतार में सभी कार्रवाइयों को रद्द करें, अन्यथा स्पेस से अलग की गई आईडी की सूची को रद्द करें। -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - कई क्रियाओं को निष्पादन 
के लिए अनुमोदित करें -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - वर्कर को स्वीकृत क्रियाओं को तुरंत निष्पादित करने के लिए बाध्य करें -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +सभी कमांड जो आउटपुट में नियम दिखाते हैं, वे `-output` आर्गुमेंट का उपयोग करके समर्थित आउटपुट फ़ॉर्मेट (`table`, `yaml`, और `json`) में से किसी एक को चुन सकते हैं। -#### Indexing rules +#### Indexing नियम -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing नियमों को या तो वैश्विक डिफ़ॉल्ट के रूप में या विशिष्ट सबग्राफ डिप्लॉयमेंट्स के लिए उनकी IDs का उपयोग करके लागू किया जा सकता है। `deployment` और `decisionBasis` फ़ील्ड अनिवार्य हैं, जबकि सभी अन्य फ़ील्ड वैकल्पिक हैं। जब किसी Indexing नियम में `rules` को `decisionBasis` के रूप में सेट किया जाता है, तो Indexer एजेंट उस नियम पर दिए गए गैर-शून्य थ्रेशोल्ड मानों की तुलना नेटवर्क से प्राप्त संबंधित डिप्लॉयमेंट के मानों से करेगा। यदि सबग्राफ डिप्लॉयमेंट के मान किसी भी थ्रेशोल्ड से ऊपर (या नीचे) होते हैं, तो इसे Indexing के लिए चुना जाएगा। -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
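The `rules` decision logic described above can be sketched as follows (an illustrative sketch only — the field names mirror the `IndexingRule` data model, the network values are assumed to be keyed by the same names for simplicity, and the real evaluation lives in the Indexer agent):

```python
# Illustrative sketch of evaluating an indexing rule with
# decisionBasis = "rules" against values fetched from the network.
# Field names mirror the IndexingRule data model; the actual logic
# is implemented in indexer-agent, not here.

def should_index(rule: dict, network: dict) -> bool:
    """Return True if any non-null threshold on the rule is satisfied."""
    # Thresholds where the deployment's value must be ABOVE the rule's value.
    above = ["minStake", "minSignal", "minAverageQueryFees"]
    # Thresholds where the deployment's value must be BELOW the rule's value.
    below = ["maxSignal", "maxAllocationPercentage"]

    for field in above:
        if rule.get(field) is not None and network.get(field, 0) > rule[field]:
            return True
    for field in below:
        if rule.get(field) is not None and network.get(field, 0) < rule[field]:
            return True
    return False

# Global rule with minStake = 5 GRT: a deployment with more than
# 5 GRT of stake allocated to it is chosen for indexing.
global_rule = {"minStake": 5}
assert should_index(global_rule, {"minStake": 6}) is True
assert should_index(global_rule, {"minStake": 4}) is False
```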
+For example, अगर global rule का `minStake` **5** (GRT) है, तो कोई भी सबग्राफ deployment जिसमें 5 (GRT) से ज्यादा stake allocated है, उसे index किया जाएगा। Threshold rules में शामिल हैं `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, और `minAverageQueryFees`। -Data model: +डेटा मॉडल: ```graphql type IndexingRule { @@ -599,7 +601,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +indexing नियम का उदाहरण उपयोग: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +613,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### कार्य सूची CLI -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. +Indexer-cli एक `actions` मॉड्यूल प्रदान करता है जो मैन्युअल रूप से एक्शन कतार के साथ काम करने के लिए उपयोग किया जाता है। यह **Graphql API**, जो कि indexer management server द्वारा होस्ट की गई है, का उपयोग एक्शन कतार के साथ इंटरैक्ट करने के लिए करता है। -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. 
The general flow will look like: +एक्शन एक्सीक्यूशन वर्कर केवल तभी कतार से आइटम उठाकर निष्पादित करेगा जब उनका `ActionStatus = approved` होगा। अनुशंसित मार्ग में, एक्शन को कतार में ActionStatus = queued के साथ जोड़ा जाता है, इसलिए उन्हें ऑनचेन निष्पादित होने के लिए अनुमोदित किया जाना चाहिए। सामान्य प्रवाह इस प्रकार होगा: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. 
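The queue lifecycle above can be mimicked with a toy model (purely illustrative — the real queue is a table in the indexer-agent database, driven through the indexer management GraphQL API):

```python
# Toy model of the action queue lifecycle: actions enter as "queued",
# must be approved, and only approved actions are executed by the worker.

QUEUED, APPROVED, SUCCESS, FAILED = "queued", "approved", "success", "failed"

class ActionQueue:
    def __init__(self):
        self.actions = {}   # action id -> status
        self._next_id = 1

    def add(self) -> int:
        """On the recommended path, actions are added with status `queued`."""
        action_id = self._next_id
        self._next_id += 1
        self.actions[action_id] = QUEUED
        return action_id

    def approve(self, *ids):
        """Only approved actions are picked up by the execution worker."""
        for action_id in ids:
            self.actions[action_id] = APPROVED

    def execute_approved(self, succeed: bool = True):
        """The worker grabs `approved` actions and records success/failure."""
        for action_id, status in self.actions.items():
            if status == APPROVED:
                self.actions[action_id] = SUCCESS if succeed else FAILED

queue = ActionQueue()
a, b = queue.add(), queue.add()
queue.approve(a)          # b stays queued, so the worker skips it
queue.execute_approved()
assert queue.actions == {a: "success", b: "queued"}
```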
+- 3rd party ऑप्टिमाइज़र टूल या indexer-cli उपयोगकर्ता द्वारा कतार में क्रिया जोड़ी जाती है +- Indexer `indexer-cli` का उपयोग करके सभी कतारबद्ध क्रियाओं को देख सकता है। +- Indexer (या अन्य सॉफ़्टवेयर) `indexer-cli` का उपयोग करके कतार में क्रियाओं को मंजूरी या रद्द कर सकता है। मंजूरी और रद्द करने वाले आदेश एक्शन आईडीज़ के एक एरे को इनपुट के रूप में लेते हैं। +- एक्ज़िक्यूशन वर्कर नियमित रूप से क्यू से अनुमोदित क्रियाओं के लिए पोल करता है। यह क्यू से `approved` क्रियाओं को प्राप्त करेगा, उन्हें निष्पादित करने का प्रयास करेगा, और निष्पादन की स्थिति के आधार पर डाटाबेस में मानों को `success` या `failed` के रूप में अपडेट करेगा। +- अगर कोई क्रिया सफल होती है तो वर्कर यह सुनिश्चित करेगा कि एक indexing नियम मौजूद हो जो एजेंट को यह बताए कि आगे बढ़ते हुए आवंटन को कैसे प्रबंधित करना है, यह उस स्थिति में उपयोगी होता है जब एजेंट `auto` या `oversight` मोड में हो और मैन्युअल क्रियाएं ली जा रही हों। +- Indexer एक्शन कतार की निगरानी कर सकता है ताकि एक्शन निष्पादन के इतिहास को देखा जा सके और यदि आवश्यक हो, तो असफल निष्पादन वाले एक्शन आइटम्स को पुनः अनुमोदित और अपडेट किया जा सके। एक्शन कतार उन सभी एक्शनों का इतिहास प्रदान करती है जो कतारबद्ध और लिए गए हैं। -Data model: +डेटा मॉडल: ```graphql Type ActionInput { @@ -657,7 +659,7 @@ ActionType { } ``` -Example usage from source: +स्रोत से उदाहरण उपयोग: ```bash graph indexer actions get all @@ -677,141 +679,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +ध्यान दें कि आवंटन प्रबंधन के लिए समर्थित action types की इनपुट आवश्यकताएँ अलग-अलग होती हैं: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - किसी विशिष्ट सबग्राफ डिप्लॉयमेंट के लिए स्टेक आवंटित करें - - required action params: + - आवश्यक क्रिया पैरामीटर्स: - deploymentID - - amount + - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` - आवंटन बंद करें, जिससे दांव को मुक्त किया जा सके
और इसे कहीं और पुनः आवंटित किया जा सके। - - required action params: + - आवश्यक क्रिया पैरामीटर्स: - allocationID - deploymentID - - optional action params: + - वैकल्पिक क्रिया पैरामीटर: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (दिए गए POI का उपयोग तब भी करें यदि यह ग्राफ-नोड द्वारा प्रदान किए गए से मेल नहीं खाता) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - परमाणु रूप से आवंटन को बंद करें और उसी Subgraph परिनियोजन के लिए एक नया आवंटन खोलें - - required action params: + - आवश्यक क्रिया पैरामीटर: - allocationID - deploymentID - - amount - - optional action params: + - amount + - वैकल्पिक क्रिया पैरामीटर्स: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (दिए गए POI का उपयोग करने के लिए मजबूर करता है, भले ही वह ग्राफ-नोड द्वारा प्रदान किए गए डेटा से मेल न खाए) -#### Cost models +#### लागत मॉडल -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +कॉस्ट मॉडल बाज़ार और क्वेरी विशेषताओं के आधार पर क्वेरी के लिए डायनामिक मूल्य निर्धारण प्रदान करते हैं। Indexer Service प्रत्येक सबग्राफ के लिए गेटवे के साथ एक कॉस्ट मॉडल साझा करता है, जिसके लिए वे क्वेरी का जवाब देने का इरादा रखते हैं। बदले में, गेटवे इस कॉस्ट मॉडल का उपयोग प्रति क्वेरी Indexer चयन निर्णय लेने और चुने गए Indexers के साथ भुगतान पर बातचीत करने के लिए करते हैं। #### Agora
For each top-level query, the first statement which matches it determines the price for that query. +Agora भाषा क्वेरी के लिए लागत मॉडल घोषित करने के लिए एक लचीला प्रारूप प्रदान करती है। एक Agora मूल्य मॉडल बयानों का एक क्रम होता है जो प्रत्येक शीर्ष-स्तरीय GraphQL क्वेरी के लिए क्रम में निष्पादित होता है। प्रत्येक शीर्ष-स्तरीय क्वेरी के लिए, पहला कथन जो उससे मेल खाता है, उस क्वेरी के लिए मूल्य निर्धारित करता है। -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. +एक कथन में एक predicate होता है, जिसका उपयोग GraphQL queries से मिलान करने के लिए किया जाता है, और एक cost expression होता है, जो मूल्यांकन किए जाने पर दशमलव GRT में एक लागत आउटपुट करता है। किसी क्वेरी में नामित आर्गुमेंट स्थिति के मानों को predicate में कैप्चर किया जा सकता है और expression में उपयोग किया जा सकता है। Globals भी सेट किए जा सकते हैं और expression में प्लेसहोल्डर्स के लिए प्रतिस्थापित किए जा सकते हैं। -Example cost model: +उदाहरण लागत मॉडल: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# यह कथन `skip` मान को प्राप्त करता है, +# शर्त में एक बूलियन अभिव्यक्ति का उपयोग करता है ताकि `skip` का उपयोग करने वाले विशिष्ट क्वेरीज़ का मिलान किया जा सके, +# और `skip` मान और SYSTEM_LOAD ग्लोबल के आधार पर लागत की गणना करने के लिए लागत अभिव्यक्ति का उपयोग करता है। query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost +# यह डिफ़ॉल्ट किसी भी GraphQL अभिव्यक्ति से मेल खाएगा। +# यह ग्लोबल का उपयोग करके लागत की गणना करने के लिए अभिव्यक्ति में प्रतिस्थापित करता है। default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +उपरोक्त मॉडल का उपयोग करके उदाहरण क्वेरी लागत: -| Query | Price | +| Query | कीमत | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### लागत मॉडल लागू करना -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +कॉस्ट मॉडल को Indexer CLI के माध्यम से लागू किया जाता है, जो उन्हें Indexer एजेंट के Indexer Management API को पास करता है ताकि उन्हें डेटाबेस में संग्रहीत किया जा सके। इसके बाद, Indexer Service उन्हें उठाएगी और जब भी गेटवे इनकी मांग करेंगे, तो उन्हें कॉस्ट मॉडल प्रदान करेगी। ```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## नेटवर्क के साथ इंटरैक्ट करना -### Stake in the protocol +### प्रोटोकॉल में staking -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
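The example Agora model and its costing table shown earlier can be checked numerically; a rough sketch (assuming `SYSTEM_LOAD = 1.0`, and with Agora's GraphQL predicate matching reduced to inspecting a pre-extracted `skip` value):

```python
# Rough numeric check of the example Agora cost model above, assuming
# SYSTEM_LOAD = 1.0. Real Agora matches GraphQL predicates; here the
# matching is simplified to a skip value extracted beforehand.

SYSTEM_LOAD = 1.0

def price_top_level_query(skip=None):
    # First matching statement wins: pairs(skip: $skip) when $skip > 2000
    if skip is not None and skip > 2000:
        return 0.0001 * skip * SYSTEM_LOAD
    # default => 0.1 * $SYSTEM_LOAD
    return 0.1 * SYSTEM_LOAD

def price_query(top_level_queries):
    # A GraphQL query is priced per top-level query; sum the parts.
    return sum(price_top_level_query(**q) for q in top_level_queries)

assert abs(price_query([{"skip": 5000}]) - 0.5) < 1e-9       # pairs(skip: 5000)
assert abs(price_query([{}]) - 0.1) < 1e-9                   # tokens { symbol }
assert abs(price_query([{"skip": 5000}, {}]) - 0.6) < 1e-9   # both together
```

These three checks reproduce the 0.5 / 0.1 / 0.6 GRT rows of the costing table under the stated assumption.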
+नेटवर्क में एक Indexer के रूप में भाग लेने के पहले कदम हैं: प्रोटोकॉल को अनुमोदित करना, धन को स्टेक करना, और (वैकल्पिक रूप से) दिन-प्रतिदिन की प्रोटोकॉल इंटरैक्शन के लिए एक ऑपरेटर पता सेट करना। -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> नोट: contract इंटरैक्शन के लिए इन निर्देशों में Remix का उपयोग किया जाएगा, लेकिन आप अपनी पसंद के किसी भी टूल का उपयोग कर सकते हैं ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), और [MyCrypto](https://www.mycrypto.com/account) कुछ अन्य ज्ञात टूल हैं)। -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +एक बार जब किसी Indexer ने प्रोटोकॉल में GRT को स्टेक कर दिया है, तो [Indexer components](/indexing/overview/#indexer-components) को शुरू किया जा सकता है और वे नेटवर्क के साथ अपनी इंटरैक्शन शुरू कर सकते हैं। -#### Approve tokens +#### टोकन स्वीकृत करें -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. [Remix app](https://remix.ethereum.org/) को ब्राउज़र में खोलें -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. `File Explorer` में **GraphToken.abi** नामक फ़ाइल बनाएं जिसमें [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json) हो। -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. `GraphToken.abi` चयनित और संपादक में खुला होने पर, Remix इंटरफ़ेस में `Deploy and run transactions` अनुभाग पर स्विच करें। -4.
Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. पर्यावरण के अंतर्गत `Injected Web3` चुनें और `Account` के अंतर्गत अपना Indexer पता चुनें। -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. GraphToken contract एड्रेस सेट करें - `At Address` के बगल में GraphToken कॉन्ट्रैक्ट एड्रेस (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) पेस्ट करें और लागू करने के लिए `At address` बटन पर क्लिक करें। -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. `approve(spender, amount)` फ़ंक्शन को कॉल करके Staking कॉन्ट्रैक्ट को अप्रूव करें। `spender` को Staking contract एड्रेस (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) से भरें और `amount` में स्टेक किए जाने वाले टोकन (wei में) डालें। -#### Stake tokens +#### टोकन स्टेक करें -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. [Remix app](https://remix.ethereum.org/) को ब्राउज़र में खोलें -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. `File Explorer` में **Staking.abi** नाम की एक फ़ाइल बनाएं जिसमें स्टेकिंग ABI हो। -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. `Staking.abi` को संपादक में चयनित और खुला रखने के साथ, Remix इंटरफ़ेस में `Deploy and run transactions` अनुभाग पर स्विच करें। -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. पर्यावरण के अंतर्गत `Injected Web3` चुनें और `Account` के अंतर्गत अपना Indexer पता चुनें। -5.
Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Staking कॉन्ट्रैक्ट एड्रेस सेट करें - `At Address` के पास Staking contract एड्रेस (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) पेस्ट करें और इसे लागू करने के लिए `At address` बटन पर क्लिक करें। -6. Call `stake()` to stake GRT in the protocol. +6. `stake()` को कॉल करें ताकि प्रोटोकॉल में GRT को स्टेक किया जा सके। -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers दूसरे पते को अपने Indexer इंफ्रास्ट्रक्चर के लिए ऑपरेटर के रूप में अनुमोदित कर सकते हैं ताकि उन कुंजियों को अलग किया जा सके जो धन को नियंत्रित करती हैं और जो दिन-प्रतिदिन की क्रियाएँ जैसे सबग्राफ पर आवंटन करना और (भुगतान किए गए) क्वेरीज़ की सेवा करना कर रही हैं। ऑपरेटर सेट करने के लिए, `setOperator()` को ऑपरेटर पते के साथ कॉल करें। -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8.
(Optional) पुरस्कारों के वितरण को नियंत्रित करने और रणनीतिक रूप से Delegators को आकर्षित करने के लिए, Indexers अपने delegation पैरामीटर्स को अपडेट कर सकते हैं। इसके लिए वे `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), और `cooldownBlocks` (ब्लॉक्स की संख्या) को अपडेट कर सकते हैं। ऐसा करने के लिए, `setDelegationParameters()` को कॉल करें। निम्नलिखित उदाहरण में `queryFeeCut` को सेट किया गया है ताकि 95% क्वेरी रिबेट्स Indexer को और 5% Delegators को वितरित किए जाएं, `indexingRewardCut` को सेट किया गया है ताकि 60% Indexing पुरस्कार Indexer को और 40% Delegators को वितरित किए जाएं, और `cooldownBlocks` अवधि को 500 ब्लॉक्स पर सेट किया गया है। ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### delegation पैरामीटर सेट करना -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +`setDelegationParameters()` फ़ंक्शन [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) में आवश्यक है, जो Indexers को उन मापदंडों को सेट करने की अनुमति देता है जो उनके Delegators के साथ इंटरैक्शन को परिभाषित करते हैं, जिससे उनके इनाम साझा करने और delegation क्षमता को प्रभावित किया जाता है। -### How to set delegation parameters +### delegation पैरामीटर सेट करने का तरीका -To set the delegation parameters using Graph Explorer interface, follow these steps: -1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4.
Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. [Graph Explorer](https://thegraph.com/explorer/) को नेविगेट करें। +2. अपने वॉलेट को कनेक्ट करें। मल्टीसिग (जैसे Gnosis Safe) चुनें और फिर मुख्य नेटवर्क (mainnet) का चयन करें। ध्यान दें: आपको इस प्रक्रिया को Arbitrum One के लिए दोहराने की आवश्यकता होगी। +3. अपने वॉलेट को एक साइनर के रूप में कनेक्ट करें। +4. `सेटिंग्स` अनुभाग पर जाएं और `delegation पैरामीटर्स` का चयन करें। इन पैरामीटर्स को वांछित सीमा के भीतर प्रभावी कट प्राप्त करने के लिए कॉन्फ़िगर किया जाना चाहिए। प्रदान किए गए इनपुट फ़ील्ड में मान दर्ज करने पर, इंटरफ़ेस स्वचालित रूप से प्रभावी कट की गणना करेगा। वांछित प्रभावी कट प्रतिशत प्राप्त करने के लिए इन मानों को आवश्यकतानुसार समायोजित करें। +5. लेन-देन (transaction) को नेटवर्क पर जमा करें। -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> नोट: इस लेन-देन (transaction) की पुष्टि मल्टीसिग वॉलेट साइनर्स द्वारा की जानी होगी। -### The life of an allocation +### एक आवंटन का जीवन -After being created by an Indexer a healthy allocation goes through two states. +एक Indexer द्वारा बनाए जाने के बाद, एक स्वस्थ आवंटन दो अवस्थाओं से गुजरता है। -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **सक्रिय** - एक बार जब ऑनचेन पर आवंटन बनाया जाता है ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), तो इसे **सक्रिय** माना जाता है। Indexer के स्वयं के और/या प्रत्यायोजित स्टेक का एक हिस्सा किसी सबग्राफ परिनियोजन की ओर आवंटित किया जाता है, जो उन्हें उस सबग्राफ परिनियोजन के लिए इंडेक्सिंग पुरस्कारों का दावा करने और क्वेरीज़ को सर्व करने की अनुमति देता है। Indexer एजेंट, Indexer नियमों के आधार पर आवंटन बनाने का प्रबंधन करता है। -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **बंद** - एक Indexer एक आवंटन को बंद करने के लिए स्वतंत्र होता है जब 1 युग (epoch) बीत चुका हो ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) या उनका Indexer एजेंट **maxAllocationEpochs** (वर्तमान में 28 दिन) के बाद स्वचालित रूप से आवंटन बंद कर देगा। जब कोई आवंटन एक वैध प्रूफ ऑफ indexing (POI) के साथ बंद किया जाता है, तो उनके indexing पुरस्कार Indexer और उसके Delegators को वितरित किए जाते हैं ([अधिक जानें](/indexing/overview/#how-are-indexing-rewards-distributed))। -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers को अनुशंसा दी जाती है कि वे onchain पर allocation बनाने से पहले सबग्राफ deployments को chainhead तक sync करने के लिए offchain syncing सुविधा का उपयोग करें। यह सुविधा विशेष रूप से उन सबग्राफ के लिए उपयोगी है जिन्हें sync होने में 28 epochs से अधिक समय लग सकता है या जिनके अनिश्चित रूप से विफल होने की संभावना हो सकती है। diff --git a/website/src/pages/hi/indexing/supported-network-requirements.mdx b/website/src/pages/hi/indexing/supported-network-requirements.mdx index 647eda3e6651..a9ea52e588e5 100644 --- a/website/src/pages/hi/indexing/supported-network-requirements.mdx +++ b/website/src/pages/hi/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| नेटवर्क | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_अंतिम बार अपडेट किया गया 22 जून 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| नेटवर्क | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_अंतिम बार अपडेट किया गया 22 जून 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/hi/indexing/tap.mdx b/website/src/pages/hi/indexing/tap.mdx index d2a42ac00ea5..bed6a68c4a5d 100644 --- a/website/src/pages/hi/indexing/tap.mdx +++ b/website/src/pages/hi/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP माइग्रेशन गाइड +title: GraphTally Guide --- -The Graph के नए भुगतान प्रणाली, Timeline Aggregation Protocol, TAP के बारे में जानें। यह प्रणाली तेज, कुशल माइक्रोट्रांजेक्शन प्रदान करती है जिसमें विश्वास को न्यूनतम किया गया है। +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. -## अवलोकन +## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) मौजूदा Scalar भुगतान प्रणाली का एक ड्रॉप-इन प्रतिस्थापन है। यह निम्नलिखित प्रमुख सुविधाएँ प्रदान करता है: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - सूक्ष्म भुगतानों को कुशलता से संभालता है। - ऑनचेन लेनदेन और लागतों में समेकन की एक परत जोड़ता है। - प्राप्तियों और भुगतान पर Indexers को नियंत्रण की अनुमति देता है, प्रश्नों के लिए भुगतान की गारंटी देता है। - यह विकेन्द्रीकृत, विश्वास रहित गेटवे को सक्षम बनाता है और कई भेजने वालों के लिए indexer-service के प्रदर्शन में सुधार करता है। -## विशिष्टताएँ +### विशिष्टताएँ -TAP एक प्रेषक को एक प्राप्तकर्ता को कई भुगतान करने की अनुमति देता है, TAP Receipts, जो इन भुगतानों को एकल भुगतान में एकत्र करता है, जिसे Receipt Aggregate Voucher भी कहा जाता है, जिसे RAV के नाम से भी जाना जाता है। यह एकत्रित भुगतान फिर ब्लॉकचेन पर सत्यापित किया जा सकता है, लेनदेन की संख्या को कम करता है और भुगतान प्रक्रिया को सरल बनाता है। +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. प्रत्येक क्वेरी के लिए, गेटवे आपको एक साइन किए गए रिसिप्ट भेजेगा जिसे आपके डेटाबेस में संग्रहीत किया जाएगा। फिर, इन क्वेरियों को एक अनुरोध के माध्यम से एक टेप-एजेंट द्वारा समेकित किया जाएगा। इसके बाद, आपको एक RAV प्राप्त होगा। आप नए रिसिप्ट्स के साथ इसे भेजकर RAV को अपडेट कर सकते हैं और इससे एक नया RAV उत्पन्न होगा जिसमें बढ़ी हुई राशि होगी। @@ -51,22 +51,22 @@ TAP एक प्रेषक को एक प्राप्तकर्ता | AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | | Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | -### गेटवे +### गेटवे -| घटक | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | -| ---------------- | --------------------------------------------- | --------------------------------------------- | -| प्रेषक | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| हस्ताक्षरकर्ता | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| संकेन्द्रीयकर्ता | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| घटक | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | +| ----------------- | --------------------------------------------- | --------------------------------------------- | +| प्रेषक | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| हस्ताक्षरकर्ता | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| संकेन्द्रीयकर्ता | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### आवश्यक शर्तें -एक Indexer चलाने की सामान्य आवश्यकताओं के अलावा, आपको TAP अपडेट को क्वेरी करने 
के लिए एक tap-escrow-subgraph एंडपॉइंट की आवश्यकता होगी। आप TAP को क्वेरी करने के लिए The Graph Network का उपयोग कर सकते हैं या अपने graph-node पर स्वयं होस्ट कर सकते हैं। +In addition to the typical requirements for running an Indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (The Graph टेस्टनेट के लिए)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (The Graph टेस्टनेट के लिए)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (The Graph mainnet के लिए)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> नोट: `indexer-agent` वर्तमान में इस subgraph का indexing नेटवर्क subgraph डिप्लॉयमेंट की तरह नहीं करता है। इसके परिणामस्वरूप, आपको इसे मैन्युअल रूप से इंडेक्स करना होगा। +> `indexer-agent` वर्तमान में इस सबग्राफ की Indexing उसी तरह नहीं करता जैसे वह नेटवर्क सबग्राफ डिप्लॉयमेंट के लिए करता है। इसलिए, आपको इसे मैन्युअल रूप से इंडेक्स करना होगा। ## माइग्रेशन गाइड @@ -79,7 +79,7 @@ TAP एक प्रेषक को एक प्राप्तकर्ता 1. **Indexer एजेंट** - उसी प्रक्रिया का पालन करें'(https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - नया तर्क --tap-subgraph-endpoint दें ताकि नए TAP कोडपाथ्स को सक्रिय किया जा सके और TAP RAVs को रिडीम करने की अनुमति मिल सके। + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. 
**Indexer सेवा** @@ -99,73 +99,72 @@ TAP एक प्रेषक को एक प्राप्तकर्ता "कम से कम कॉन्फ़िगरेशन के लिए, निम्नलिखित टेम्पलेट का उपयोग करें:" ```bash -#आपको नीचे दिए गए *सभी* मान अपनी सेटअप के अनुसार बदलने होंगे। -*नीचे दिए गए कुछ कॉन्फ़िग वैल्यू ग्लोबल ग्राफ नेटवर्क वैल्यू हैं, जिन्हें आप यहां पा सकते हैं: +#आपको नीचे दिए गए सभी मानों को अपने सेटअप के अनुसार बदलना होगा। # - +#कुछ कॉन्फ़िगरेशन नीचे वैश्विक ग्राफ नेटवर्क मान हैं, जिन्हें आप यहां देख सकते हैं: +# +#प्रो टिप: यदि आपको इस कॉन्फ़िगरेशन में कुछ मान वातावरण से लोड करने की आवश्यकता है, तो उन्हें +#पर्यावरण वेरिएबल्स से अधिलेखित किया जा सकता है। उदाहरण के लिए, निम्नलिखित को +#[PREFIX]_DATABASE_POSTGRESURL से बदला जा सकता है, जहां PREFIX `INDEXER_SERVICE` या `TAP_AGENT` हो सकता है: # -#प्रो टिप: यदि आपको इस कॉन्फ़िग में कुछ मान environment से लोड करने की आवश्यकता है, तो आप environment वेरिएबल्स का उपयोग करके ओवरराइट कर सकते हैं। उदाहरण के लिए, निम्नलिखित को [PREFIX]_DATABASE_POSTGRESURL से बदला जा सकता है, जहां PREFIX `INDEXER_SERVICE` या `TAP_AGENT` हो सकता है: -[database] -#postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" +#[database] +#postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + [indexer] indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] - -Postgres डेटाबेस का URL जो indexer components के लिए उपयोग किया जाता है। वही डेटाबेस -जो indexer-agent द्वारा उपयोग किया जाता है। यह अपेक्षित है कि indexer-agent आवश्यक तालिकाएं बनाएगा। +# Indexer घटकों के लिए उपयोग किए जाने वाले Postgres डेटाबेस का URL। वही डेटाबेस +# जिसका उपयोग `indexer-agent` द्वारा किया जाता है। यह अपेक्षित है कि `indexer-agent` +#आवश्यक तालिकाएँ बनाएगा। postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -आपके graph-node के क्वेरी एंडपॉइंट का URL +# आपके graph-node के क्वेरी एंडपॉइंट का URL query_url = "" - -आपके 
graph-node के स्टेटस एंडपॉइंट का URL +# आपके graph-node के स्टेटस एंडपॉइंट का URL status_url = "" [subgraphs.network] -Graph Network subgraph के लिए क्वेरी URL। +# Graph Network सबग्राफ के लिए क्वेरी URL। query_url = "" - -वैकल्पिक, local graph-node में देखने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया है। -subgraph को स्थानीय रूप से इंडेक्स करना अनुशंसित है। -नोट: केवल query_url या deployment_id का उपयोग करें +# वैकल्पिक, स्थानीय `graph-node` में खोजने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया हो। +# सबग्राफ को स्थानीय रूप से इंडेक्स करना अनुशंसित है। +# नोट: केवल `query_url` या `deployment_id` का उपयोग करें deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -Escrow subgraph के लिए क्वेरी URL। +#Escrow Subgraph के लिए क्वेरी URL। query_url = "" - -वैकल्पिक, local graph-node में देखने के लिए deployment, यदि स्थानीय रूप से इंडेक्स किया गया है। -subgraph को स्थानीय रूप से इंडेक्स करना अनुशंसित है। -नोट: केवल query_url या deployment_id का उपयोग करें +#वैकल्पिक, स्थानीय `graph-node` में मौजूद deployment, यदि इसे स्थानीय रूप से index किया गया हो। +#स्थानीय रूप से सबग्राफ को index करना अनुशंसित है। +#नोट: केवल query_url या deployment_id में से किसी एक का उपयोग करें deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [blockchain] - -उस नेटवर्क का chain ID जिस पर graph network चल रहा है +#उस नेटवर्क का chain ID जिस पर The Graph नेटवर्क चल रहा है chain_id = 1337 - -TAP के receipt aggregate voucher (RAV) verifier का कॉन्ट्रैक्ट एड्रेस। +#TAP की receipt aggregate voucher (RAV) verifier का कॉन्ट्रैक्ट पता। receipts_verifier_address = "0x2222222222222222222222222222222222222222" ######################################## -#tap-agent के लिए विशिष्ट कॉन्फ़िगरेशन# +#tap-agent के लिए विशिष्ट कॉन्फ़िगरेशन ######################################## [tap] -#यह वह फीस की मात्रा है जिसे आप किसी भी समय जोखिम में डालने के लिए तैयार हैं। उदाहरण के लिए, -#यदि sender लंबे समय तक RAVs प्रदान करना बंद कर देता है और फीस इस -#राशि 
से अधिक हो जाती है, तो indexer-service sender से क्वेरी स्वीकार करना बंद कर देगा -#जब तक कि फीस को समेकित नहीं किया जाता। -#नोट: राउंडिंग त्रुटियों से बचने के लिए दशमलव मानों के लिए strings का उपयोग करें +#यह वह राशि है जिसे आप किसी भी समय जोखिम में डालने के लिए तैयार हैं। उदाहरण के लिए, +#यदि प्रेषक (sender) RAVs को लंबे समय तक प्रदान करना बंद कर देता है और शुल्क इस +#राशि से अधिक हो जाता है, तो indexer-service प्रेषक से क्वेरी स्वीकार करना बंद कर देगा +#जब तक कि शुल्क एकत्रित नहीं हो जाते। +#नोट: राउंडिंग त्रुटियों को रोकने के लिए दशमलव मूल्यों के लिए स्ट्रिंग्स का उपयोग करें #जैसे: #max_amount_willing_to_lose_grt = "0.1" max_amount_willing_to_lose_grt = 20 [tap.sender_aggregator_endpoints] -सभी senders और उनके aggregator endpoints के key-value -नीचे दिया गया यह उदाहरण E&N टेस्टनेट गेटवे के लिए है। +#सभी प्रेषकों और उनके aggregator endpoints की Key-Value +#नीचे दिया गया उदाहरण E&N testnet gateway के लिए है। 0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` @@ -187,7 +186,7 @@ max_amount_willing_to_lose_grt = 20 ### Grafana डैशबोर्ड -आप Grafana Dashboard (https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) डाउनलोड कर सकते हैं और इम्पोर्ट कर सकते हैं। +आप Grafana Dashboard (https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) डाउनलोड कर सकते हैं और इम्पोर्ट कर सकते हैं। ### लॉन्चपैड diff --git a/website/src/pages/hi/indexing/tooling/firehose.mdx b/website/src/pages/hi/indexing/tooling/firehose.mdx index d2a13417500b..59ee28be31eb 100644 --- a/website/src/pages/hi/indexing/tooling/firehose.mdx +++ b/website/src/pages/hi/indexing/tooling/firehose.mdx @@ -8,7 +8,7 @@ Firehose एक नई तकनीक है जिसे StreamingFast ने The Graph ने Go Ethereum/geth में विलय कर लिया है और [Live Tracer with v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0) को अपनाया है। -Firehose extracts, transforms and saves blockchain data in a highly performant file-based strategy. 
Blockchain developers can then access data extracted by Firehose through binary data streams. Firehose is intended to stand as a replacement for The Graph’s original blockchain data extraction layer. +Firehose अत्यधिक प्रदर्शन वाली file-based strategy में blockchain data को निकालता है, परिवर्तित करता है और सहेजता है। Blockchain developers binary data streams के माध्यम से Firehose द्वारा निकाले गए data तक पहुंच सकते हैं। Firehose का उद्देश्य The Graph की मूल blockchain data extraction layer का प्रतिस्थापन बनना है। ## Firehose Documentation @@ -19,6 +19,6 @@ Firehose का दस्तावेज़ वर्तमान में Stre - Firehose का परिचय पढ़ें [Firehose introduction](https://firehose.streamingfast.io/introduction/firehose-overview) यह जानने के लिए कि यह क्या है और इसे क्यों बनाया गया। - [Prerequisites](https://firehose.streamingfast.io/introduction/prerequisites) के बारे में जानें ताकि Firehose को इंस्टॉल और डिप्लॉय किया जा सके। -### Expand Your Knowledge +### अपने ज्ञान का विस्तार करें - विभिन्न [Firehose components](https://firehose.streamingfast.io/architecture/components) के बारे में जानें। diff --git a/website/src/pages/hi/indexing/tooling/graph-node.mdx b/website/src/pages/hi/indexing/tooling/graph-node.mdx index 9acca5cf6557..8c08b11baec1 100644 --- a/website/src/pages/hi/indexing/tooling/graph-node.mdx +++ b/website/src/pages/hi/indexing/tooling/graph-node.mdx @@ -1,40 +1,40 @@ --- -title: ग्राफ-नोड +title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. 
+Graph Node वह घटक है जो सबग्राफ को अनुक्रमित करता है, और परिणामी डेटा को GraphQL API के माध्यम से क्वेरी करने के लिए उपलब्ध कराता है। इसलिए, यह Indexer स्टैक के लिए केंद्रीय है, और ग्राफ-नोड का सही संचालन एक सफल Indexer चलाने के लिए अत्यंत महत्वपूर्ण है। ग्राफ-नोड का संदर्भ और indexers के लिए उपलब्ध कुछ उन्नत विकल्पों का परिचय प्रदान करता है। विस्तृत दस्तावेज़ और निर्देश [Graph Node repository](https://github.com/graphprotocol/graph-node) में पाए जा सकते हैं। -## ग्राफ-नोड +## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) The Graph Network पर सबग्राफ को indexing करने के लिए रेफरेंस इंप्लीमेंटेशन है, जो ब्लॉकचेन क्लाइंट्स से जुड़ता है, सबग्राफ को indexing करता है और इंडेक्स किए गए डेटा को queries के लिए उपलब्ध कराता है। +[Graph Node](https://github.com/graphprotocol/graph-node) The Graph Network पर सबग्राफ को indexing करने के लिए संदर्भ कार्यान्वयन है, जो ब्लॉकचेन क्लाइंट्स से जुड़ता है, सबग्राफ को indexing करता है और अनुक्रमित डेटा को क्वेरी करने के लिए उपलब्ध कराता है। Graph Node (और पूरा indexer stack) को bare metal पर या एक cloud environment में चलाया जा सकता है। The Graph Protocol की मजबूती के लिए केंद्रीय indexing घटक की यह लचीलापन बहुत महत्वपूर्ण है। इसी तरह, ग्राफ-नोड को [साधन से बनाया जा सकता](https://github.com/graphprotocol/graph-node) है, या indexers [प्रदत्त Docker Images](https://hub.docker.com/r/graphprotocol/graph-node) में से एक का उपयोग कर सकते हैं। ### पोस्टग्रेएसक्यूएल डेटाबेस -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. 
+ग्राफ नोड का मुख्य स्टोर, यहीं सबग्राफ डेटा, सबग्राफ के बारे में मेटाडेटा, और सबग्राफ-अज्ञेय नेटवर्क डेटा जैसे ब्लॉक कैश और eth_call कैश संग्रहीत होते हैं। ### नेटवर्क क्लाइंट किसी नेटवर्क को इंडेक्स करने के लिए, ग्राफ़ नोड को एथेरियम-संगत JSON-RPC के माध्यम से नेटवर्क क्लाइंट तक पहुंच की आवश्यकता होती है। यह आरपीसी एक एथेरियम क्लाइंट से जुड़ सकता है या यह एक अधिक जटिल सेटअप हो सकता है जो कई में संतुलन लोड करता है। -कुछ सबग्राफ को केवल एक पूर्ण नोड की आवश्यकता हो सकती है, लेकिन कुछ में indexing फीचर्स होते हैं, जिनके लिए अतिरिक्त RPC कार्यक्षमता की आवश्यकता होती है। विशेष रूप से, ऐसे सबग्राफ जो indexing के हिस्से के रूप में `eth_calls` करते हैं, उन्हें एक आर्काइव नोड की आवश्यकता होगी जो [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) को सपोर्ट करता हो। साथ ही, ऐसे सबग्राफ जिनमें `callHandlers` या `blockHandlers` के साथ एक `call` फ़िल्टर हो, उन्हें `trace_filter` सपोर्ट की आवश्यकता होती है ([trace module documentation यहां देखें](https://openethereum.github.io/JSONRPC-trace-module))। +कुछ सबग्राफ को केवल एक पूर्ण नोड की आवश्यकता हो सकती है, जबकि कुछ में अतिरिक्त RPC कार्यक्षमता की आवश्यकता होती है जो indexing सुविधाओं के लिए आवश्यक होती है। विशेष रूप से, ऐसे सबग्राफ जो indexing के हिस्से के रूप में `eth_calls` करते हैं, उन्हें एक आर्काइव नोड की आवश्यकता होगी जो [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) का समर्थन करता है, और ऐसे सबग्राफ जिनमें `callHandlers` या `blockHandlers` हैं जिनमें `call` फ़िल्टर है, उन्हें `trace_filter` समर्थन की आवश्यकता होती है ([यहाँ ट्रेस मॉड्यूल दस्तावेज़ देखें](https://openethereum.github.io/JSONRPC-trace-module)). 
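The archive-node requirement above comes down to graph-node pinning its `eth_calls` to specific historical blocks. As a rough illustration of the EIP-1898 block parameter involved, the sketch below builds such a JSON-RPC request; the contract address, calldata, and block hash are placeholders, not values from this document:

```python
import json

def eth_call_at_block(to: str, data: str, block_hash: str) -> dict:
    """Build a JSON-RPC `eth_call` request pinned to an exact block.

    EIP-1898 allows the block parameter to be an object carrying
    `blockHash` instead of a number or a tag like "latest", so the
    call cannot be silently re-evaluated against a different block
    after a reorg. An archive node is needed because the state at
    old blocks must still be available to answer the call.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [
            {"to": to, "data": data},
            # EIP-1898-style block parameter (an object, not a tag):
            {"blockHash": block_hash, "requireCanonical": True},
        ],
    }

# Placeholder values, for illustration only.
payload = eth_call_at_block(
    to="0x" + "11" * 20,          # hypothetical contract address
    data="0x18160ddd",            # totalSupply() function selector
    block_hash="0x" + "ab" * 32,  # hypothetical block hash
)
print(json.dumps(payload, indent=2))
```

A provider that only understands number/tag block parameters will reject the object form, which is why subgraphs making `eth_calls` need a node with EIP-1898 support.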
**नेटवर्क फायरहोज़** - फायरहोज़ एक gRPC सेवा है जो ब्लॉक्स का क्रमबद्ध, फिर भी फोर्क-अवेयर स्ट्रीम प्रदान करती है। इसे The Graph के कोर डेवलपर्स द्वारा बड़े पैमाने पर प्रभावी indexing का समर्थन करने के लिए विकसित किया गया है। यह वर्तमान में Indexer के लिए अनिवार्य नहीं है, लेकिन Indexers को इस तकनीक से परिचित होने के लिए प्रोत्साहित किया जाता है ताकि वे नेटवर्क के पूर्ण समर्थन के लिए तैयार रहें। फायरहोज़ के बारे में अधिक जानें [यहां](https://firehose.streamingfast.io/)। ### आईपीएफएस नोड्स -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +सबग्राफ तैनाती मेटाडेटा IPFS नेटवर्क पर संग्रहीत होता है। ग्राफ नोड मुख्य रूप से सबग्राफ तैनाती के दौरान IPFS नोड तक पहुँचता है ताकि सबग्राफ मैनिफेस्ट और सभी लिंक की गई फ़ाइलों को प्राप्त कर सके। नेटवर्क Indexer को अपने स्वयं के IPFS नोड की मेज़बानी करने की आवश्यकता नहीं है। नेटवर्क के लिए एक IPFS नोड यहाँ होस्ट किया गया है: https://ipfs.network.thegraph.com. ### प्रोमेथियस मेट्रिक्स सर्वर -To enable monitoring and reporting, Graph Node can optionally log metrics to a Prometheus metrics server. +Monitoring और reporting को enable करने के लिए, Graph Node वैकल्पिक रूप से metrics को Prometheus metrics server पर log कर सकता है। -### Getting started from source +### सोर्स से शुरू करना -#### Install prerequisites +#### आवश्यक पूर्वापेक्षाएँ स्थापित करें - **Rust** @@ -42,15 +42,15 @@ To enable monitoring and reporting, Graph Node can optionally log metrics to a P - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. 
+- **उबंटू उपयोगकर्ताओं के लिए अतिरिक्त आवश्यकताएँ** - उबंटू पर एक ग्राफ-नोड चलाने के लिए कुछ अतिरिक्त पैकेजों की आवश्यकता हो सकती है। ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### सेटअप -1. Start a PostgreSQL database server +1. PostgreSQL डेटाबेस सर्वर शुरू करें ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. [ग्राफ-नोड](https://github.com/graphprotocol/graph-node) रिपोजिटरी को क्लोन करें और स्रोत को बनाने के लिए `cargo build` कमांड चलाएँ। -3. Now that all the dependencies are setup, start the Graph Node: +3. अब जब सभी डिपेंडेंसीज़ सेटअप हो गई हैं, तो ग्राफ नोड शुरू करें: ```sh cargo run -p graph-node --release -- \ @@ -69,27 +69,27 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -### Getting started with Kubernetes +### Kubernetes के साथ शुरुआत करना Kubernetes का एक पूर्ण उदाहरण कॉन्फ़िगरेशन [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s) में पाया जा सकता है। ### Ports -When it is running Graph Node exposes the following ports: +जब Graph Node चल रहा होता है, तो यह निम्नलिखित पोर्ट्स को expose करता है: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| पोर्ट | उद्देश्य | रूट्स | आर्गुमेंट्स | पर्यावरण वेरिएबल्स | +| ----- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | ------------------ | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus मेट्रिक्स | /metrics | \--metrics-port | - | > **प्रमुख बात**: सार्वजनिक रूप से पोर्ट्स को एक्सपोज़ करने में सावधानी बरतें - \*\*प्रशासनिक पोर्ट्स को लॉक रखना चाहिए। इसमें ग्राफ नोड JSON-RPC एंडपॉइंट भी शामिल है। ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, ग्राफ-नोड को एकल Graph Node instance, एकल PostgreSQL database, एक IPFS node, और नेटवर्क क्लाइंट्स की आवश्यकता होती है, जैसा कि सबग्राफ द्वारा अनुक्रमण के लिए आवश्यक होता है। इस सेटअप को क्षैतिज रूप से स्केल किया जा सकता है, कई Graph नोड और उन Graph नोड को समर्थन देने के लिए कई डेटाबेस जोड़कर। उन्नत उपयोगकर्ता ग्राफ-नोड की कुछ क्षैतिज स्केलिंग क्षमताओं का लाभ उठाना चाह सकते हैं, साथ ही कुछ अधिक उन्नत कॉन्फ़िगरेशन विकल्पों का भी, `config.toml` फ़ाइल और ग्राफ-नोड के पर्यावरण वेरिएबल्स के माध्यम से। @@ -114,15 +114,15 @@ indexers = [ "<.. 
list of all indexing nodes ..>" ] #### Multiple Graph Nodes -ग्राफ-नोड indexing को क्षैतिज रूप से स्केल किया जा सकता है, कई ग्राफ-नोड instances चलाकर indexing और queries को विभिन्न नोड्स पर विभाजित किया जा सकता है। यह सरलता से किया जा सकता है, जब Graph नोड को एक अलग `node_id` के साथ शुरू किया जाता है (जैसे कि Docker Compose फ़ाइल में), जिसे फिर `config.toml` फ़ाइल में [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion) को निर्दिष्ट करने के लिए और [deployment rules](#deployment-rules) के साथ सबग्राफ को नोड्स के बीच विभाजित करने के लिए इस्तेमाल किया जा सकता है। +ग्राफ नोड indexing क्षैतिज रूप से स्केल कर सकता है, विभिन्न नोड्स पर indexing और क्वेरी को विभाजित करने के लिए ग्राफ नोड के कई उदाहरण चलाते हुए। यह सरलता से किया जा सकता है, जब ग्राफ नोड्स को स्टार्टअप पर विभिन्न `node_id` के साथ कॉन्फ़िगर किया जाता है (जैसे, डॉकर कंपोज़ फ़ाइल में), जिसे फिर `config.toml` फ़ाइल में [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), और [deployment rules](#deployment-rules) के साथ नोड्स के बीच सबग्राफ विभाजित करने के लिए उपयोग किया जा सकता है। > ध्यान दें कि एक ही डेटाबेस का उपयोग करने के लिए कई ग्राफ़ नोड्स को कॉन्फ़िगर किया जा सकता है, जिसे स्वयं शार्डिंग के माध्यम से क्षैतिज रूप से बढ़ाया जा सकता है। #### Deployment rules -यहां कई Graph नोड दिए गए हैं, इसलिए नए सबग्राफ की तैनाती का प्रबंधन करना आवश्यक है ताकि एक ही subgraph को दो विभिन्न नोड द्वारा इंडेक्स न किया जाए, क्योंकि इससे टकराव हो सकता है। यह deployment नियमों का उपयोग करके किया जा सकता है, जो यह भी निर्दिष्ट कर सकते हैं कि यदि डेटाबेस sharding का उपयोग किया जा रहा है, तो subgraph का डेटा किस `shard` में स्टोर किया जाना चाहिए। Deployment नियम subgraph के नाम और उस नेटवर्क पर मिलान कर सकते हैं जिसमें तैनाती indexing हो रही है, ताकि निर्णय लिया जा सके। +कई Graph नोड को देखते हुए, नए सबग्राफ की तैनाती का प्रबंधन करना आवश्यक है ताकि एक ही सबग्राफ दो अलग-अलग नोड्स द्वारा अनुक्रमित न किया जाए, जिससे टकराव हो सकता है। इसे तैनाती नियमों 
का उपयोग करके किया जा सकता है, जो यह भी निर्दिष्ट कर सकते हैं कि यदि डेटाबेस शार्डिंग का उपयोग किया जा रहा है, तो एक सबग्राफ के डेटा को किस `shard` में संग्रहीत किया जाना चाहिए। तैनाती नियम सबग्राफ के नाम और उस नेटवर्क पर मेल खा सकते हैं जिसमें तैनाती indexing हो रही है ताकि निर्णय लिया जा सके। -Example deployment rule configuration: +उदाहरण deployment rule configuration: ```toml [deployment] @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -154,30 +154,30 @@ indexers = [ #### Dedicated query nodes -Nodes can be configured to explicitly be query nodes by including the following in the configuration file: +Configuration file में निम्नलिखित को शामिल करके Nodes को स्पष्ट रूप से query nodes के रूप में configure किया जा सकता है: ```toml [general] query = "" ``` -Any node whose --node-id matches the regular expression will be set up to only respond to queries. +कोई भी node जिसका --node-id regular expression से मेल खाता है, केवल क्वेरी का जवाब देने के लिए सेट किया जाएगा। -#### Database scaling via sharding +#### Sharding के माध्यम से Database scaling -For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. 
+अधिकांश उपयोग मामलों में, एक एकल Postgres database एक graph-node instance को सपोर्ट करने के लिए पर्याप्त होता है। जब एक graph-node instance एकल Postgres database से आगे निकल जाता है, तो graph-node के data के भंडारण को कई Postgres databases में विभाजित करना संभव है। सभी databases मिलकर graph-node instance का store बनाते हैं। प्रत्येक अलग database को shard कहा जाता है। -Shard का उपयोग subgraph deployments को कई डेटाबेस में विभाजित करने के लिए किया जा सकता है, और प्रतिकृति का उपयोग करके query लोड को डेटाबेस में फैलाने के लिए भी किया जा सकता है। इसमें यह कॉन्फ़िगर करना शामिल है कि प्रत्येक डेटाबेस के लिए प्रत्येक `ग्राफ-नोड` को अपने कनेक्शन पूल में कितने उपलब्ध डेटाबेस कनेक्शन रखने चाहिए। जैसे-जैसे अधिक सबग्राफ को index किया जा रहा है, यह अधिक महत्वपूर्ण होता जा रहा है। +Shards का उपयोग कई डेटाबेस में सबग्राफ डिप्लॉयमेंट को विभाजित करने के लिए किया जा सकता है, और साथ ही प्रतिकृतियों (replicas) का उपयोग करके क्वेरी लोड को डेटाबेस में वितरित करने के लिए भी किया जा सकता है। इसमें प्रत्येक `graph-node` के लिए प्रत्येक डेटाबेस में कनेक्शन पूल में रखे जाने वाले उपलब्ध डेटाबेस कनेक्शनों की संख्या को कॉन्फ़िगर करना शामिल है, जो कि जैसे-जैसे अधिक सबग्राफ इंडेक्स किए जा रहे हैं, उतना ही महत्वपूर्ण हो जाता है। शेयरिंग तब उपयोगी हो जाती है जब आपका मौजूदा डेटाबेस ग्राफ़ नोड द्वारा डाले गए भार के साथ नहीं रह सकता है, और जब डेटाबेस का आकार बढ़ाना संभव नहीं होता है। -> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. 
+> यह सामान्यतः बेहतर होता है कि किसी एक डेटाबेस को जितना संभव हो उतना बड़ा बनाया जाए, इससे पहले कि shard शुरू की जाए। एक अपवाद यह है जब क्वेरी ट्रैफिक विभिन्न सबग्राफ के बीच बहुत असमान रूप से विभाजित होता है; ऐसे मामलों में, यदि उच्च-वॉल्यूम सबग्राफ को एक shard में रखा जाए और बाकी सब कुछ दूसरे shard में, तो यह काफी मदद कर सकता है क्योंकि इस सेटअप से यह संभावना बढ़ जाती है कि उच्च-वॉल्यूम सबग्राफ के लिए आवश्यक डेटा डेटाबेस-आंतरिक कैश में बना रहे और कम-वॉल्यूम सबग्राफ के कम आवश्यक डेटा द्वारा प्रतिस्थापित न हो। -In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. 
+कनेक्शन कॉन्फ़िगर करते समय, postgresql.conf में max_connections को 400 (या शायद 200) पर सेट करके शुरू करें और store_connection_wait_time_ms तथा store_connection_checkout_count Prometheus metrics देखें। उल्लेखनीय प्रतीक्षा समय (5ms से अधिक कुछ भी) इस बात का संकेत है कि बहुत कम connections उपलब्ध हैं; अधिक प्रतीक्षा समय database के बहुत व्यस्त होने (जैसे उच्च CPU लोड) के कारण भी हो सकता है। हालाँकि, यदि database अन्यथा स्थिर लगता है, तो अधिक प्रतीक्षा समय connections की संख्या बढ़ाने की आवश्यकता का संकेत देता है। configuration में, प्रत्येक graph-node instance कितने connections का उपयोग कर सकता है, यह एक ऊपरी सीमा है, और Graph Node connections को तब खुला नहीं रखेगा जब उनकी आवश्यकता न हो। [यहाँ](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases) स्टोर कॉन्फ़िगरेशन के बारे में और पढ़ें। -#### Dedicated block ingestion +#### समर्पित ब्लॉक अंतर्ग्रहण यदि कई नोड्स कॉन्फ़िगर किए गए हैं, तो यह आवश्यक होगा कि एक नोड निर्दिष्ट किया जाए जो नए ब्लॉक्स के इनजेशन के लिए जिम्मेदार हो, ताकि सभी कॉन्फ़िगर किए गए इंडेक्स नोड्स chain हेड को बार-बार पूछताछ न करें। इसे `chains` नेमस्पेस के हिस्से के रूप में किया जाता है, जहां ब्लॉक इनजेशन के लिए उपयोग किए जाने वाले `node_id` को निर्दिष्ट किया जाता है: @@ -186,13 +186,13 @@ In terms of configuring connections, start with max_connections in postgresql.co ingestor = "block_ingestor_node" ``` -#### Supporting multiple networks +#### कई नेटवर्क का समर्थन करना -The Graph Protocol उन नेटवर्क्स की संख्या बढ़ा रहा है जो indexing रिवार्ड्स के लिए सपोर्टेड हैं, और ऐसे कई सबग्राफ हैं जो अनसपोर्टेड नेटवर्क्स को indexing कर रहे हैं जिन्हें एक indexer प्रोसेस करना चाहेगा। `config.toml` फ़ाइल अभिव्यक्त और लचीली कॉन्फ़िगरेशन की अनुमति देती है: +The Graph Protocol उन नेटवर्क की संख्या बढ़ा रहा है जिन्हें Indexing पुरस्कारों के लिए समर्थित किया गया है, और ऐसे कई सबग्राफ मौजूद हैं जो असमर्थित नेटवर्क को indexing कर रहे हैं जिन्हें एक Indexer संसाधित करना चाहेगा। `config.toml` फ़ाइल अभिव्यंजक और लचीले 
विन्यास की अनुमति देती है: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). -- Additional provider details, such as features, authentication and the type of provider (for experimental Firehose support) +- Additional provider details, जैसे सुविधाएँ, authentication और provider का प्रकार (for experimental Firehose support) `[chains]` अनुभाग उन Ethereum प्रदाताओं को नियंत्रित करता है जिनसे ग्राफ-नोड कनेक्ट होता है और जहाँ प्रत्येक chain के लिए ब्लॉक और अन्य मेटाडेटा संग्रहीत होते हैं। निम्नलिखित उदाहरण दो chain, mainnet और kovan को कॉन्फ़िगर करता है, जहाँ mainnet के लिए ब्लॉक vip shard में संग्रहीत होते हैं और kovan के लिए ब्लॉक primary shard में संग्रहीत होते हैं। mainnet chain दो अलग-अलग प्रदाताओं का उपयोग कर सकती है, जबकि kovan के पास केवल एक प्रदाता है। @@ -225,11 +225,11 @@ provider = [ { label = "kovan", url = "http://..", features = [] } ] ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +एक चालू Graph Node (या कई Graph Nodes!) 
को चलाने के बाद, अगली चुनौती उन Graph Nodes पर तैनात किए गए सबग्राफ को प्रबंधित करने की होती है। ग्राफ-नोड विभिन्न टूल्स प्रदान करता है जो सबग्राफ के प्रबंधन में मदद करते हैं। #### लॉगिंग -ग्राफ-नोड के log डिबगिंग और ग्राफ-नोड और विशिष्ट सबग्राफ के ऑप्टिमाइजेशन के लिए उपयोगी जानकारी प्रदान कर सकते हैं। ग्राफ-नोड विभिन्न log स्तरों का समर्थन करता है via `GRAPH_LOG` पर्यावरण चर, जिनमें निम्नलिखित स्तर होते हैं: error, warn, info, debug या trace। +ग्राफ-नोड के लॉग्स ग्राफ-नोड और विशिष्ट सबग्राफ की डिबगिंग और ऑप्टिमाइज़ेशन के लिए उपयोगी जानकारी प्रदान कर सकते हैं। ग्राफ-नोड `GRAPH_LOG` एनवायरमेंट वेरिएबल के माध्यम से विभिन्न लॉग स्तरों का समर्थन करता है, जिनमें निम्नलिखित स्तर शामिल हैं: error, warn, info, debug या trace। GraphQL queries कैसे चल रही हैं, इस बारे में अधिक विवरण प्राप्त करने के लिए `GRAPH_LOG_QUERY_TIMING` को `gql` पर सेट करना उपयोगी हो सकता है (हालांकि इससे बड़ी मात्रा में लॉग उत्पन्न होंगे)। @@ -245,66 +245,66 @@ Indexer रिपॉजिटरी एक [example Grafana configuration](https The graphman कमांड आधिकारिक कंटेनरों में शामिल है, और आप अपने ग्राफ-नोड कंटेनर में docker exec कमांड का उपयोग करके इसे चला सकते हैं। इसके लिए एक `config.toml` फ़ाइल की आवश्यकता होती है। -`graphman` कमांड्स का पूरा दस्तावेज़ ग्राफ नोड रिपॉजिटरी में उपलब्ध है। ग्राफ नोड `/docs` में [/docs/graphman.md](https://github.com/graphprotocol/ग्राफ-नोड/blob/master/docs/graphman.md) देखें। +`graphman` कमांड्स का पूरा दस्तावेज़ ग्राफ नोड रिपॉजिटरी में उपलब्ध है। ग्राफ नोड `/docs` में [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) देखें। -### सबग्राफ के साथ काम करना +### Subgraph के साथ कार्य करना #### अनुक्रमण स्थिति एपीआई -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. 
+डिफ़ॉल्ट रूप से पोर्ट 8030/graphql पर उपलब्ध, indexing स्थिति API विभिन्न सबग्राफ के लिए indexing स्थिति की जाँच करने, proofs of indexing की जाँच करने, सबग्राफ सुविधाओं का निरीक्षण करने और अधिक के लिए कई तरीकों को उजागर करता है। पूर्ण स्कीमा [यहां](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) उपलब्ध है। #### Indexing performance -There are three separate parts of the indexing process: +Indexing process के तीन अलग-अलग भाग हैं: -- Fetching events of interest from the provider +- Provider से रुचि के events प्राप्त करना - उपयुक्त संचालकों के साथ घटनाओं को संसाधित करना (इसमें राज्य के लिए श्रृंखला को कॉल करना और स्टोर से डेटा प्राप्त करना शामिल हो सकता है) -- Writing the resulting data to the store +- Resulting data को store पर लिखना -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +ये चरण पाइपलाइन किए गए हैं (अर्थात वे समानांतर रूप से निष्पादित किए जा सकते हैं), लेकिन वे एक-दूसरे पर निर्भर हैं। जहाँ सबग्राफ को इंडेक्स करने में धीमापन होता है, वहाँ इसकी मूल वजह विशिष्ट सबग्राफ पर निर्भर करेगी। Common causes of indexing slowness: - Chain से प्रासंगिक आयोजन खोजने में लगने वाला समय (विशेष रूप से कॉल handler धीमे हो सकते हैं, क्योंकि ये `trace_filter` पर निर्भर करते हैं)। - Handler के हिस्से के रूप में बड़ी संख्या में `eth_calls` करना। -- A large amount of store interaction during execution -- A large amount of data to save to the store -- A large number of events to process -- Slow database connection time, for crowded nodes -- The provider itself falling behind the chain head -- Slowness in fetching new receipts at the chain head from the provider +- Execution के दौरान बड़ी मात्रा में store interaction +- Store में सहेजने के लिए बड़ी मात्रा में data +- Process करने के लिए बड़ी संख्या में events +- भीड़भाड़ वाले nodes के लिए धीमा database connection समय +- Provider itself
chain head के पीछे पड़ रहा है +- Provider से chain head पर नई receipt प्राप्त करने में Slowness -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +सबग्राफ Indexing मैट्रिक्स Indexing की धीमी गति के मूल कारण का निदान करने में मदद कर सकते हैं। कुछ मामलों में, समस्या स्वयं सबग्राफ में होती है, लेकिन अन्य मामलों में, बेहतर नेटवर्क प्रदाता, कम डेटाबेस प्रतिस्पर्धा और अन्य कॉन्फ़िगरेशन सुधार Indexing प्रदर्शन को उल्लेखनीय रूप से बेहतर बना सकते हैं। -#### विफल सबग्राफ +#### असफल Subgraph -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +Indexing के दौरान Subgraph असफल हो सकते हैं, यदि उन्हें अप्रत्याशित डेटा मिलता है, कोई घटक अपेक्षित रूप से कार्य नहीं कर रहा हो, या यदि event handlers या configuration में कोई बग हो। असफलता के दो सामान्य प्रकार हैं: -- Deterministic failures: these are failures which will not be resolved with retries +- Deterministic failures: ये ऐसी failures हैं जिन्हें retries से हल नहीं किया जा सकता है - गैर-नियतात्मक विफलताएँ: ये प्रदाता के साथ समस्याओं या कुछ अप्रत्याशित ग्राफ़ नोड त्रुटि के कारण हो सकती हैं। जब एक गैर-नियतात्मक विफलता होती है, तो ग्राफ़ नोड समय के साथ पीछे हटते हुए विफल हैंडलर को फिर से प्रयास करेगा। -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required.
+कुछ मामलों में, विफलता को Indexer द्वारा हल किया जा सकता है (उदाहरण के लिए, यदि त्रुटि सही प्रकार के provider की अनुपस्थिति के कारण है, तो आवश्यक provider जोड़ने से Indexing जारी रह सकती है)। हालाँकि, अन्य मामलों में, सबग्राफ कोड में परिवर्तन आवश्यक होता है। -> निश्चितात्मक विफलताएँ "अंतिम" मानी जाती हैं, जिनके लिए विफल ब्लॉक के लिए एक Proof of Indexing उत्पन्न किया जाता है, जबकि अनिर्णायक विफलताएँ नहीं होतीं, क्योंकि Subgraph "अविफल" हो सकता है और indexing जारी रख सकता है। कुछ मामलों में, अनिर्णायक लेबल गलत होता है, और Subgraph कभी भी त्रुटि को पार नहीं कर पाएगा; ऐसी विफलताओं को ग्राफ नोड रिपॉजिटरी पर मुद्दों के रूप में रिपोर्ट किया जाना चाहिए। +> निर्धारित विफलताओं को "अंतिम" माना जाता है, जिसमें असफल ब्लॉक के लिए एक Proof of Indexing उत्पन्न किया जाता है, जबकि अनिर्धारित विफलताओं को ऐसा नहीं माना जाता है, क्योंकि सबग्राफ संभवतः "असफल" होने से उबरकर पुनः Indexing जारी रख सकता है। कुछ मामलों में, अनिर्धारित लेबल गलत होता है, और सबग्राफ कभी भी इस त्रुटि को पार नहीं कर पाता; ऐसी विफलताओं को ग्राफ नोड रिपॉज़िटरी पर समस्याओं के रूप में रिपोर्ट किया जाना चाहिए। #### कैश को ब्लॉक और कॉल करें -ग्राफ-नोड कुछ डेटा को स्टोर में कैश करता है ताकि प्रोवाइडर से फिर से प्राप्त करने की आवश्यकता न हो। ब्लॉक्स को कैश किया जाता है, साथ ही `eth_calls` के परिणाम (जो कि एक विशिष्ट ब्लॉक से कैश किए जाते हैं)। यह कैशिंग "थोड़े बदले हुए subgraph" के दौरान indexing की गति को नाटकीय रूप से बढ़ा सकती है। +ग्राफ-नोड कुछ डेटा को स्टोर में कैश करता है ताकि प्रोवाइडर से पुनः प्राप्त करने से बचा जा सके। ब्लॉक्स को कैश किया जाता है, साथ ही `eth_calls` के परिणाम भी (जो एक विशिष्ट ब्लॉक के सापेक्ष कैश किए जाते हैं)। यह कैशिंग "resyncing" के दौरान थोड़ा बदले हुए सबग्राफ की indexing स्पीड को नाटकीय रूप से बढ़ा सकती है। -यदि कभी Ethereum नोड ने किसी समय अवधि के लिए गलत डेटा प्रदान किया है, तो वह कैश में जा सकता है, जिसके परिणामस्वरूप गलत डेटा या विफल सबग्राफ हो सकते हैं। इस स्थिति में, Indexer `graphman` का उपयोग करके ज़हरीले कैश को हटा सकते हैं, और फिर प्रभावित सबग्राफ को रीवाइंड कर सकते हैं, जो फिर
(आशा है) स्वस्थ प्रदाता से ताज़ा डेटा प्राप्त करेंगे। +हालांकि, कुछ मामलों में, यदि कोई Ethereum नोड कुछ समय के लिए गलत डेटा प्रदान करता है, तो वह कैश में आ सकता है, जिससे गलत डेटा या असफल सबग्राफ हो सकते हैं। इस स्थिति में, Indexers `graphman` का उपयोग करके दूषित कैश को साफ कर सकते हैं और फिर प्रभावित सबग्राफ को पुनः पीछे ले जा सकते हैं, जिससे वे (उम्मीद है) स्वस्थ प्रदाता से नया डेटा प्राप्त कर सकें। -If a block cache inconsistency is suspected, such as a tx receipt missing event: +यदि block cache में किसी असंगति का संदेह हो, जैसे किसी tx receipt में कोई event गायब होना: 1. `graphman chain list` का उपयोग करके chain का नाम पता करें। 2. `graphman chain check-blocks by-number ` यह जांच करेगा कि क्या कैश किया हुआ ब्लॉक प्रदाता से मेल खाता है, और यदि यह मेल नहीं खाता है तो ब्लॉक को कैश से हटा देगा। 1. यदि कोई अंतर है, तो पूरे कैश को `graphman chain truncate ` के साथ हटाना अधिक सुरक्षित हो सकता है। 2. यदि ब्लॉक प्रदाता से मेल खाता है, तो समस्या को सीधे प्रदाता के विरुद्ध डिबग किया जा सकता है। -#### Querying issues and errors +#### Query से जुड़ी समस्याएँ और त्रुटियाँ -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
+एक बार जब सबग्राफ को इंडेक्स कर लिया जाता है, तो Indexers इससे जुड़े समर्पित क्वेरी एंडपॉइंट के माध्यम से क्वेरी प्रदान करने की उम्मीद कर सकते हैं। यदि Indexer महत्वपूर्ण मात्रा में क्वेरी सर्व करना चाहता है, तो एक समर्पित क्वेरी नोड की सिफारिश की जाती है, और बहुत अधिक क्वेरी वॉल्यूम के मामले में, Indexers को प्रतिकृति shard कॉन्फ़िगर करने पर विचार करना चाहिए ताकि क्वेरीज़ Indexing प्रक्रिया को प्रभावित न करें। हालाँकि, एक समर्पित क्वेरी नोड और प्रतिकृतियों के साथ भी, कुछ प्रश्नों को निष्पादित करने में लंबा समय लग सकता है, और कुछ मामलों में मेमोरी उपयोग में वृद्धि होती है और अन्य उपयोगकर्ताओं के लिए क्वेरी समय को नकारात्मक रूप से प्रभावित करती है। @@ -316,17 +316,17 @@ Once a subgraph has been indexed, indexers can expect to serve queries via the s ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +समस्याग्रस्त क्वेरीज़ अक्सर दो तरीकों से सामने आती हैं। कुछ मामलों में, उपयोगकर्ता स्वयं रिपोर्ट करते हैं कि कोई विशेष क्वेरी धीमी है। ऐसे में चुनौती यह होती है कि धीमेपन के कारण का निदान किया जाए - यह पता लगाया जाए कि यह कोई सामान्य समस्या है या किसी विशेष सबग्राफ या क्वेरी से संबंधित है। और फिर, यदि संभव हो, तो इसे हल किया जाए। अन्य मामलों में, क्वेरी नोड पर ट्रिगर उच्च मेमोरी उपयोग हो सकता है, इस मामले में सबसे पहले समस्या उत्पन्न करने वाली क्वेरी की पहचान करना चुनौती है। Indexers [qlog](https://github.com/graphprotocol/qlog/) का उपयोग करके ग्राफ-नोड के query logs को प्रोसेस और सारांशित कर सकते हैं। धीमे queries की पहचान और डिबग करने में मदद के लिए `GRAPH_LOG_QUERY_TIMING` को भी सक्षम किया जा सकता है। -Given a slow query, indexers have a few options. Of course they can alter their cost model, to significantly increase the cost of sending the problematic query. 
This may result in a reduction in the frequency of that query. However this often doesn't resolve the root cause of the issue. +धीमी query मिलने पर, Indexer के पास कुछ विकल्प होते हैं। वे अपना cost model बदलकर समस्याग्रस्त query भेजने की लागत काफी बढ़ा सकते हैं। इसके परिणामस्वरूप उस query की frequency में कमी आ सकती है। हालाँकि, इससे अक्सर समस्या का मूल कारण हल नहीं होता। ##### Account-like optimisation -Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like' where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions) +Entities को store करने वाली database tables आम तौर पर दो प्रकार की होती हैं: 'transaction-like', जहाँ entities एक बार बनने के बाद कभी update नहीं होतीं, यानी वे financial transactions की सूची जैसा कुछ store करती हैं, और 'account-like', जहाँ entities बहुत बार update होती हैं, यानी वे financial accounts जैसा कुछ store करती हैं जो हर बार transaction दर्ज होने पर बदलते हैं। Account-like tables की विशेषता यह है कि उनमें entity versions की संख्या बहुत बड़ी होती है, लेकिन अलग-अलग entities अपेक्षाकृत कम होती हैं। अक्सर, ऐसी तालिकाओं में अलग-अलग entities की संख्या पंक्तियों (entity versions) की कुल संख्या का 1% होती है अकाउंट-जैसी तालिकाओं के लिए, `ग्राफ-नोड` ऐसे queries जनरेट कर सकता है जो इस विवरण का लाभ उठाते हैं कि Postgres इतनी तेज़ दर पर डेटा स्टोर करते समय इसे कैसे प्रबंधित करता है। खासतौर पर, हाल के ब्लॉक्स के सभी संस्करण ऐसी तालिका के कुल स्टोरेज के एक छोटे से हिस्से में होते हैं। @@ -336,10 +336,10 @@ Database tables that store entities seem to generally come in two varieties: 'tr एक बार जब यह तय कर लिया जाता है कि एक तालिका खाता जैसी है, तो `graphman stats account-like .
` चलाने से उस तालिका के खिलाफ queries के लिए खाता जैसी अनुकूलन सक्षम हो जाएगा। इस अनुकूलन को फिर से बंद किया जा सकता है `graphman stats account-like --clear .
` के साथ। queries नोड्स को यह नोटिस करने में 5 मिनट तक का समय लग सकता है कि अनुकूलन को चालू या बंद किया गया है। अनुकूलन को चालू करने के बाद, यह सत्यापित करना आवश्यक है कि बदलाव वास्तव में उस तालिका के लिए queries को धीमा नहीं कर रहा है। यदि आपने Grafana को Postgres की निगरानी के लिए कॉन्फ़िगर किया है, तो धीमी queries `pg_stat_activity` में बड़ी संख्या में दिखाई देंगी, जो कई सेकंड ले रही हैं। ऐसे में, अनुकूलन को फिर से बंद करने की आवश्यकता होती है। -Uniswap- जैसे सबग्राफ़ के लिए, `pair` और `token` तालिकाएँ इस अनुकूलन के प्रमुख उम्मीदवार हैं, और ये डेटाबेस लोड पर नाटकीय प्रभाव डाल सकते हैं। +Uniswap जैसी Subgraphs के लिए, `pair` और `token` टेबल इस ऑप्टिमाइज़ेशन के लिए प्रमुख उम्मीदवार हैं, और डेटाबेस लोड पर इसका नाटकीय प्रभाव पड़ सकता है। #### सबग्राफ हटाना -> This is new functionality, which will be available in Graph Node 0.29.x +> यह नई functionality है, जो Graph Node 0.29.x में उपलब्ध होगी -किसी बिंदु पर एक indexer एक दिए गए subgraph को हटाना चाहता है। इसे आसानी से `graphman drop` के माध्यम से किया जा सकता है, जो एक deployment और उसके सभी indexed डेटा को हटा देता है। डिप्लॉयमेंट को subgraph नाम, एक IPFS हैश `Qm..`, या डेटाबेस नामस्थान `sgdNNN` के रूप में निर्दिष्ट किया जा सकता है। आगे की दस्तावेज़ीकरण यहां उपलब्ध है [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop)। +किसी समय, एक Indexer किसी दिए गए Subgraph को हटाना चाह सकता है। यह आसानी से `graphman drop` के माध्यम से किया जा सकता है, जो एक deployment और उसके सभी indexed डेटा को हटा देता है। Deployment को या तो सबग्राफ नाम, एक IPFS हैश `Qm..`, या डेटाबेस namespace `sgdNNN` के रूप में निर्दिष्ट किया जा सकता है। आगे का दस्तावेज़ [यहाँ](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) उपलब्ध है। diff --git a/website/src/pages/hi/indexing/tooling/graphcast.mdx index 216fc0a502c5..f4978a7b800d 100644 --- a/website/src/pages/hi/indexing/tooling/graphcast.mdx +++
b/website/src/pages/hi/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ title: Graphcast ग्राफकास्ट एसडीके (सॉफ्टवेयर डेवलपमेंट किट) डेवलपर्स को रेडियो बनाने की अनुमति देता है, जो गपशप-संचालित अनुप्रयोग हैं जो इंडेक्सर्स किसी दिए गए उद्देश्य को पूरा करने के लिए चला सकते हैं। हम निम्नलिखित उपयोग के मामलों के लिए कुछ रेडियो बनाने का भी इरादा रखते हैं (या अन्य डेवलपर्स/टीमों को सहायता प्रदान करते हैं जो रेडियो बनाना चाहते हैं): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- अन्य इंडेक्सर्स से ताना सिंकिंग सबग्राफ, सबस्ट्रीम और फायरहोज डेटा के लिए नीलामी और समन्वय आयोजित करना। -- सक्रिय क्वेरी एनालिटिक्स पर स्व-रिपोर्टिंग, जिसमें सबग्राफ अनुरोध मात्रा, शुल्क मात्रा आदि शामिल हैं। -- इंडेक्सिंग एनालिटिक्स पर सेल्फ-रिपोर्टिंग, जिसमें सबग्राफ इंडेक्सिंग टाइम, हैंडलर गैस कॉस्ट, इंडेक्सिंग एरर, आदि शामिल हैं। +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- ग्राफ-नोड संस्करण, पोस्टग्रेज संस्करण, एथेरियम क्लाइंट संस्करण, आदि सहित स्टैक जानकारी पर स्व-रिपोर्टिंग। ### और अधिक जानें diff --git a/website/src/pages/hi/resources/_meta-titles.json index f5971e95a8f6..dc887c723101 100644 --- a/website/src/pages/hi/resources/_meta-titles.json +++ b/website/src/pages/hi/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "अतिरिक्त भूमिकाएँ", + "migration-guides": "माइग्रेशन मार्गदर्शक" } diff --git a/website/src/pages/hi/resources/benefits.mdx index cb043820d821..d1c1a85c537d 100644 --- a/website/src/pages/hi/resources/benefits.mdx +++ b/website/src/pages/hi/resources/benefits.mdx @@ -14,70 +14,70 @@ socialImage: https://thegraph.com/docs/img/seo/benefits.jpg - Significantly lower monthly costs - $0 इंफ्रास्ट्रक्चर सेटअप लागत - सुपीरियर अपटाइम -- Access to hundreds of independent Indexers around the world +- विश्वभर में सैकड़ों स्वतंत्र Indexers तक पहुँच - वैश्विक समुदाय द्वारा 24/7 तकनीकी सहायता ## लाभ समझाया ### कम और अधिक लचीला लागत संरचना -No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. -Query costs may vary; the quoted cost is the average at time of publication (March 2024).
- -## Low Volume User (less than 100,000 queries per month) - -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $0+ | $0 per month | -| इंजीनियरिंग का समय | $ 400 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (Free Plan) | -| लागत प्रति क्वेरी | $0 | $0 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | -| भौगोलिक अतिरेक | $750+ प्रति अतिरिक्त नोड | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $750+ | $0 | - -## Medium Volume User (~3M queries per month) - -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $ 500 प्रति माह | $120 per month | -| इंजीनियरिंग का समय | $800 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~3,000,000 | -| लागत प्रति क्वेरी | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | -| इंजीनियरिंग खर्च | $ 200 प्रति घंटा | शामिल | -| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $1,650+ | $120 | - -## High Volume User (~30M queries per month) - -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $1100 प्रति माह, प्रति नोड | $0 | -| पूछताछ लागत | $4000 | $1,200 per month | -| आवश्यक नोड्स की संख्या | 10 | Not applicable | -| इंजीनियरिंग का समय | $6,000 or more per month | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~30,000,000 | -| लागत प्रति क्वेरी | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेन्द्रीकृत | -| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $11,000+ | $1,200 | +**कोई अनुबंध 
नहीं। कोई मासिक शुल्क नहीं। केवल उपयोग की गई **queries** के लिए भुगतान करें—औसत लागत $40 प्रति मिलियन **queries** (~$0.00004 प्रति **query**)। **Queries** की कीमत **USD** में होती है और भुगतान **GRT** या **क्रेडिट कार्ड** से किया जा सकता है।** + +Query लागत भिन्न हो सकती है; उद्धृत लागत प्रकाशन के समय (मार्च 2024) की औसत है। + +## कम वॉल्यूम उपयोगकर्ता (100,000 queries प्रति माह से कम) + +| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | +| :----------------------------: | :-------------------------------------: | :--------------------------------------------------------------------: | +| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | +| पूछताछ लागत | $0+ | $0 प्रति माह | +| इंजीनियरिंग का समय | $ 400 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (नि: शुल्क योजना) | +| लागत प्रति क्वेरी | $0 | $0 | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | +| भौगोलिक अतिरेक | $750+ प्रति अतिरिक्त नोड | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $750+ | $0 | + +## मध्यम वॉल्यूम उपयोगकर्ता (~3 मिलियन queries प्रति माह) + +| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | +| :----------------------------: | :----------------------------------------: | :--------------------------------------------------------------------: | +| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | +| पूछताछ लागत | $ 500 प्रति माह | $120 प्रति माह | +| इंजीनियरिंग का समय | $800 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~3,000,000 | +| लागत प्रति क्वेरी | $0 | $0.00004 | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | +| इंजीनियरिंग खर्च | $ 200 प्रति घंटा | शामिल | +| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $1,650+ | $120 | + +## उच्च वॉल्यूम उपयोगकर्ता (~30 मिलियन queries प्रति माह) + +| लागत तुलना | स्वयं होस्ट किया 
गया | The Graph Network | +| :----------------------------: | :-----------------------------------------: | :--------------------------------------------------------------------: | +| मासिक सर्वर लागत\* | $1100 प्रति माह, प्रति नोड | $0 | +| पूछताछ लागत | $4000 | $1,200 प्रति माह | +| आवश्यक नोड्स की संख्या | 10 | Not applicable | +| इंजीनियरिंग का समय | $6,000 or more per month | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~30,000,000 | +| लागत प्रति क्वेरी | $0 | $0.00004 | +| इंफ्रास्ट्रक्चर | केंद्रीकृत | विकेन्द्रीकृत | +| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $11,000+ | $1,200 | \*बैकअप की लागत सहित: $50-$100 प्रति माह इंजीनियरिंग समय $200 प्रति घंटे की धारणा पर आधारित है -Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. + डेटा उपभोक्ता के लिए लागत को दर्शाता है। निःशुल्क योजना की queries के लिए query fees अभी भी indexers को भुगतान की जाती है। -एस्टिमेटेड लागत केवल Ethereum Mainnet सबग्राफ़ के लिए है — अन्य नेटवर्कों पर `ग्राफ-नोड` को स्वयं होस्ट करने पर लागत और भी अधिक होती है। कुछ उपयोगकर्ताओं को अपने Subgraph को एक नई संस्करण में अपडेट करने की आवश्यकता हो सकती है। Ethereum गैस शुल्क के कारण, एक अपडेट की लागत लगभग ~$50 है जब लेख लिखा गया था। ध्यान दें कि [Arbitrum](/archived/arbitrum/arbitrum-faq/) पर गैस शुल्क Ethereum mainnet से काफी कम हैं। +Ethereum मेननेट सबग्राफ के लिए अनुमानित लागतें ही दी गई हैं — अन्य नेटवर्क पर `graph-node` को स्वयं होस्ट करने पर लागतें और भी अधिक होती हैं। कुछ उपयोगकर्ताओं को अपने सबग्राफ को नए संस्करण में अपडेट करने की आवश्यकता हो सकती है। Ethereum गैस शुल्क के कारण, एक अपडेट की लागत लेखन के समय लगभग $50 होती है। ध्यान दें कि [Arbitrum](/archived/arbitrum/arbitrum-faq/) पर गैस शुल्क Ethereum मेननेट की तुलना में काफी कम है। -एक सबग्राफ पर क्यूरेटिंग सिग्नल एक वैकल्पिक वन-टाइम, नेट-जीरो कॉस्ट है (उदाहरण के लिए, सिग्नल में $1k को 
सबग्राफ पर क्यूरेट किया जा सकता है, और बाद में वापस ले लिया जाता है - प्रक्रिया में रिटर्न अर्जित करने की क्षमता के साथ)। +किसी Subgraph पर signal क्यूरेट करना एक वैकल्पिक, एक-बार की, शुद्ध-शून्य लागत वाली प्रक्रिया है (उदाहरण के लिए, $1k का सिग्नल एक सबग्राफ पर क्यूरेट किया जा सकता है, और बाद में वापस लिया जा सकता है, जिसमें संभावित रूप से लाभ अर्जित करने का अवसर हो सकता है)। ## कोई सेटअप लागत नहीं और अधिक परिचालन दक्षता @@ -89,4 +89,4 @@ The Graph का विकेन्द्रीकृत नेटवर्क The Graph Network कम खर्चीला, उपयोग में आसान और बेहतर परिणाम प्रदान करता है, जब की graph-node को लोकल पर चलाने के मुकाबले। -आज ही The Graph Network का उपयोग शुरू करें, और सीखें कि कैसे [अपने subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित](/subgraphs/quick-start/) करें। +The Graph Network का उपयोग आज ही शुरू करें, और जानें कि अपने Subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर कैसे [प्रकाशित करें](/subgraphs/quick-start/)। diff --git a/website/src/pages/hi/resources/glossary.mdx index d7c1fd85df2b..0fbc4aadd2e3 100644 --- a/website/src/pages/hi/resources/glossary.mdx +++ b/website/src/pages/hi/resources/glossary.mdx @@ -2,82 +2,86 @@ title: शब्दकोष --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: डेटा को अनुक्रमण और क्वेरी करने के लिए एक विकेंद्रीकृत प्रोटोकॉल। -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: डेटा के लिए अनुरोध। The Graph के संदर्भ में, query एक Subgraph से डेटा अनुरोधित करने की प्रक्रिया है, जिसका उत्तर एक Indexer द्वारा दिया जाता है। -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: API के लिए एक query language और मौजूदा डेटा से उन queries को पूरा करने के लिए एक runtime। The Graph, Subgraphs से query करने के लिए GraphQL का उपयोग करता है। -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: एक URL जिसका उपयोग किसी Subgraph से query करने के लिए किया जाता है। Subgraph Studio के परीक्षण endpoint का प्रारूप है: `https://api.studio.thegraph.com/query///` Graph Explorer का endpoint है: `https://gateway.thegraph.com/api//subgraphs/id/` Graph Explorer endpoint का उपयोग The Graph के विकेंद्रीकृत नेटवर्क पर Subgraphs से query करने के लिए किया जाता है। -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: एक ओपन API जो ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करके संग्रहीत करता है ताकि उसे आसानी से GraphQL के माध्यम से query किया जा सके। डेवलपर्स The Graph Network पर Subgraphs बना सकते हैं, डिप्लॉय कर सकते हैं और प्रकाशित कर सकते हैं। एक बार indexing पूरी होने के बाद, कोई भी इस Subgraph को query कर सकता है। -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: नेटवर्क प्रतिभागी जो ब्लॉकचेन से डेटा को अनुक्रमित करने के लिए अनुक्रमण नोड्स चलाते हैं और GraphQL क्वेरीज़ को सर्व करते हैं। -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Indexer राजस्व स्रोत**: Indexer को GRT में दो घटकों के साथ पुरस्कृत किया जाता है: क्वेरी शुल्क रिबेट्स और indexing रिवार्ड्स। - 1.
**Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: नेटवर्क पर queries को संसाधित करने के लिए Subgraph उपभोक्ताओं द्वारा किया गया भुगतान। - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: Subgraphs को index करने के बदले Indexers को मिलने वाले इनाम। Indexing rewards हर साल 3% GRT के नए जारीकरण से उत्पन्न होते हैं। -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Indexer's Self-Stake**: GRT की वह राशि जो Indexers विकेन्द्रीकृत नेटवर्क में भाग लेने के लिए स्टेक करते हैं। न्यूनतम 100,000 GRT है, और इसकी कोई ऊपरी सीमा नहीं है। -- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Delegation Capacity**: GRT की वह अधिकतम मात्रा जो एक Indexer, Delegators से स्वीकार कर सकता है। Indexers केवल अपनी Indexer Self-Stake की 16 गुना तक ही स्वीकार कर सकते हैं, और अतिरिक्त delegation से पुरस्कारों में कमी आती है। उदाहरण के लिए, यदि किसी Indexer की Self-Stake 1M GRT है, तो उनकी Delegation Capacity 16M होगी। हालांकि, Indexers अपनी Self-Stake बढ़ाकर अपनी Delegation Capacity बढ़ा सकते हैं। -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade Indexer**: एक ऐसा Indexer जो उन Subgraph queries के लिए बैकअप के रूप में कार्य करता है जिन्हें नेटवर्क पर अन्य Indexers द्वारा संसाधित नहीं किया जाता। Upgrade Indexer अन्य Indexers के साथ प्रतिस्पर्धा नहीं करता। -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: वे नेटवर्क प्रतिभागी जो GRT रखते हैं और इसे Indexers को delegate करते हैं। इससे Indexers को नेटवर्क पर Subgraphs में अपना stake बढ़ाने में मदद मिलती है। बदले में, Delegators को उन Indexing Rewards का एक हिस्सा मिलता है, जो Indexers को Subgraphs प्रोसेस करने के लिए प्राप्त होते हैं। -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. +- **Delegation Tax**: जब Delegators अपने GRT को Indexers को डेलीगेट करते हैं, तो उन्हें 0.5% शुल्क देना पड़ता है। इस शुल्क के भुगतान के लिए उपयोग किया गया GRT नष्ट (burn) कर दिया जाता है। -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: वे नेटवर्क प्रतिभागी जो उच्च-गुणवत्ता वाले Subgraphs की पहचान करते हैं और उन पर GRT **signal** करके curation shares प्राप्त करते हैं। जब Indexers किसी Subgraph पर query fees का दावा करते हैं, तो उसका 10% उस Subgraph के Curators को वितरित किया जाता है। GRT **signal** की गई राशि और किसी Subgraph को index करने वाले Indexers की संख्या के बीच एक सकारात्मक संबंध होता है। -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Curation Tax**: Curators द्वारा Subgraphs पर GRT **signal** करने पर दिया जाने वाला 1% शुल्क। इस शुल्क के रूप में उपयोग किया गया GRT **burn** कर दिया जाता है। -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: कोई भी एप्लिकेशन या उपयोगकर्ता जो किसी Subgraph से query करता है। -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: वह डेवलपर जो The Graph के विकेंद्रीकृत नेटवर्क पर एक Subgraph बनाता और डिप्लॉय करता है। -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: एक YAML फ़ाइल जो Subgraph की GraphQL **schema**, **data sources**, और अन्य **metadata** को वर्णित करती है। [यहां](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) एक उदाहरण दिया गया है। -- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Epoch**: नेटवर्क के भीतर समय की एक इकाई। वर्तमान में, एक epoch 6,646 ब्लॉक्स या लगभग 1 दिन के बराबर होता है। -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: एक Indexer अपने कुल GRT **stake** (जिसमें Delegators का stake भी शामिल है) को उन Subgraphs की ओर आवंटित कर सकता है, जो The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित किए गए हैं। Allocations के विभिन्न **status** हो सकते हैं: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph.
Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: जब कोई allocation ऑनचेन बनाई जाती है, तो उसे **active** माना जाता है। इसे **allocation खोलना** कहा जाता है और यह नेटवर्क को संकेत देता है कि Indexer किसी विशेष Subgraph को सक्रिय रूप से **index** कर रहा है और **queries** को संसाधित कर रहा है। Active allocations, Subgraph पर दिए गए **signal** और आवंटित किए गए **GRT** की मात्रा के अनुपात में **indexing rewards** अर्जित करते हैं। - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: कोई Indexer किसी दिए गए Subgraph पर अर्जित **indexing rewards** का दावा करने के लिए हालिया और मान्य **Proof of Indexing (POI)** जमा कर सकता है। इसे **allocation बंद करना** कहा जाता है। - किसी allocation को बंद करने से पहले, इसे कम से कम **एक epoch** तक खुला रहना आवश्यक है। + - अधिकतम allocation अवधि **28 epochs** होती है। + - यदि कोई Indexer 28 epochs से अधिक समय तक allocation को खुला रखता है, तो इसे **stale allocation** कहा जाता है। + - **Closed** स्थिति में भी, कोई **Fisherman** विवाद खोल सकता है और झूठे डेटा परोसने के लिए Indexer को चुनौती दे सकता है। -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: Subgraphs को बनाने, डिप्लॉय करने और प्रकाशित करने के लिए एक शक्तिशाली dapp। -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. 
When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: The Graph Network में एक भूमिका होती है जिसे वे प्रतिभागी निभाते हैं जो Indexers द्वारा प्रदान किए गए डेटा की सटीकता और अखंडता की निगरानी करते हैं। जब कोई मछुआरा किसी क्वेरी प्रतिक्रिया या POI को गलत मानता है, तो वह Indexer के खिलाफ विवाद शुरू कर सकता है। यदि विवाद मछुआरे के पक्ष में जाता है, तो Indexer के स्वयं के स्टेक का 2.5% काट लिया जाता है। इस राशि का 50% मछुआरे को उनकी सतर्कता के पुरस्कार के रूप में दिया जाता है, और शेष 50% को नष्ट (बर्न) कर दिया जाता है। यह तंत्र मछुआरों को नेटवर्क की विश्वसनीयता बनाए रखने में मदद करने के लिए प्रोत्साहित करने हेतु डिज़ाइन किया गया है, जिससे यह सुनिश्चित किया जा सके कि Indexers अपने द्वारा प्रदान किए गए डेटा के लिए जवाबदेह रहें। -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Arbitrators**: Arbitrators नेटवर्क प्रतिभागी होते हैं जिन्हें एक गवर्नेंस प्रक्रिया के माध्यम से नियुक्त किया जाता है। Arbitrator की भूमिका indexing और query विवादों के परिणाम का निर्णय लेना होती है। उनका लक्ष्य The Graph Network की उपयोगिता और विश्वसनीयता को अधिकतम करना होता है। -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data.
The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Slashing**: Indexers अपने self-staked GRT का slashing झेल सकते हैं यदि वे गलत POI प्रदान करते हैं या गलत डेटा सर्व करते हैं। Slashing प्रतिशत एक protocol parameter है, जो वर्तमान में एक Indexer के self-stake का 2.5% निर्धारित है। Slashed किए गए GRT का 50% उस Fisherman को जाता है जिसने गलत डेटा या गलत POI को विवादित किया था। बाकी 50% को जला दिया जाता है। -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: Subgraphs को **index** करने के बदले Indexers को मिलने वाले इनाम, जो **GRT** में वितरित किए जाते हैं। -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Delegation Rewards**: वे रिवॉर्ड्स जो Delegators को Indexers को GRT डेलीगेट करने के बदले मिलते हैं। ये रिवॉर्ड्स **GRT** में वितरित किए जाते हैं। -- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: The Graph का कार्य उपयोगिता टोकन। GRT नेटवर्क में योगदान देने वाले सहभागियों के लिए आर्थिक प्रोत्साहन प्रदान करता है। -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: जब कोई Indexer अपनी **allocation बंद** करता है और किसी विशेष Subgraph पर अर्जित **indexing rewards** का दावा करना चाहता है, तो उसे एक **वैध और हालिया POI** प्रदान करना आवश्यक होता है। - **Fishermen** Indexer द्वारा प्रस्तुत POI पर विवाद कर सकते हैं। + - यदि विवाद Fisherman के पक्ष में हल होता है, तो संबंधित Indexer का **slashing** किया जाता है। -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: वह **component** जो Subgraphs को **index** करता है और उत्पन्न डेटा को **GraphQL API** के माध्यम से query करने के लिए उपलब्ध कराता है। यह Indexer **stack** का एक केंद्रीय भाग है, और Graph Node का सही संचालन एक सफल Indexer चलाने के लिए आवश्यक है। -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer Agent**: Indexer **stack** का एक हिस्सा, जो ऑनचेन इंटरैक्शन को सुविधाजनक बनाता है। इसमें नेटवर्क पर **पंजीकरण**, अपने **Graph Node(s)** पर Subgraph **deployments** का प्रबंधन, और **allocations** को संभालना शामिल है। -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. +- **The Graph Client**: एक लाइब्रेरी जो विकेंद्रीकृत तरीके से GraphQL-आधारित dapps बनाने के लिए उपयोग होती है। -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: नेटवर्क प्रतिभागियों के लिए एक **dapp**, जो उन्हें Subgraphs को एक्सप्लोर करने और प्रोटोकॉल के साथ इंटरैक्ट करने की सुविधा देता है। -- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
+- **Graph CLI**: The Graph पर निर्माण और परिनियोजन के लिए एक कमांड लाइन इंटरफ़ेस टूल। -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Cooldown Period**: वह शेष समय जिसके बाद अपनी delegation पैरामीटर बदल चुका Indexer उन्हें दोबारा बदल सकता है। -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: स्मार्ट कॉन्ट्रैक्ट और UI, जो नेटवर्क प्रतिभागियों को Ethereum **mainnet** से **Arbitrum One** पर नेटवर्क-संबंधित संपत्तियों को ट्रांसफर करने में सक्षम बनाते हैं। प्रतिभागी **delegated GRT**, **Subgraphs**, **curation shares**, और Indexer का **self-stake** ट्रांसफर कर सकते हैं। -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: किसी Subgraph के **manifest**, **schema**, या **mappings** में अपडेट करके उसका नया संस्करण जारी करने की प्रक्रिया। -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: किसी Subgraph के पुराने संस्करण से नए संस्करण में **curation shares** को स्थानांतरित करने की प्रक्रिया (जैसे v0.0.1 से v0.0.2 में अपडेट होने पर)। diff --git a/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx index e7e4b62b509e..4973d9bba1f9 100644 --- a/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/hi/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,19 +2,20 @@ title: असेंबलीस्क्रिप्ट माइग्रेशन गाइड --- -अब तक, सबग्राफ [AssemblyScript के शुरुआती संस्करणों](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) में से एक का उपयोग कर रहे थे (v0.6)। अंततः हमने सबसे [नए उपलब्ध संस्करण](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) के लिए समर्थन जोड़ दिया है! 🎉 +अब तक, सबग्राफ ने [AssemblyScript के शुरुआती संस्करणों](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) में से एक (v0.6) का उपयोग किया है। आखिरकार, हमने [नवीनतम उपलब्ध संस्करण](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) के लिए समर्थन जोड़ दिया है!
🎉 -यह सबग्राफ डेवलपर्स को एएस भाषा और मानक पुस्तकालय की नई सुविधाओं का उपयोग करने में सक्षम करेगा। +यह सबग्राफ डेवलपर्स को AS भाषा और स्टैंडर्ड लाइब्रेरी की नई विशेषताओं का उपयोग करने में सक्षम बनाएगा। यह मार्गदर्शक उन सभी लोगों के लिए लागू है जो `graph-cli`/`graph-ts` का संस्करण `0.22.0` से कम उपयोग कर रहे हैं। यदि आप पहले से ही इस संस्करण (या उससे उच्च) पर हैं, तो आप पहले से ही AssemblyScript के संस्करण `0.19.10` का उपयोग कर रहे हैं 🙂 -> `0.24.0` संस्करण से, `graph-node` दोनों संस्करणों का समर्थन कर सकता है, यह इस पर निर्भर करता है कि subgraph manifest में कौन सा `apiVersion` निर्दिष्ट किया गया है। +> ध्यान दें: `0.24.0` संस्करण से, `graph-node` दोनों संस्करणों का समर्थन कर सकता है, जो सबग्राफ मैनिफेस्ट में निर्दिष्ट `apiVersion` द्वारा निर्धारित होता है। ## विशेषताएँ ### नई कार्यक्षमता -- `TypedArray` को अब `ArrayBuffer` से बनाया जा सकता है[ नए `wrap` static method ](https://www.assemblyscript.org/stdlib/typedarray.html#static-members)का उपयोग करके ([v0.8.1] (https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1))। +- `TypedArray` को अब `ArrayBuffer` से [नए `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) का उपयोग करके बनाया जा सकता है ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1))। - नई मानक लाइब्रेरी फ़ंक्शन: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare` और `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - GenericClass के लिए x instanceof समर्थन जोड़ा गया ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) - `StaticArray` जो एक अधिक कुशल array प्रकार है, जोड़ा गया ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) @@ -44,7 +45,7 @@ title: असेंबलीस्क्रिप्ट माइग्रेश ## कैसे करें अपग्रेड? -1. अपने मानचित्रण `सबग्राफ.yaml` में `apiVersion` को `0.0.6` में बदलें: +1.
अपनी `subgraph.yaml` फ़ाइल में `apiVersion` को `0.0.9` में बदलें: ```yaml ... @@ -52,7 +53,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +107,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -यदि आप अनिश्चित हैं कि किसे चुनना है, तो हम हमेशा सुरक्षित संस्करण का उपयोग करने की सलाह देते हैं। यदि मान मौजूद नहीं है, तो आप अपने सबग्राफ हैंडलर में वापसी के साथ एक शुरुआती if स्टेटमेंट करना चाहते हैं। +अगर आपको यकीन नहीं है कि किसे चुनना है, तो हम हमेशा सुरक्षित संस्करण का उपयोग करने की सलाह देते हैं। यदि मान मौजूद नहीं है, तो आप अपने सबग्राफ के handler में एक प्रारंभिक if स्टेटमेंट के साथ return का उपयोग कर सकते हैं। ### Variable Shadowing @@ -132,7 +133,7 @@ in assembly/index.ts(4,3) ### Null Comparisons -अपने सबग्राफ पर अपग्रेड करने से, कभी-कभी आपको इस तरह की त्रुटियाँ मिल सकती हैं: +अपने सबग्राफ को अपग्रेड करने पर, कभी-कभी आपको ऐसे त्रुटियाँ मिल सकती हैं: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. 
@@ -221,8 +222,8 @@ changetype(uint8Array) // काम करता है :) class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} -let bytes = new Bytes(2) -changetype(bytes) // काम करता है :) +let bytes = new Bytes(2); +changetype(bytes); // काम करता है :) ``` यदि आप केवल nullability को हटाना चाहते हैं, तो आप `as` ऑपरेटर (या `variable`) का उपयोग जारी रख सकते हैं, लेकिन यह सुनिश्चित करें कि आपको पता है कि वह मान null नहीं हो सकता है, अन्यथा यह टूट जाएगा। @@ -329,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -हमने इसके लिए असेंबलीस्क्रिप्ट कंपाइलर पर एक मुद्दा खोला है, लेकिन अभी के लिए यदि आप अपने सबग्राफ मैपिंग में इस तरह के ऑपरेशन करते हैं, तो आपको इससे पहले एक अशक्त जांच करने के लिए उन्हें बदलना चाहिए। +हमने इस मुद्दे को AssemblyScript compiler पर खोला है, लेकिन अभी के लिए, यदि आप अपनी सबग्राफ mappings में इस प्रकार के संचालन कर रहे हैं, तो आपको इसके पहले एक null जांच करनी चाहिए। ```typescript let wrapper = new Wrapper(y) @@ -351,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -यह संकलित होगा लेकिन रनटाइम पर टूट जाएगा, ऐसा इसलिए होता है क्योंकि मान प्रारंभ नहीं किया गया है, इसलिए सुनिश्चित करें कि आपके सबग्राफ ने उनके मानों को प्रारंभ किया है, जैसे: +यह संकलित हो जाएगा लेकिन रनटाइम पर टूट जाएगा, क्योंकि मान प्रारंभ नहीं किया गया है। इसलिए सुनिश्चित करें कि आपका सबग्राफ ने अपने मानों को प्रारंभ किया है, इस प्रकार: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx index 71a47e6e2ac3..2285c96d1497 100644 --- a/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/hi/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: ग्राफक्यूएल सत्यापन माइग्रेशन गाइड +title: GraphQL Validations Migration Guide --- 
जल्द ही `ग्राफ़-नोड` [ग्राफ़क्यूएल सत्यापन विनिर्देश](https://spec.graphql.org/June2018/#sec-Validation) के 100% कवरेज का समर्थन करेगा। @@ -20,7 +20,7 @@ title: ग्राफक्यूएल सत्यापन माइग् आप अपने ग्राफक्यूएल संचालन में किसी भी समस्या का पता लगाने और उन्हें ठीक करने के लिए सीएलआई माइग्रेशन टूल का उपयोग कर सकते हैं। वैकल्पिक रूप से आप `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` समापन बिंदु का उपयोग करने के लिए अपने ग्राफ़िकल क्लाइंट के समापन बिंदु को अपडेट कर सकते हैं। इस समापन बिंदु के विरुद्ध अपने प्रश्नों का परीक्षण करने से आपको अपने प्रश्नों में समस्याओं का पता लगाने में मदद मिलेगी। -> अगर आप [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) या [GraphQL Code Generator](https://the-guild.dev) का इस्तेमाल कर रहे हैं, तो सभी सबग्राफ को माइग्रेट करने की ज़रूरत नहीं है /graphql/codegen), वे पहले से ही सुनिश्चित करते हैं कि आपके प्रश्न मान्य हैं। +> Not all Subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## माइग्रेशन सीएलआई टूल
The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators, The Graph की विकेंद्रीकृत अर्थव्यवस्था में महत्वपूर्ण भूमिका निभाते हैं। वे वेब3 इकोसिस्टम के अपने ज्ञान का उपयोग यह मूल्यांकन करने और संकेत देने के लिए करते हैं कि किन सबग्राफ को The Graph Network द्वारा अनुक्रमित किया जाना चाहिए। Graph Explorer के माध्यम से, Curators नेटवर्क डेटा को देखकर संकेत देने के निर्णय लेते हैं। बदले में, The Graph Network उन Curators को पुरस्कृत करता है जो उच्च गुणवत्ता वाले सबग्राफ पर संकेत देते हैं, उन्हें उन सबग्राफ द्वारा उत्पन्न क्वेरी शुल्क का एक हिस्सा प्राप्त होता है। Indexers के लिए यह तय करने में कि किन सबग्राफ को अनुक्रमित किया जाए, GRT संकेतित की गई राशि एक प्रमुख विचार है। -## What Does Signaling Mean for The Graph Network? +## The Graph Network के लिए signal देने का क्या अर्थ है? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. 
+इससे पहले कि उपभोक्ता किसी सबग्राफ पर क्वेरी कर सकें, उसे इंडेक्स किया जाना आवश्यक है। यही वह जगह है जहाँ क्यूरेशन काम आता है। ताकि Indexers उच्च गुणवत्ता वाले सबग्राफ पर पर्याप्त क्वेरी शुल्क कमा सकें, उन्हें यह जानने की जरूरत होती है कि किन सबग्राफ को इंडेक्स करना चाहिए। जब Curators किसी सबग्राफ पर संकेत देते हैं, तो यह Indexers को सूचित करता है कि कोई सबग्राफ मांग में है और इतनी उच्च गुणवत्ता का है कि उसे इंडेक्स किया जाना चाहिए। -Curators The Graph network को कुशल बनाते हैं और [संकेत देना](#how-to-signal) वह प्रक्रिया है जिसका उपयोग Curators यह बताने के लिए करते हैं कि कौन सा subgraph Indexer के लिए अच्छा है। Indexers Curator से आने वाले संकेत पर भरोसा कर सकते हैं क्योंकि संकेत देना के दौरान, Curators subgraph के लिए एक curation share मिंट करते हैं, जो उन्हें उस subgraph द्वारा उत्पन्न भविष्य के पूछताछ शुल्क के एक हिस्से का हकदार बनाता है। +Curators The Graph नेटवर्क को कुशल बनाते हैं और [signaling](#how-to-signal) वह प्रक्रिया है जिसका उपयोग Curators यह संकेत देने के लिए करते हैं कि किसी सबग्राफ को Indexers द्वारा इंडेक्स किया जाना चाहिए। Indexers Curators के संकेत पर भरोसा कर सकते हैं क्योंकि signaling करते समय, Curators सबग्राफ के लिए एक curation शेयर मिंट करते हैं, जिससे उन्हें उस सबग्राफ द्वारा उत्पन्न भविष्य की क्वेरी फीस का एक हिस्सा प्राप्त करने का अधिकार मिलता है। -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator संकेतों को ERC20 टोकन के रूप में प्रस्तुत किया जाता है, जिन्हें Graph Curation Shares (GCS) कहा जाता है। जो अधिक query शुल्क अर्जित करना चाहते हैं, उन्हें अपने GRT को उन सबग्राफ पर संकेतित करना चाहिए, जिनके बारे में वे भविष्यवाणी करते हैं कि वे नेटवर्क में शुल्क के प्रवाह को मजबूत बनाएंगे। Curators को गलत व्यवहार के लिए दंडित नहीं किया जा सकता, लेकिन नेटवर्क की अखंडता को नुकसान पहुंचाने वाले गलत निर्णय लेने से हतोत्साहित करने के लिए उन पर एक जमा कर (deposit tax) लगाया जाता है। यदि वे कम-गुणवत्ता वाले सबग्राफ पर curation करते हैं, तो वे कम query शुल्क अर्जित करेंगे क्योंकि या तो कम queries को प्रोसेस किया जाएगा या फिर उन्हें प्रोसेस करने के लिए कम Indexers उपलब्ध होंगे। -[Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) यह सुनिश्चित करता है कि सभी सबग्राफ को index किया जाए। किसी विशेष subgraph पर GRT को संकेत करने से अधिक indexers उस पर आकर्षित होते हैं। curation के माध्यम से अतिरिक्त Indexers को प्रोत्साहित करना queries की सेवा की गुणवत्ता को बढ़ाने के लिए है, जिससे latency कम हो और नेटवर्क उपलब्धता में सुधार हो। +[Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) यह सुनिश्चित करता है कि सभी सबग्राफ को Indexing मिले, जिससे किसी विशेष सबग्राफ पर GRT को संकेत देने से अधिक Indexers आकर्षित होंगे। इस क्यूरेशन के माध्यम से अतिरिक्त Indexers को प्रोत्साहित करना क्वेरीज़ की सेवा की गुणवत्ता को बढ़ाने का लक्ष्य रखता है, जिससे विलंबता (latency) कम हो और नेटवर्क उपलब्धता (availability) बेहतर हो। -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. 
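The curation deposit-tax mechanics described above can be sketched with simple arithmetic. This is a hypothetical illustration only: the `signalGRT` helper and the GRT amounts are invented for the example, and actual curation-share pricing is determined by the protocol, not modeled here.

```typescript
// Hypothetical sketch of the 1% curation deposit tax described in the text.
// Only the burn/split arithmetic is shown; share minting is protocol logic.
const CURATION_TAX = 0.01; // 1% of deposited GRT is burned on signaling

function signalGRT(deposit: number): { burned: number; signaled: number } {
  const burned = deposit * CURATION_TAX;
  return { burned, signaled: deposit - burned };
}

const result = signalGRT(10_000);
console.log(result.burned);   // 100 GRT burned
console.log(result.signaled); // 9900 GRT added as signal
```

The same helper also makes the auto-migrate trade-off concrete: each fresh signal pays the full 1%, while the text notes that auto-migrated shares pay a smaller 0.5% tax per version migration.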
-यदि आपको सेवा की गुणवत्ता बढ़ाने के लिए curation में सहायता की आवश्यकता हो, तो कृपया एज और नोड टीम को support@thegraph.zendesk.com पर अनुरोध भेजें और उन सबग्राफ को निर्दिष्ट करें जिनमें आपको सहायता चाहिए। +यदि आपको सेवा की गुणवत्ता बढ़ाने के लिए क्यूरेशन में सहायता की आवश्यकता हो, तो कृपया Edge & नोड टीम को support@thegraph.zendesk.com पर एक अनुरोध भेजें और निर्दिष्ट करें कि किन सबग्राफ के लिए आपको सहायता चाहिए। -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers ग्राफ एक्सप्लोरर में उन्हें दिखाई देने वाले क्यूरेशन सिग्नल के आधार पर सबग्राफ को इंडेक्स करने के लिए खोज सकते हैं (स्क्रीनशॉट नीचे दिया गया है)। -![Explorer सबग्राफ](/img/explorer-subgraphs.png) +Subgraph Studio आपको अपने सबग्राफ़ में सिग्नल जोड़ने की सुविधा देता है, जिसमें आप अपने सबग्राफ़ के क्यूरेशन पूल में उसी लेन-देन के साथ GRT जोड़ सकते हैं, जब इसे प्रकाशित किया जाता है. ## सिग्नल कैसे करें -Graph Explorer के Curator टैब में, curators नेटवर्क स्टैट्स के आधार पर कुछ सबग्राफ पर signal और unsignal कर सकेंगे। Graph Explorer में यह कैसे करना है, इसका चरण-दर-चरण अवलोकन पाने के लिए [यहाँ क्लिक करें](/subgraphs/explorer/)। +Graph Explorer में Curator टैब के भीतर, क्यूरेटर कुछ नेटवर्क आंकड़ों के आधार पर कुछ सबग्राफ पर सिग्नल और अनसिग्नल कर सकेंगे। Graph Explorer में इसे चरण-दर-चरण कैसे किया जाए, इसके लिए [यहाँ](/subgraphs/explorer/) क्लिक करें. 
-एक क्यूरेटर एक विशिष्ट सबग्राफ संस्करण पर संकेत देना चुन सकता है, या वे अपने सिग्नल को स्वचालित रूप से उस सबग्राफ के नवीनतम उत्पादन निर्माण में माइग्रेट करना चुन सकते हैं। दोनों मान्य रणनीतियाँ हैं और अपने स्वयं के पेशेवरों और विपक्षों के साथ आती हैं। +एक Curator किसी विशिष्ट सबग्राफ संस्करण पर संकेत देने का चयन कर सकता है, या वे अपने संकेत को स्वचालित रूप से उस सबग्राफ के नवीनतम उत्पादन निर्माण में माइग्रेट करने के लिए चुन सकते हैं। दोनों वैध रणनीतियाँ हैं और इनके अपने फायदे और नुकसान हैं। -विशेष संस्करण पर संकेत देना विशेष रूप से उपयोगी होता है जब एक subgraph को कई dapp द्वारा उपयोग किया जाता है। एक dapp को नियमित रूप से subgraph को नई विशेषता के साथ अपडेट करने की आवश्यकता हो सकती है। दूसरी dapp एक पुराना, अच्छी तरह से परीक्षण किया हुआ उपग्राफ subgraph संस्करण उपयोग करना पसंद कर सकती है। प्रारंभिक क्यूरेशन curation पर, 1% मानक कर tax लिया जाता है। +Signaling किसी विशिष्ट संस्करण पर विशेष रूप से उपयोगी होता है जब एक सबग्राफ को कई dapps द्वारा उपयोग किया जाता है। एक dapp को नियमित रूप से नए फीचर्स के साथ सबग्राफ को अपडेट करने की आवश्यकता हो सकती है। वहीं, दूसरा dapp एक पुराने, अच्छी तरह से परीक्षण किए गए सबग्राफ संस्करण का उपयोग करना पसंद कर सकता है। प्रारंभिक curation के दौरान, 1% का मानक कर लिया जाता है। अपने सिग्नल को स्वचालित रूप से नवीनतम उत्पादन बिल्ड में माइग्रेट करना यह सुनिश्चित करने के लिए मूल्यवान हो सकता है कि आप क्वेरी शुल्क अर्जित करते रहें। हर बार जब आप क्यूरेट करते हैं, तो 1% क्यूरेशन टैक्स लगता है। आप हर माइग्रेशन पर 0.5% क्यूरेशन टैक्स भी देंगे। सबग्राफ डेवलपर्स को बार-बार नए संस्करण प्रकाशित करने से हतोत्साहित किया जाता है - उन्हें सभी ऑटो-माइग्रेटेड क्यूरेशन शेयरों पर 0.5% क्यूरेशन टैक्स देना पड़ता है। -> **नोट**पहला पता जो किसी विशेष subgraph को सिग्नल करता है, उसे पहला curator माना जाएगा और उसे बाकी आने वाले curators की तुलना में अधिक गैस-इंटेंसिव कार्य करना होगा क्योंकि पहला curator curation share टोकन को इनिशियलाइज़ करता है और टोकन को The Graph प्रॉक्सी में ट्रांसफर करता है। +> **नोट**: पहले किसी विशेष सबग्राफ को संकेत देने वाला पता पहले
क्यूरेटर के रूप में माना जाता है और उसे बाकी क्यूरेटरों की तुलना में अधिक गैस-गहन कार्य करना होगा क्योंकि पहला क्यूरेटर क्यूरेशन शेयर टोकनों को प्रारंभ करता है और साथ ही The Graph प्रॉक्सी में टोकन स्थानांतरित करता है। ## Withdrawing your GRT -Curators have the option to withdraw their signaled GRT at any time. +Curators के पास किसी भी समय अपना signaled GRT वापस लेने का option होता है। -Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +Delegating की प्रक्रिया के विपरीत, यदि आप अपना signaled GRT वापस लेने का निर्णय लेते हैं तो आपको cooldown period की प्रतीक्षा नहीं करनी होगी और entire amount प्राप्त होगी (minus the 1% curation tax)। -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator अपनी signal को वापस ले लेता है, Indexers यह चुन सकते हैं कि वे सबग्राफ को Indexing करते रहें, भले ही वर्तमान में कोई सक्रिय GRT signal न हो। -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +हालाँकि, यह सिफारिश की जाती है कि Curators अपने संकेतित GRT को उसी स्थान पर छोड़ दें, न केवल क्वेरी शुल्क का एक हिस्सा प्राप्त करने के लिए, बल्कि सबग्राफ की विश्वसनीयता और अपटाइम सुनिश्चित करने के लिए भी। ## जोखिम 1. क्वेरी बाजार द ग्राफ में स्वाभाविक रूप से युवा है और इसमें जोखिम है कि नवजात बाजार की गतिशीलता के कारण आपका %APY आपकी अपेक्षा से कम हो सकता है। -2. क्यूरेशन शुल्क - जब कोई क्यूरेटर किसी सबग्राफ़ पर GRT सिग्नल करता है, तो उसे 1% क्यूरेशन टैक्स देना होता है। यह शुल्क जला दिया जाता है। -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. 
This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. बग के कारण सबग्राफ विफल हो सकता है। एक विफल सबग्राफ क्वेरी शुल्क अर्जित नहीं करता है। नतीजतन, आपको तब तक इंतजार करना होगा जब तक कि डेवलपर बग को ठीक नहीं करता है और एक नया संस्करण तैनात करता है। - - यदि आपने सबग्राफ के नवीनतम संस्करण की सदस्यता ली है, तो आपके शेयर उस नए संस्करण में स्वत: माइग्रेट हो जाएंगे। इस पर 0.5% क्यूरेशन टैक्स लगेगा। - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - जब कोई curator किसी सबग्राफ पर GRT को signal करता है, तो उसे 1% curation tax देना होता है। यह शुल्क जला दिया जाता है। +3. (Ethereum only) जब क्यूरेटर अपने शेयरों को जलाकर GRT निकालते हैं, तो बचे हुए शेयरों का GRT मूल्यांकन कम हो जाएगा। ध्यान दें कि कुछ मामलों में, क्यूरेटर अपने शेयरों को एक ही बार में जलाने का निर्णय ले सकते हैं। यह स्थिति आम हो सकती है यदि कोई dapp डेवलपर अपने सबग्राफ का संस्करण अपडेट करना/सुधारना और क्वेरी करना बंद कर देता है या यदि कोई सबग्राफ विफल हो जाता है। परिणामस्वरूप, शेष क्यूरेटर केवल अपने प्रारंभिक GRT का एक अंश ही निकालने में सक्षम हो सकते हैं। कम जोखिम प्रोफ़ाइल वाले नेटवर्क भूमिका के लिए, देखें [Delegators](/resources/roles/delegating/delegating/)। +4.
एक सबग्राफ किसी बग के कारण फेल हो सकता है। एक फेल हुआ सबग्राफ क्वेरी शुल्क प्राप्त नहीं करता है। इसके परिणामस्वरूप, आपको तब तक इंतजार करना होगा जब तक डेवलपर बग को ठीक नहीं करता और एक नया संस्करण डिप्लॉय नहीं करता। + - यदि आप किसी सबग्राफ के नवीनतम संस्करण की सदस्यता लिए हुए हैं, तो आपके शेयर स्वचालित रूप से उस नए संस्करण में स्थानांतरित हो जाएंगे। इसके लिए 0.5% क्यूरेशन टैक्स लिया जाएगा। + - यदि आपने किसी विशिष्ट सबग्राफ संस्करण पर संकेत दिया है और वह विफल हो जाता है, तो आपको मैन्युअल रूप से अपने क्यूरेशन शेयर जलाने होंगे। इसके बाद, आप नए सबग्राफ संस्करण पर संकेत दे सकते हैं, जिससे आपको 1% क्यूरेशन कर देना होगा। ## अवधि पूछे जाने वाले प्रश्न ### 1. क्यूरेटर क्वेरी फीस का कितना % कमाते हैं? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +सबग्राफ पर संकेत देने से, आप उन सभी क्वेरी शुल्कों में से एक हिस्सा अर्जित करेंगे जो सबग्राफ उत्पन्न करता है। सभी क्वेरी शुल्कों का 10% Curators को उनके curation shares के अनुसार प्रो-राटा आधार पर जाता है। यह 10% शासन के अधीन है। -### 2. मैं यह कैसे तय करूं कि कौन से सबग्राफ सिग्नल देने के लिए उच्च गुणवत्ता वाले हैं? +### 2. मुझे यह कैसे तय करना चाहिए कि कौन से सबग्राफ उच्च गुणवत्ता वाले हैं जिन पर संकेत देना है? 
-उच्च-गुणवत्ता वाले सबग्राफ खोजना एक जटिल कार्य है, लेकिन इसे कई अलग-अलग तरीकों से किया जा सकता है। एक Curator के रूप में, आपको उन भरोसेमंद सबग्राफ को देखना चाहिए जो query volume को बढ़ा रहे हैं। एक भरोसेमंद subgraph मूल्यवान हो सकता है यदि वह पूर्ण, सटीक हो और किसी dapp की डेटा आवश्यकताओं को पूरा करता हो। एक खराब डिज़ाइन किया गया subgraph संशोधित या पुनः प्रकाशित करने की आवश्यकता हो सकती है और अंततः असफल भी हो सकता है। यह Curators के लिए अत्यंत महत्वपूर्ण है कि वे किसी subgraph की संरचना या कोड की समीक्षा करें ताकि यह आकलन कर सकें कि subgraph मूल्यवान है या नहीं। +उच्च-गुणवत्ता वाले सबग्राफ खोजना एक जटिल कार्य है, लेकिन इसे कई अलग-अलग तरीकों से अपनाया जा सकता है। एक Curator के रूप में, आपको उन भरोसेमंद सबग्राफ की तलाश करनी चाहिए जो query volume को बढ़ा रहे हैं। एक भरोसेमंद सबग्राफ मूल्यवान हो सकता है यदि यह पूर्ण, सटीक हो और किसी dapp की डेटा आवश्यकताओं का समर्थन करता हो। एक खराब तरीके से डिज़ाइन किया गया सबग्राफ संशोधित या पुनः प्रकाशित करने की आवश्यकता हो सकती है, और यह विफल भी हो सकता है। यह महत्वपूर्ण है कि Curators किसी Subgraph की संरचना या कोड की समीक्षा करें ताकि यह आकलन किया जा सके कि कोई सबग्राफ मूल्यवान है या नहीं। -- क्यूरेटर नेटवर्क की अपनी समझ का उपयोग करके यह अनुमान लगाने की कोशिश कर सकते हैं कि भविष्य में कोई विशेष सबग्राफ़ अधिक या कम क्वेरी वॉल्यूम कैसे उत्पन्न कर सकता है। -- क्यूरेटर को Graph Explorer के माध्यम से उपलब्ध मेट्रिक्स को भी समझना चाहिए। जैसे कि पिछले क्वेरी वॉल्यूम और सबग्राफ़ डेवलपर कौन है, ये मेट्रिक्स यह तय करने में मदद कर सकते हैं कि किसी सबग्राफ़ पर सिग्नलिंग करना उचित है या नहीं। +- Curators अपने नेटवर्क की समझ का उपयोग करके यह भविष्यवाणी करने का प्रयास कर सकते हैं कि भविष्य में किसी व्यक्तिगत सबग्राफ में क्वेरी वॉल्यूम अधिक या कम कैसे हो सकता है। +- Curators को यह भी समझना चाहिए कि Graph Explorer के माध्यम से उपलब्ध मीट्रिक्स क्या हैं। पिछले क्वेरी वॉल्यूम और कौन सबग्राफ डेवलपर है जैसे मीट्रिक्स यह निर्धारित करने में मदद कर सकते हैं कि किसी सबग्राफ पर संकेत देना उचित है या नहीं। -### 3.
What’s the cost of updating a subgraph? +### 3. किसी सबग्राफ को अपडेट करने की लागत क्या है? -नए subgraph संस्करण में अपनी curation shares को माइग्रेट करने पर 1% curation टैक्स लगता है। Curators नए subgraph संस्करण को सब्सक्राइब करने का विकल्प चुन सकते हैं। जब curator shares अपने आप नए संस्करण में माइग्रेट होती हैं, तो Curators को आधा curation टैक्स, यानी 0.5%, देना पड़ता है क्योंकि सबग्राफ को अपग्रेड करना एक ऑनचेन क्रिया है जो गैस खर्च करती है। +Migrating your curation shares to a new सबग्राफ version पर 1% का curation tax लगता है। Curators नए संस्करण की सदस्यता लेने का विकल्प चुन सकते हैं। जब curator shares स्वतः नए संस्करण में माइग्रेट होते हैं, तो Curators को आधा curation tax, यानी 0.5% देना होगा, क्योंकि सबग्राफ को अपग्रेड करना एक ऑनचेन प्रक्रिया है, जिसमें गैस शुल्क लगता है। -### 4. How often can I update my subgraph? +### 4. मैं अपना सबग्राफ कितनी बार अपडेट कर सकता हूँ? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +ऐसा सुझाव दिया जाता है कि आप अपने सबग्राफ को बहुत बार अपडेट न करें। अधिक जानकारी के लिए ऊपर दिए गए प्रश्न को देखें। ### 5. क्या मैं अपने क्यूरेशन शेयर बेच सकता हूँ? क्यूरेशन शेयरों को अन्य ERC20 टोकनों की तरह "खरीदा" या "बेचा" नहीं जा सकता, जिन्हें आप जानते होंगे। इन्हें केवल मिंट (निर्मित) या बर्न (नष्ट) किया जा सकता है। -As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +Arbitrum पर Curator के रूप में, आपको शुरू में जमा किया गया GRT (minus the tax) वापस मिलने की guarantee है। ### 6. Am I eligible for a curation grant? -Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. +Curation grants case-by-case आधार पर individual रूप से निर्धारित किया जाता है। यदि आपको curation में सहायता की आवश्यकता है, तो कृपया support@thegraph.zendesk.com पर एक request भेजें। अभी भी उलझन में? 
नीचे हमारे क्यूरेशन वीडियो गाइड देखें: diff --git a/website/src/pages/hi/resources/roles/delegating/delegating.mdx b/website/src/pages/hi/resources/roles/delegating/delegating.mdx index 398581e518b8..697f4d1a1db5 100644 --- a/website/src/pages/hi/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/hi/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Delegating --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +तुरंत डेलीगेट करना शुरू करने के लिए, यहाँ देखें [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one)। -## अवलोकन +## Overview -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Delegator, Indexer को GRT डेलीगेट करके GRT अर्जित करते हैं, जिससे नेटवर्क की सुरक्षा और कार्यक्षमता में मदद मिलती है। -## Benefits of Delegating +## Delegation के लाभ -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Indexers का समर्थन करके नेटवर्क की सुरक्षा और विस्तार क्षमता को मजबूत करें। +- Indexer द्वारा उत्पन्न इनामों के एक हिस्से को कमाएं। -## How Does Delegation Work? +## delegation कैसे काम करता है? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Delegators उन Indexer(s) से GRT पुरस्कार अर्जित करते हैं जिनको वे अपना GRT डेलिगेट करने के लिए चुनते हैं। -An Indexer's ability to process queries and earn rewards depends on three key factors: +किसी Indexer की क्वेरी प्रोसेस करने और पुरस्कार अर्जित करने की क्षमता तीन मुख्य कारकों पर निर्भर करती है: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Indexer की स्वयं की स्टेक (Indexer द्वारा स्टेक किया गया GRT)। +2. Delegator द्वारा उन्हें कुल GRT डेलीगेट किया गया। +3.
क्वेरी के लिए Indexer द्वारा निर्धारित कीमत। -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +जितना अधिक GRT किसी Indexer को स्टेक और डेलीगेट किया जाता है, उतनी ही अधिक क्वेरीज़ वे सर्व कर सकते हैं, जिससे Delegator और Indexer दोनों के लिए अधिक संभावित रिवॉर्ड मिल सकते हैं। -### What is Delegation Capacity? +### Delegation Capacity क्या है? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +delegation क्षमता उस अधिकतम GRT को दर्शाती है जिसे एक Indexer अपने Self-Stake के आधार पर Delegators से स्वीकार कर सकता है। -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network में 16 का एक delegation अनुपात शामिल है, जिसका अर्थ है कि एक Indexer अपनी स्वयं की स्टेक का 16 गुना तक डेलीगेट किए गए GRT को स्वीकार कर सकता है। -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +उदाहरण के लिए, यदि किसी Indexer के पास 1M GRT का Self-Stake है, तो उनकी Delegation Capacity 16M होगी। -### Why Does Delegation Capacity Matter? +### delegation क्षमता क्यों मायने रखती है? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +यदि कोई Indexer अपनी Delegation Capacity से अधिक हो जाता है, तो सभी Delegators के लिए इनाम कम हो जाता है क्योंकि अतिरिक्त सौंपे गए GRT को प्रोटोकॉल के भीतर प्रभावी रूप से उपयोग नहीं किया जा सकता। -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. 
+इससे Delegators के लिए यह अत्यंत महत्वपूर्ण हो जाता है कि वे किसी Indexer का चयन करने से पहले उसकी वर्तमान Delegation Capacity का मूल्यांकन करें। -Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Indexers अपनी Self-Stake बढ़ाकर अपनी Delegation Capacity बढ़ा सकते हैं, जिससे delegated tokens की सीमा बढ़ जाती है। -## Delegation on The Graph +## The Graph पर delegation -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> यह मार्गदर्शिका उन चरणों को शामिल नहीं करती है जैसे कि MetaMask सेट करना। Ethereum समुदाय वॉलेट्स के बारे में एक [व्यापक संसाधन](https://ethereum.org/en/wallets/) प्रदान करता है। -There are two sections in this guide: +इस गाइड में दो अनुभाग हैं: - ग्राफ़ नेटवर्क में टोकन सौंपने का जोखिम - प्रतिनिधि के रूप में अपेक्षित रिटर्न की गणना कैसे करें @@ -58,7 +58,7 @@ There are two sections in this guide: प्रोटोकॉल में प्रतिनिधि होने के मुख्य जोखिमों की सूची नीचे दी गई है। -### The Delegation Tax +### delegation कर प्रतिनिधियों को खराब व्यवहार के लिए कम नहीं किया जा सकता है, लेकिन खराब निर्णय लेने को हतोत्साहित करने के लिए प्रतिनिधियों पर एक कर है जो नेटवर्क की अखंडता को नुकसान पहुंचा सकता है। @@ -68,21 +68,21 @@ There are two sections in this guide: - सुरक्षित रहने के लिए, आपको Indexer को डेलीगेट करते समय अपने संभावित रिटर्न की गणना करनी चाहिए। उदाहरण के लिए, आप यह गणना कर सकते हैं कि आपके डेलीगेशन पर 0.5% कर वापस कमाने में कितने दिन लगेंगे। -### The Undelegation Period +### अनडेलीगेशन अवधि -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +जब एक Delegator अनडेलीगेट करने का चयन करता है, तो उनके टोकन 28-दिन की अनडेलीगेशन अवधि के अधीन होते हैं। -This means they cannot transfer their tokens or earn any rewards for 28 days.
+इसका मतलब है कि वे 28 दिनों तक अपने टोकन ट्रांसफर नहीं कर सकते या कोई इनाम अर्जित नहीं कर सकते। -After the undelegation period, GRT will return to your crypto wallet. +अनडेलीगेशन अवधि के बाद, GRT आपके क्रिप्टो वॉलेट में वापस आ जाएगा। ### यह क्यों महत्वपूर्ण है? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +यदि आप किसी ऐसे Indexer को चुनते हैं जो भरोसेमंद नहीं है या अच्छा काम नहीं कर रहा है, तो आप उसे अनडेलीगेट करना चाहेंगे। इसका अर्थ है कि आप इनाम अर्जित करने के अवसरों को खो देंगे। -As a result, it’s recommended that you choose an Indexer wisely. +परिणामस्वरूप, यह अनुशंसा की जाती है कि आप एक Indexer को समझदारी से चुनें। -![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) +![delegation अनबॉन्डिंग। delegation UI में 0.5% शुल्क को नोट करें, साथ ही 28 दिन की अनबॉन्डिंग अवधि।](/img/Delegation-Unbonding.png) #### डेलीगेशन पैरामीटर @@ -92,29 +92,29@@ As a result, it’s recommended that you choose an Indexer wisely. - यदि किसी Indexer का पुरस्कार कट 100% पर सेट है, तो एक Delegator के रूप में, आपको 0 इंडेक्सिंग पुरस्कार मिलेंगे। - यदि इसे 80% पर सेट किया गया है, तो एक Delegator के रूप में, आप 20% प्राप्त करेंगे। -![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) +![Indexing Reward Cut.
शीर्ष Indexer Delegators को 90% इनाम दे रहा है। मध्य वाला Delegators को 20% दे रहा है। निचला वाला Delegators को ~83% दे रहा है।](/img/Indexing-Reward-Cut.png) - **पूछताछ शुल्क कटौती** - यह बिल्कुल Indexing Reward Cut की तरह है, लेकिन यह उन पूछताछ शुल्क पर लागू होता है जो Indexer एकत्र करता है। -It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +यह अत्यधिक अनुशंसा की जाती है कि आप [The Graph Discord](https://discord.gg/graphprotocol) का अन्वेषण करें ताकि यह निर्धारित किया जा सके कि किन Indexers की सामाजिक और तकनीकी प्रतिष्ठा सर्वश्रेष्ठ है। -Many Indexers are active in Discord and will be happy to answer your questions. +कई Indexers Discord में सक्रिय हैं और आपके प्रश्नों का उत्तर देने में खुश होंगे। ## यहाँ पर 'Delegators' की अपेक्षित लाभ की गणना की जा रही है। -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> अपने delegation पर ROI की गणना [यहां](https://thegraph.com/explorer/delegate?chain=arbitrum-one) करें। -A Delegator must consider a variety of factors to determine a return: +एक Delegator को प्रतिफल निर्धारित करने के लिए विभिन्न कारकों पर विचार करना चाहिए: - -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +किसी Indexer की उसे उपलब्ध delegated GRT का उपयोग करने की क्षमता उसके पुरस्कारों को प्रभावित करती है। -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +यदि कोई Indexer अपने निपटान में उपलब्ध सभी GRT को आवंटित नहीं करता है, तो वे स्वयं और उनके Delegators दोनों के लिए संभावित आय को अधिकतम करने का अवसर खो सकते हैं। -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window.
However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Indexers किसी आवंटन को बंद कर सकते हैं और 1 से 28 दिन की विंडो के भीतर किसी भी समय इनाम एकत्र कर सकते हैं। हालाँकि, यदि इनाम तुरंत एकत्र नहीं किए जाते हैं, तो कुल इनाम कम दिखाई दे सकते हैं, भले ही इनाम का कुछ प्रतिशत अप्राप्त बना रहे। ### प्रश्न शुल्क में कटौती और अनुक्रमण शुल्क में कटौती को ध्यान में रखते हुए -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +आपको एक Indexer चुनना चाहिए जो अपने Query Fee और Indexing Fee Cuts को निर्धारित करने में पारदर्शी हो। सूत्र है: @@ -140,4 +140,5 @@ Delegators को प्रतिनिधि पूल में उनके MetaMask के माध्यम से Indexers को प्रतिनिधित्व देने के प्रयास कभी-कभी विफल हो सकते हैं और इसके परिणामस्वरूप "Pending" या "Queued" लेनदेन के प्रयासों के लिए लंबे समय तक रुकावट हो सकती है। -इस बग का एक सरल समाधान ब्राउज़र को पुनः प्रारंभ करना है (उदाहरण के लिए, पता बार में "abort " का उपयोग करना), जो सभी पिछले प्रयासों को रद्द कर देगा बिना गैस को वॉलेट से घटाए। कई उपयोगकर्ताओं ने जिन्होंने इस समस्या का सामना किया है, उन्होंने अपने ब्राउज़र को पुनः प्रारंभ करने और 'delegation' करने का प्रयास करने के बाद सफल लेनदेन की रिपोर्ट की है। +इस बग का एक सरल समाधान ब्राउज़र को पुनः प्रारंभ करना है (उदाहरण के लिए, पता बार में "abort " का उपयोग करना), जो सभी पिछले प्रयासों को रद्द कर देगा बिना गैस को वॉलेट से घटाए। कई उपयोगकर्ताओं ने जिन्होंने इस समस्या का सामना किया है, उन्होंने अपने ब्राउज़र को पुनः प्रारंभ करने और 'delegation' करने का प्रयास करने के बाद सफल लेनदेन की रिपोर्ट की है। diff --git a/website/src/pages/hi/resources/roles/delegating/undelegating.mdx b/website/src/pages/hi/resources/roles/delegating/undelegating.mdx index 06b53896a7ea..aa37e2d4bd1c 100644 --- a/website/src/pages/hi/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/hi/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens
through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### चरण-दर-चरण 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. 
See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/hi/resources/subgraph-studio-faq.mdx b/website/src/pages/hi/resources/subgraph-studio-faq.mdx index 9901cc26d73f..d63ccadb4145 100644 --- a/website/src/pages/hi/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/hi/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: सबग्राफ स्टूडियो अक्सर पूछ ## 1. सबग्राफ स्टूडियो क्या है? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) एक **dapp** है, जो Subgraphs और **API keys** बनाने, प्रबंधित करने और प्रकाशित करने के लिए उपयोग किया जाता है। ## 2. मैं एक एपीआई कुंजी कैसे बना सकता हूँ? @@ -12,20 +12,21 @@ To create an API, navigate to Subgraph Studio and connect your wallet. You will ## 3. क्या मैं कई एपीआई कुंजियां बना सकता हूं? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +हाँ! आप विभिन्न परियोजनाओं में उपयोग करने के लिए कई एपीआई keys बना सकते हैं। [यहां](https://thegraph.com/studio/apikeys/) लिंक देखें। ## 4. मैं एपीआई कुंजी के लिए डोमेन को कैसे प्रतिबंधित करूं? एपीआई key बनाने के बाद, सुरक्षा अनुभाग में, आप उन डोमेन को परिभाषित कर सकते हैं जो किसी विशिष्ट एपीआई key को क्वेरी कर सकते हैं। -## 5. क्या मैं अपना सबग्राफ किसी अन्य स्वामी को स्थानांतरित कर सकता हूं? +## 5. क्या मैं अपना Subgraph किसी अन्य मालिक को ट्रांसफर कर सकता हूँ? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+हाँ, जो Subgraphs **Arbitrum One** पर प्रकाशित किए गए हैं, उन्हें किसी नए **wallet** या **Multisig** में ट्रांसफर किया जा सकता है। इसके लिए, Subgraph की **details page** पर 'Publish' बटन के पास तीन बिंदुओं (•••) पर क्लिक करें और **'Transfer ownership'** विकल्प चुनें। -ध्यान दें कि एक बार स्थानांतरित हो जाने के बाद आप स्टूडियो में सबग्राफ को देख या संपादित नहीं कर पाएंगे। +ध्यान दें कि ट्रांसफर करने के बाद, आप Studio में उस Subgraph को देख या संपादित नहीं कर पाएंगे। -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. अगर मैं जिस Subgraph को उपयोग करना चाहता हूँ, उसका **developer** नहीं हूँ, तो मैं उसके **query URLs** कैसे खोज सकता हूँ? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
+आप **Graph Explorer** के **Subgraph Details** सेक्शन में प्रत्येक Subgraph का **query URL** देख सकते हैं। - **"Query"** बटन पर क्लिक करने पर, एक पैन खुल जाएगा, जहाँ आपको इच्छित Subgraph का **query URL** मिलेगा। +- इसके बाद, **``** प्लेसहोल्डर को अपने **Subgraph Studio** के API key से बदलकर उपयोग कर सकते हैं। -याद रखें कि आप एक एपीआई key बना सकते हैं और नेटवर्क पर प्रकाशित किसी सबग्राफ को क्वेरी कर सकते हैं, भले ही आप स्वयं एक सबग्राफ बनाते हों। नई एपीआई key के माध्यम से ये प्रश्न, नेटवर्क पर किसी अन्य के रूप में भुगतान किए गए प्रश्न हैं। +याद रखें कि आप एक **API key** बना सकते हैं और नेटवर्क पर प्रकाशित किसी भी Subgraph को query कर सकते हैं, चाहे आपने वह Subgraph खुद बनाया हो या नहीं। नई **API key** के माध्यम से किए गए ये queries, नेटवर्क पर अन्य queries की तरह **paid queries** होंगे। diff --git a/website/src/pages/hi/resources/tokenomics.mdx b/website/src/pages/hi/resources/tokenomics.mdx index e3437e3a0fff..00ea0f5b3cef 100644 --- a/website/src/pages/hi/resources/tokenomics.mdx +++ b/website/src/pages/hi/resources/tokenomics.mdx @@ -1,12 +1,12 @@ --- -title: ग्राफ नेटवर्क के टोकनोमिक्स -sidebarTitle: Tokenomics +title: Tokenomics of The Graph Network +sidebarTitle: टोकनोमिक्स description: The Graph Network को शक्तिशाली टोकनोमिक्स द्वारा प्रोत्साहित किया जाता है। यहां बताया गया है कि GRT, The Graph का मूल कार्य उपयोगिता टोकन, कैसे काम करता है। --- -## अवलोकन +## Overview -The Graph एक विकेन्द्रीकृत प्रोटोकॉल है जो ब्लॉकचेन डेटा तक आसान पहुंच सक्षम करता है। यह ब्लॉकचेन डेटा को उसी तरह से अनुक्रमित करता है जैसे Google वेब को अनुक्रमित करता है। यदि आपने किसी dapp का उपयोग किया है जो किसी Subgraph से डेटा पुनर्प्राप्त करता है, तो संभवतः आपने The Graph के साथ इंटरैक्ट किया है। आज, वेब3 इकोसिस्टम में हजारों [popular dapps](https://thegraph.com/explorer) The Graph का उपयोग कर रहे हैं। +The Graph एक **decentralized protocol** है, जो **blockchain data** तक आसान पहुँच प्रदान करता है। यह **blockchain data** को उसी तरह **index** करता है, जैसे **Google** वेब को **index** करता 
है। अगर आपने किसी ऐसे **dapp** का उपयोग किया है जो किसी **Subgraph** से डेटा प्राप्त करता है, तो आपने संभवतः **The Graph** के साथ इंटरैक्ट किया है। आज, **Web3 ecosystem** में हजारों [लोकप्रिय dapps](https://thegraph.com/explorer) **The Graph** का उपयोग कर रहे हैं। ## विशिष्टताएँ @@ -14,90 +14,92 @@ The Graph का मॉडल एक B2B2C मॉडल के समान ह The Graph ब्लॉकचेन डेटा को अधिक सुलभ बनाने में महत्वपूर्ण भूमिका निभाता है और इसके आदान-प्रदान के लिए एक मार्केटप्लेस का समर्थन करता है। The Graph के पे-फॉर-व्हाट-यू-नीड मॉडल के बारे में अधिक जानने के लिए, इसके [free and growth plans](/subgraphs/billing/) देखें। -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- GRT टोकन पता मुख्य नेटवर्क पर: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- GRT टोकन पता Arbitrum One पर: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## नेटवर्क प्रतिभागियों की भूमिकाएँ -There are four primary network participants: +चार प्रमुख नेटवर्क प्रतिभागी हैं: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Delegators - Indexers को GRT सौंपें और नेटवर्क को सुरक्षित करें -2. Curators - Find the best subgraphs for Indexers +2. **Curators** - Indexers के लिए सबसे अच्छे **Subgraphs** खोजें। -3. Developers - Build & query subgraphs +3. **Developers** - **Subgraphs** बनाएं और उन्हें **query** करें। 4. इंडेक्सर्स - ब्लॉकचेन डेटा की रीढ़ -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. 
For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +मछुआरे और मध्यस्थ भी अन्य योगदानों के माध्यम से नेटवर्क की सफलता में महत्वपूर्ण भूमिका निभाते हैं, अन्य प्राथमिक प्रतिभागी भूमिकाओं के कार्यों का समर्थन करते हैं। नेटवर्क भूमिकाओं के बारे में अधिक जानकारी के लिए, [यह लेख पढ़ें](https://thegraph.com/blog/the-graph-grt-token-economics/)। -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Tokenomics आरेख](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Delegator(निष्क्रिय रूप से GRT कमाएं) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +**Indexers** को **Delegators** द्वारा **GRT** डेलिगेट किया जाता है, जिससे नेटवर्क पर Subgraphs में Indexer की **stake** बढ़ती है। इसके बदले में, **Delegators** को Indexer से मिलने वाले कुल **query fees** और **indexing rewards** का एक निश्चित प्रतिशत मिलता है। हर **Indexer** स्वतंत्र रूप से तय करता है कि वह **Delegators** को कितना रिवार्ड देगा, जिससे **Indexers** के बीच **Delegators** को आकर्षित करने की प्रतिस्पर्धा बनी रहती है। अधिकांश **Indexers** सालाना **9-12%** रिटर्न ऑफर करते हैं। -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +यदि कोई Delegator 15k GRT को किसी ऐसे Indexer को डेलिगेट करता है जो 10% की पेशकश कर रहा है, तो Delegator को वार्षिक रूप से ~1,500 GRT का इनाम प्राप्त होगा। -There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. 
Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +नेटवर्क पर किसी Delegator द्वारा GRT डेलीगेट करने पर 0.5% डेलीगेशन टैक्स जल जाता है। यदि कोई Delegator अपने डेलीगेट किए गए GRT को वापस लेने का निर्णय लेता है, तो उसे 28-एपॉक अनबॉन्डिंग अवधि की प्रतीक्षा करनी होगी। प्रत्येक एपॉक 6,646 ब्लॉक्स का होता है, जिसका अर्थ है कि 28 एपॉक लगभग 26 दिनों के बराबर होते हैं। -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +अगर आप इसे पढ़ रहे हैं, तो आप अभी Delegator बन सकते हैं बस [network participants page](https://thegraph.com/explorer/participants/indexers) पर जाएं और अपनी पसंद के किसी Indexer को GRT डेलीगेट करें। -## Curators (Earn GRT) +## Curators (GRT कमाएं) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +**Curators** उच्च-गुणवत्ता वाले **Subgraphs** की पहचान करते हैं और उन्हें **"curate"** करते हैं (अर्थात, उन पर **GRT signal** करते हैं) ताकि **curation shares** कमा सकें। ये **curation shares** उस **Subgraph** द्वारा उत्पन्न सभी भविष्य की **query fees** का एक निश्चित प्रतिशत सुनिश्चित करते हैं। हालाँकि कोई भी स्वतंत्र नेटवर्क प्रतिभागी **Curator** बन सकता है, आमतौर पर **Subgraph developers** अपने स्वयं के **Subgraphs** के पहले **Curators** होते हैं, क्योंकि वे सुनिश्चित करना चाहते हैं कि उनका **Subgraph indexed** हो। -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+**Subgraph developers** को सलाह दी जाती है कि वे अपने **Subgraph** को कम से कम **3,000 GRT** के साथ **curate** करें। हालांकि, यह संख्या **network activity** और **community participation** के अनुसार बदल सकती है। -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +**Curators** को किसी नए **Subgraph** को **curate** करते समय **1% curation tax** देना पड़ता है। यह **curation tax** **burn** हो जाता है, जिससे **GRT** की कुल आपूर्ति कम होती है। -## Developers +## डेवलपर्स -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +**Developers** **Subgraphs** बनाते हैं और उन्हें **query** करके **blockchain data** प्राप्त करते हैं। चूंकि **Subgraphs** **open source** होते हैं, **developers** मौजूदा **Subgraphs** को **query** करके अपने **dapps** में **blockchain data** लोड कर सकते हैं। **Developers** द्वारा किए गए **queries** के लिए **GRT** में भुगतान किया जाता है, जो नेटवर्क प्रतिभागियों के बीच वितरित किया जाता है। -### सबग्राफ बनाना +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +**Developers** **[Subgraph create](/developing/creating-a-subgraph/)** करके **blockchain** पर डेटा **index** कर सकते हैं। **Subgraphs** यह निर्देश देते हैं कि **Indexers** को कौन सा डेटा **consumers** को उपलब्ध कराना चाहिए। -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+जब **developers** अपना **Subgraph** बना और टेस्ट कर लेते हैं, तो वे इसे **The Graph** के **decentralized network** पर **[publish](/subgraphs/developing/publishing/publishing-a-subgraph/)** कर सकते हैं। -### किसी मौजूदा सबग्राफ को क्वेरी करना +### मौजूदा **Subgraph** को **query** करना -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +एक बार **Subgraph** **[published](/subgraphs/developing/publishing/publishing-a-subgraph/)** हो जाने के बाद, कोई भी **API key** बना सकता है, अपनी **billing balance** में **GRT** जोड़ सकता है और **Subgraph** को **query** कर सकता है। सबग्राफ़ को [GraphQL](/subgraphs/querying/introduction/) का उपयोग करके क्वेरी किया जाता है, और क्वेरी शुल्क को [Subgraph Studio](https://thegraph.com/studio/) में GRT के साथ भुगतान किया जाता है। क्वेरी शुल्क को नेटवर्क प्रतिभागियों में उनके प्रोटोकॉल में योगदान के आधार पर वितरित किया जाता है। -1% of the query fees paid to the network are burned. +नेटवर्क को दिए गए क्वेरी शुल्क का 1% नष्ट (burn) कर दिया जाता है। -## Indexers (Earn GRT) +## Indexers (GRT कमाएँ) -Indexers The Graph की रीढ़ हैं। वे स्वतंत्र हार्डवेयर और सॉफ़्टवेयर संचालित करते हैं जो The Graph के विकेन्द्रीकृत नेटवर्क को शक्ति प्रदान करता है। Indexers, सबग्राफ से निर्देशों के आधार पर उपभोक्ताओं को डेटा प्रदान करते हैं। +**Indexers** **The Graph** की रीढ़ हैं। वे **The Graph** के **decentralized network** को चलाने के लिए स्वतंत्र **hardware** और **software** ऑपरेट करते हैं। **Indexers**, **Subgraphs** से मिले निर्देशों के आधार पर **data consumers** को डेटा प्रदान करते हैं। Indexers दो तरीकों से GRT रिवार्ड्स कमा सकते हैं: -1.
**क्वेरी शुल्क:** डेवलपर्स या उपयोगकर्ताओं द्वारा Subgraph डेटा क्वेरी के लिए भुगतान किया गया GRT। क्वेरी शुल्क सीधे Indexers को एक्सपोनेंशियल रिबेट फ़ंक्शन के अनुसार वितरित किया जाता है (देखें GIP [यहाँ](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162))। +1. **Query fees**: **Developers** या **users** द्वारा **Subgraph data queries** के लिए भुगतान किए गए **GRT**। ये शुल्क सीधे **Indexers** को **exponential rebate function** के अनुसार वितरित किए जाते हैं। [यहां देखें](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)। -2. **Indexing रिवार्ड्स**: 3% की वार्षिक जारी राशि Indexers को उनके द्वारा indexed किए गए सबग्राफकी संख्या के आधार पर वितरित की जाती है। ये पुरस्कार Indexers को सबग्राफको index करने के लिए प्रेरित करते हैं, कभी-कभी query fees शुरू होने से पहले भी, ताकि वे Proofs of Indexing (POIs) को एकत्रित और प्रस्तुत कर सकें, यह सत्यापित करने के लिए कि उन्होंने डेटा को सटीक रूप से index किया है। +2. **Indexing rewards**: **3% वार्षिक जारी किए गए GRT** को **Indexers** के बीच वितरित किया जाता है, इस आधार पर कि वे कितने **Subgraphs** को **index** कर रहे हैं। ये **rewards** Indexers को **Subgraphs** को **index** करने के लिए प्रेरित करते हैं, कभी-कभी **query fees** शुरू होने से पहले ही, ताकि वे **Proofs of Indexing (POIs)** जमा कर सकें और सत्यापित कर सकें कि उन्होंने डेटा को सही तरीके से **index** किया है। -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +प्रत्येक **Subgraph** को कुल नेटवर्क **token issuance** का एक हिस्सा आवंटित किया जाता है, जो उस **Subgraph** के **curation signal** की मात्रा पर आधारित होता है। यह राशि फिर उस **Subgraph** पर **Indexers** के आवंटित **stake** के अनुसार उन्हें **reward** के रूप में दी जाती है। -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. 
Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +एक Indexing Node चलाने के लिए, Indexers को नेटवर्क के साथ 100,000 GRT या उससे अधिक की स्वयं-स्टेकिंग करनी होगी। Indexers को उनके द्वारा सर्व की जाने वाली क्वेरी की मात्रा के अनुपात में GRT स्वयं-स्टेक करने के लिए प्रोत्साहित किया जाता है। -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +**Indexers** अपने **Subgraph** पर **GRT allocation** बढ़ाने के लिए **Delegators** से **GRT delegation** स्वीकार कर सकते हैं, और वे अपने प्रारंभिक **self-stake** का अधिकतम **16 गुना** स्वीकार कर सकते हैं। यदि कोई **Indexer** "over-delegated" हो जाता है (अर्थात् उसका **delegated GRT** उसके प्रारंभिक **self-stake** के 16 गुना से अधिक हो जाता है), तो वह नेटवर्क में अपना **self-stake** बढ़ाने तक अतिरिक्त **GRT** का उपयोग नहीं कर पाएगा। -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +एक Indexer को मिलने वाले पुरस्कारों की मात्रा विभिन्न कारकों पर निर्भर कर सकती है, जैसे कि Indexer की स्वयं की हिस्सेदारी, स्वीकृत डेलिगेशन, सेवा की गुणवत्ता, और कई अन्य कारक। -## Token Supply: Burning & Issuance +## टोकन आपूर्ति: जलाना और जारी करना -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+**प्रारंभिक टोकन आपूर्ति** 10 बिलियन **GRT** है, और **Indexers** को **Subgraphs** पर **stake allocate** करने के लिए प्रति वर्ष **3%** नई **GRT issuance** का लक्ष्य रखा गया है। इसका मतलब है कि हर साल **Indexers** के योगदान के लिए नए टोकन जारी किए जाएंगे, जिससे कुल **GRT आपूर्ति** 3% बढ़ेगी। -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph में नए टोकन **issuance** को संतुलित करने के लिए कई **burning mechanisms** शामिल किए गए हैं। सालाना लगभग **1% GRT supply** विभिन्न नेटवर्क गतिविधियों के माध्यम से **burn** हो जाती है, और यह संख्या नेटवर्क की वृद्धि के साथ बढ़ रही है। इन **burning mechanisms** में शामिल हैं: - **0.5% Delegation Tax**: जब कोई **Delegator** किसी **Indexer** को **GRT** डेलीगेट करता है। +- **1% Curation Tax**: जब **Curators** किसी **Subgraph** पर **GRT signal** करते हैं। +- **1% Query Fees Burn**: जब **ब्लॉकचेन डेटा** के लिए **queries** की जाती हैं। -![Total burned GRT](/img/total-burned-grt.jpeg) +![कुल जले हुए GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability.
+इन नियमित रूप से होने वाली टोकन बर्निंग गतिविधियों के अलावा, GRT टोकन में एक slashing mechanism भी शामिल है, जो Indexer द्वारा किए गए दुर्भावनापूर्ण या गैर-जिम्मेदाराना व्यवहार को दंडित करने के लिए लागू किया जाता है। यदि किसी Indexer को slashed किया जाता है, तो उस epoch के लिए उसके indexing rewards का 50% burn कर दिया जाता है (जबकि बाकी आधा हिस्सा fisherman को जाता है), और उसकी self-stake का 2.5% slashed कर दिया जाता है, जिसमें से आधा हिस्सा burn कर दिया जाता है। यह सुनिश्चित करने में मदद करता है कि Indexer नेटवर्क के सर्वोत्तम हितों में कार्य करें और इसकी security और stability में योगदान दें। -## Improving the Protocol +## प्रोटोकॉल में सुधार करना -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network निरंतर विकसित हो रहा है और प्रोटोकॉल की आर्थिक संरचना में सुधार किए जा रहे हैं ताकि सभी नेटवर्क प्रतिभागियों को सर्वोत्तम अनुभव मिल सके। The Graph Council प्रोटोकॉल परिवर्तनों की निगरानी करता है और समुदाय के सदस्यों को भाग लेने के लिए प्रोत्साहित किया जाता है। प्रोटोकॉल सुधारों में शामिल होने के लिए [The Graph Forum](https://forum.thegraph.com/) पर जाएं। diff --git a/website/src/pages/hi/sps/introduction.mdx b/website/src/pages/hi/sps/introduction.mdx index 30d84b5cfb7f..56ee02d1d54a 100644 --- a/website/src/pages/hi/sps/introduction.mdx +++ b/website/src/pages/hi/sps/introduction.mdx @@ -3,28 +3,29 @@ title: सबस्ट्रीम-पावर्ड सबग्राफ क sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. 
+अपने सबग्राफ की कार्यक्षमता और स्केलेबिलिटी को बढ़ाएं [सबस्ट्रीम](/substreams/introduction/) का उपयोग करके, जो प्री-इंडेक्स्ड ब्लॉकचेन डेटा को स्ट्रीम करता है। -## अवलोकन +## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +सबस्ट्रीम पैकेज (`.spkg`) को डेटा स्रोत के रूप में उपयोग करें ताकि आपका सबग्राफ पहले से इंडेक्स किए गए ब्लॉकचेन डेटा की स्ट्रीम तक पहुंच प्राप्त कर सके। यह बड़े या जटिल ब्लॉकचेन नेटवर्क के साथ अधिक कुशल और स्केलेबल डेटा हैंडलिंग को सक्षम बनाता है। ### विशिष्टताएँ इस तकनीक को सक्षम करने के दो तरीके हैं: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **सबस्ट्रीम [triggers](/sps/triggers/) का उपयोग करना**: किसी भी सबस्ट्रीम मॉड्यूल से उपभोग करने के लिए, Protobuf मॉडल को एक सबग्राफ हैंडलर के माध्यम से आयात करें और अपनी पूरी लॉजिक को एक सबग्राफ में स्थानांतरित करें। इस विधि से Subgraph में सीधे सबग्राफ entities बनाई जाती हैं। -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **[Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out) का उपयोग करके**: अधिक लॉजिक को सबस्ट्रीम में लिखकर, आप सीधे मॉड्यूल के आउटपुट को [`graph-node`](/indexing/tooling/graph-node/) में कंज्यूम कर सकते हैं। graph-node में, आप सबस्ट्रीम डेटा का उपयोग करके अपनी सबग्राफ entities बना सकते हैं। -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +आप अपना लॉजिक सबग्राफ या सबस्ट्रीम में कहीं भी रख सकते हैं। हालाँकि, अपने डेटा की आवश्यकताओं के अनुसार निर्णय लें, क्योंकि सबस्ट्रीम एक समानांतर मॉडल का उपयोग करता है, और ट्रिगर `graph node` में रैखिक रूप से उपभोग किए जाते हैं। ### Additional Resources -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +इन लिंक पर जाएं ताकि आप कोड-जनरेशन टूलिंग का उपयोग करके अपना पहला एंड-टू-एंड सबस्ट्रीम प्रोजेक्ट तेजी से बना सकें: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/hi/sps/sps-faq.mdx b/website/src/pages/hi/sps/sps-faq.mdx index 53a8f393a5bc..1351b52c67a9 100644 --- a/website/src/pages/hi/sps/sps-faq.mdx +++ b/website/src/pages/hi/sps/sps-faq.mdx @@ -3,39 +3,39 @@ title: सबस्ट्रीम्स-पावर्ड सबग्रा sidebarTitle: FAQ --- -## What are Substreams? +## सबस्ट्रीम क्या होते हैं? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. 
+सबस्ट्रीम एक अत्यधिक शक्तिशाली प्रोसेसिंग इंजन है जो ब्लॉकचेन डेटा की समृद्ध स्ट्रीम्स को उपभोग करने में सक्षम है। यह आपको ब्लॉकचेन डेटा को परिष्कृत और आकार देने की अनुमति देता है ताकि एंड-यूजर applications द्वारा इसे तेजी और सहजता से पचाया जा सके। -Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. +यह एक ब्लॉकचेन-अज्ञेयवादी, समानांतरित, और स्ट्रीमिंग-प्रथम इंजन है, जो ब्लॉकचेन डेटा ट्रांसफॉर्मेशन लेयर के रूप में कार्य करता है। यह [Firehose](https://firehose.streamingfast.io/) द्वारा संचालित है और डेवलपर्स को Rust मॉड्यूल लिखने, कम्युनिटी मॉड्यूल्स पर निर्माण करने, बेहद उच्च-प्रदर्शन इंडेक्सिंग प्रदान करने, और अपना डेटा कहीं भी [sink](/substreams/developing/sinks/) करने में सक्षम बनाता है। -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +सबस्ट्रीम को [StreamingFast](https://www.streamingfast.io/) द्वारा विकसित किया गया है। सबस्ट्रीम के बारे में अधिक जानने के लिए [सबस्ट्रीम Documentation](/substreams/introduction/) पर जाएं। -## What are Substreams-powered subgraphs? +## सबस्ट्रीम-संचालित सबग्राफ क्या हैं? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. 
+[सबस्ट्रीम-powered सबग्राफ](/sps/introduction/) सबस्ट्रीम की शक्ति को सबग्राफ की queryability के साथ जोड़ते हैं। जब किसी सबस्ट्रीम-powered सबग्राफ को प्रकाशित किया जाता है, तो सबस्ट्रीम परिवर्तनों द्वारा निर्मित डेटा [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) उत्पन्न कर सकता है, जो सबग्राफ entities के साथ संगत होते हैं। -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +यदि आप पहले से ही सबग्राफ विकास से परिचित हैं, तो ध्यान दें कि सबस्ट्रीम-संचालित सबग्राफ को उसी तरह से क्वेरी किया जा सकता है जैसे कि इसे AssemblyScript ट्रांसफॉर्मेशन लेयर द्वारा उत्पन्न किया गया हो। यह सबग्राफ के सभी लाभ प्रदान करता है, जिसमें एक डायनेमिक और लचीला GraphQL API शामिल है। -## How are Substreams-powered subgraphs different from subgraphs? +## सबस्ट्रीम-powered सबग्राफ सामान्य सबग्राफ से कैसे भिन्न हैं? 
सबग्राफ डेटा सोर्सेस से बने होते हैं, जो ऑनचेन आयोजन को निर्धारित करते हैं और उन आयोजन को Assemblyscript में लिखे handler के माध्यम से कैसे ट्रांसफॉर्म करना चाहिए। ये आयोजन क्रमवार तरीके से प्रोसेस किए जाते हैं, जिस क्रम में ये आयोजन ऑनचेन होते हैं। -सबस्ट्रीम-सक्षम सबग्राफ के पास एक ही datasource होता है जो सबस्ट्रीम पैकेज को संदर्भित करता है, जिसे ग्राफ नोड द्वारा प्रोसेस किया जाता है। सबस्ट्रीम को पारंपरिक सबग्राफ की तुलना में अतिरिक्त सटीक ऑनचेन डेटा तक पहुंच होती है, और यह बड़े पैमाने पर समानांतर प्रोसेसिंग का लाभ भी ले सकते हैं, जिससे प्रोसेसिंग समय काफी तेज हो सकता है। +इसके विपरीत, सबस्ट्रीम-powered सबग्राफ के पास एक ही datasource होता है जो एक सबस्ट्रीम package को संदर्भित करता है, जिसे ग्राफ नोड द्वारा प्रोसेस किया जाता है। सबस्ट्रीम को पारंपरिक सबग्राफ की तुलना में अतिरिक्त विस्तृत ऑनचेन डेटा तक पहुंच प्राप्त होती है, और यह बड़े पैमाने पर समानांतर प्रोसेसिंग से भी लाभ उठा सकते हैं, जिससे प्रोसेसिंग समय काफी तेज़ हो सकता है। -## What are the benefits of using Substreams-powered subgraphs? +## सबस्ट्रीम-powered सबग्राफ के उपयोग के लाभ क्या हैं? 
+सबस्ट्रीम-powered सबग्राफ सभी लाभों को एक साथ लाते हैं जो सबस्ट्रीम और सबग्राफ प्रदान करते हैं। वे अधिक संयोजनशीलता और उच्च-प्रदर्शन इंडेक्सिंग को The Graph में लाते हैं। वे नए डेटा उपयोग के मामलों को भी सक्षम बनाते हैं; उदाहरण के लिए, एक बार जब आपने अपना सबस्ट्रीम-powered सबग्राफ बना लिया, तो आप अपने [सबस्ट्रीम modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) को पुन: उपयोग कर सकते हैं ताकि विभिन्न [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) जैसे कि PostgreSQL, MongoDB, और Kafka में आउटपुट किया जा सके। -## What are the benefits of Substreams? +## Substreams के क्या benefits हैं? -There are many benefits to using Substreams, including: +Substreams का उपयोग करने के कई benefits हैं, जिनमें: -- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data. +- Composable: आप Substreams modules को LEGO blocks की तरह stack कर सकते हैं, और community modules पर निर्माण करके public data को और अधिक refine कर सकते हैं। -- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). +- High-performance indexing: बड़े पैमाने पर parallel operations के विशाल clusters के माध्यम से कई गुना तेज़ indexing (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: अपना डेटा कहीं भी सिंक करें: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -49,48 +49,48 @@ Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a b Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose. -## What are the benefits of the Firehose? +## Firehose के क्या benefits हैं? 
-There are many benefits to using Firehose, including: +Firehose का उपयोग करने के कई benefits हैं, जिनमें: -- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first. +- सबसे कम latency और कोई polling नहीं: streaming-first fashion में, Firehose nodes को पहले block data को push करने की दौड़ के लिए designed किया गया है। -- Prevents downtimes: Designed from the ground up for High Availability. +- Prevents downtimes: उच्च उपलब्धता के लिए मौलिक रूप से design किया गया है। -- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition. +- Never miss a beat: Firehose stream cursor को forks को handle करने और किसी भी स्थिति में जहां आप छोड़े थे वहां से जारी रखने के लिए design किया गया है। -- Richest data model:  Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more. +- Richest data model: Best data model जिसमें balance changes, full call tree, आंतरिक लेनदेन, logs, storage changes, gas costs और बहुत कुछ शामिल है। -- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. +- Leverages flat files: blockchain data को flat files में निकाला जाता है, जो सबसे सस्ते और सबसे अधिक अनुकूल गणना संसाधन होता है। -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## डेवलपर्स सबस्ट्रीम-powered सबग्राफ और सबस्ट्रीम के बारे में अधिक जानकारी कहाँ प्राप्त कर सकते हैं? [सबस्ट्रीम documentation](/substreams/introduction/) आपको सबस्ट्रीम modules बनाने का तरीका सिखाएगी। -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. 
+The [सबस्ट्रीम-powered सबग्राफ documentation](/sps/introduction/) आपको यह दिखाएगी कि उन्हें The Graph पर परिनियोजन के लिए कैसे पैकेज किया जाए। [नवीनतम Substreams Codegen टूल](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) आपको बिना किसी कोड के एक Substreams प्रोजेक्ट शुरू करने की अनुमति देगा। -## What is the role of Rust modules in Substreams? +## Substreams में Rust modules की क्या भूमिका है? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +सबग्राफ में Rust मॉड्यूल्स AssemblyScript मैपर्स के समकक्ष होते हैं। इन्हें समान तरीके से WASM में संकलित किया जाता है, लेकिन प्रोग्रामिंग मॉडल समानांतर निष्पादन की अनुमति देता है। ये उस प्रकार के रूपांतरण और समुच्चयन को परिभाषित करते हैं, जिन्हें आप कच्चे ब्लॉकचेन डेटा पर लागू करना चाहते हैं। See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. -## What makes Substreams composable? +## Substreams को composable क्या बनाता है? When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. 
+ऐसे मान लीजिए, एलिस एक DEX प्राइस मॉड्यूल बना सकती है, बॉब इसका उपयोग करके अपने इच्छित कुछ टोकनों के लिए एक वॉल्यूम एग्रीगेटर बना सकता है, और लिसा चार अलग-अलग DEX प्राइस मॉड्यूल को जोड़कर एक प्राइस ओरैकल बना सकती है। एक ही सबस्ट्रीम अनुरोध इन सभी व्यक्तिगत मॉड्यूल्स को एक साथ पैकेज करेगा, उन्हें आपस में लिंक करेगा, और एक अधिक परिष्कृत डेटा स्ट्रीम प्रदान करेगा। उस स्ट्रीम का उपयोग फिर एक सबग्राफ को पॉप्युलेट करने के लिए किया जा सकता है और उपभोक्ताओं द्वारा क्वेरी किया जा सकता है। -## How can you build and deploy a Substreams-powered Subgraph? +## आप कैसे एक Substreams-powered Subgraph बना सकते हैं और deploy कर सकते हैं? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +सबस्ट्रीम-समर्थित सबग्राफ को [परिभाषित](/sps/introduction/) करने के बाद, आप इसे Graph CLI का उपयोग करके [सबग्राफ Studio](https://thegraph.com/studio/) में डिप्लॉय कर सकते हैं। -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## आप सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ के उदाहरण कहाँ पा सकते हैं? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +आप [इस Github रिपॉज़िटरी](https://github.com/pinax-network/awesome-substreams) पर जाकर सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ के उदाहरण देख सकते हैं। -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## सबस्ट्रीम और सबस्ट्रीम-powered सबग्राफ का The Graph Network के लिए क्या अर्थ है? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. 
diff --git a/website/src/pages/hi/sps/triggers.mdx b/website/src/pages/hi/sps/triggers.mdx index 258b6e532745..196694448b05 100644 --- a/website/src/pages/hi/sps/triggers.mdx +++ b/website/src/pages/hi/sps/triggers.mdx @@ -2,17 +2,17 @@ title: सबस्ट्रीम्स ट्रिगर्स --- -Use Custom Triggers and enable the full use GraphQL. +कस्टम ट्रिगर्स का उपयोग करें और पूर्ण रूप से GraphQL को सक्षम करें। -## अवलोकन +## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +कस्टम ट्रिगर्स आपको डेटा सीधे आपके सबग्राफ मैपिंग फ़ाइल और entities में भेजने की अनुमति देते हैं, जो तालिकाओं और फ़ील्ड्स के समान होते हैं। इससे आप पूरी तरह से GraphQL लेयर का उपयोग कर सकते हैं। -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. 
+आपके सबस्ट्रीम मॉड्यूल द्वारा उत्पन्न Protobuf परिभाषाओं को आयात करके, आप इस डेटा को अपने सबग्राफ के handler में प्राप्त और प्रोसेस कर सकते हैं। यह सबग्राफ ढांचे के भीतर कुशल और सुव्यवस्थित डेटा प्रबंधन सुनिश्चित करता है। -### Defining `handleTransactions` +### `handleTransactions` को परिभाषित करना -निम्नलिखित कोड यह दर्शाता है कि कैसे एक handleTransactions फ़ंक्शन को एक subgraph हैंडलर में परिभाषित किया जा सकता है। यह फ़ंक्शन कच्चे Substreams बाइट्स को एक पैरामीटर के रूप में प्राप्त करता है और उन्हें एक Transactions ऑब्जेक्ट में डिकोड करता है। प्रत्येक लेनदेन के लिए, एक नई subgraph एंटिटी बनाई जाती है। +यह कोड एक सबग्राफ handler में `handleTransactions` फ़ंक्शन को परिभाषित करने का तरीका दर्शाता है। यह फ़ंक्शन कच्चे सबस्ट्रीम बाइट्स को पैरामीटर के रूप में प्राप्त करता है और उन्हें `Transactions` ऑब्जेक्ट में डिकोड करता है। प्रत्येक लेन-देन के लिए, एक नया सबग्राफ entity बनाया जाता है। ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +यहाँ आप `mappings.ts` फ़ाइल में जो देख रहे हैं: 1. Substreams डेटा को जनरेट किए गए Transactions ऑब्जेक्ट में डिकोड किया जाता है, यह ऑब्जेक्ट किसी अन्य AssemblyScript ऑब्जेक्ट की तरह उपयोग किया जाता है। 2. लेनदेन पर लूप करना -3. प्रत्येक लेनदेन के लिए एक नया subgraph entity बनाएं +3. प्रत्येक लेन-देन के लिए एक नया सबग्राफ entity बनाया जाता है -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +एक ट्रिगर-आधारित सबग्राफ का विस्तृत उदाहरण देखने के लिए, [इस ट्यूटोरियल को देखें](/sps/tutorial/)। ### Additional Resources -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). 
+अपने पहले प्रोजेक्ट को डेवलपमेंट कंटेनर में स्कैफोल्ड करने के लिए, इनमें से किसी एक [How-To Guide](/substreams/developing/dev-container/) को देखें। diff --git a/website/src/pages/hi/sps/tutorial.mdx b/website/src/pages/hi/sps/tutorial.mdx index 86326b903aad..18d38dc06938 100644 --- a/website/src/pages/hi/sps/tutorial.mdx +++ b/website/src/pages/hi/sps/tutorial.mdx @@ -1,15 +1,15 @@ --- -title: 'ट्यूटोरियल: Solana पर एक Substreams-शक्ति वाले Subgraph सेट करें' +title: "ट्यूटोरियल: Solana पर एक Substreams-शक्ति वाले Subgraph सेट करें" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## शुरू करिये For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### आवश्यक शर्तें 'शुरू करने से पहले, सुनिश्चित करें कि:' @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### चरण 2: Subgraph Manifest उत्पन्न करें -एक बार जब प्रोजेक्ट इनिशियलाइज़ हो जाए, Dev Container में निम्नलिखित कमांड चलाकर subgraph मैनिफेस्ट जेनरेट करें: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash 'सबस्ट्रीम्स कोडजेन' subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### चरण 3: schema.graphql में संस्थाएँ परिभाषित करें -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. 
Here is an example: @@ -101,7 +101,7 @@ type MyTransfer @entity { With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ AssemblyScript में Protobuf ऑब्जेक्ट बनाने क npm चलाएँ protogen ``` -यह कमांड Protobuf परिभाषाओं को AssemblyScript में परिवर्तित करता है, जिससे आप उन्हें हैंडलर में उपयोग कर सकते हैं। +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### निष्कर्ष -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. 
### Video Tutorial diff --git a/website/src/pages/hi/subgraphs/_meta-titles.json b/website/src/pages/hi/subgraphs/_meta-titles.json index 0556abfc236c..87cd473806ba 100644 --- a/website/src/pages/hi/subgraphs/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "cookbook": "Cookbook", + "querying": "queries", + "developing": "विकसित करना", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx index cd5921ae8354..45cad90fd76b 100644 --- a/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: सबग्राफ सर्वोत्तम प्रथा 4 - eth_calls से बचकर अनुक्रमण गति में सुधार करें -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: eth_calls से बचाव --- ## TLDR -eth_calls वे कॉल हैं जो एक Subgraph से Ethereum नोड पर किए जा सकते हैं। ये कॉल डेटा लौटाने में महत्वपूर्ण समय लेते हैं, जिससे indexing धीमी हो जाती है। यदि संभव हो, तो स्मार्ट कॉन्ट्रैक्ट्स को इस तरह से डिजाइन करें कि वे सभी आवश्यक डेटा उत्पन्न करें ताकि आपको eth_calls का उपयोग न करना पड़े। +eth_calls वे कॉल हैं जो एक Subgraph से Ethereum नोड पर किए जा सकते हैं। ये कॉल डेटा लौटाने में महत्वपूर्ण समय लेते हैं, जिससे indexing धीमी हो जाती है। यदि संभव हो, तो smart contract को इस तरह से डिजाइन करें कि वे सभी आवश्यक डेटा उत्पन्न करें ताकि आपको eth_calls का उपयोग न करना पड़े। ## Eth_calls से बचना एक सर्वोत्तम अभ्यास क्यों है -Subgraphs को स्मार्ट कॉन्ट्रैक्ट्स से निकले हुए इवेंट डेटा को इंडेक्स करने के लिए ऑप्टिमाइज़ किया गया है। एक subgraph ‘eth_call’ से आने वाले डेटा को भी इंडेक्स कर सकता है, लेकिन इससे subgraph इंडेक्सिंग काफी धीमी हो सकती है क्योंकि ‘eth_calls’ के लिए स्मार्ट कॉन्ट्रैक्ट्स को एक्सटर्नल कॉल्स करने की आवश्यकता होती है। इन 
कॉल्स की प्रतिक्रिया subgraph पर निर्भर नहीं करती, बल्कि उस Ethereum नोड की कनेक्टिविटी और प्रतिक्रिया पर निर्भर करती है, जिसे क्वेरी किया जा रहा है। हमारे subgraphs में ‘eth_calls’ को कम करके या पूरी तरह से समाप्त करके, हम अपने इंडेक्सिंग स्पीड में उल्लेखनीय सुधार कर सकते हैं। +सबग्राफ स्मार्ट contract द्वारा उत्सर्जित इवेंट डेटा को इंडेक्स करने के लिए ऑप्टिमाइज़ किए गए हैं। एक सबग्राफ `eth_call` से आने वाले डेटा को भी इंडेक्स कर सकता है, लेकिन यह सबग्राफ indexing को काफी धीमा कर सकता है क्योंकि eth_calls के लिए स्मार्ट कॉन्ट्रैक्ट्स को एक्सटर्नल कॉल करने की आवश्यकता होती है। इन कॉल्स की प्रतिक्रियाशीलता सबग्राफ पर नहीं, बल्कि उस Ethereum नोड की कनेक्टिविटी और प्रतिक्रियाशीलता पर निर्भर करती है, जिससे क्वेरी की जा रही है। यदि हम अपने सबग्राफ में `eth_calls` को कम या समाप्त कर देते हैं, तो हम अपनी indexing स्पीड को काफी हद तक सुधार सकते हैं। ### एक eth_call कैसा दिखता है? -eth_calls अक्सर तब आवश्यक होते हैं जब subgraph के लिए आवश्यक डेटा इमिटेड इवेंट्स के माध्यम से उपलब्ध नहीं होता है। उदाहरण के लिए, एक ऐसा परिदृश्य मानें जहां एक subgraph को यह पहचानने की आवश्यकता है कि क्या ERC20 टोकन एक विशेष पूल का हिस्सा हैं, लेकिन कॉन्ट्रैक्ट केवल एक बुनियादी Transfer इवेंट इमिट करता है और वह इवेंट इमिट नहीं करता जिसमें हमें आवश्यक डेटा हो: +`eth_calls` अक्सर आवश्यक होते हैं जब किसी सबग्राफ के लिए आवश्यक डेटा उत्सर्जित घटनाओं के माध्यम से उपलब्ध नहीं होता है। उदाहरण के लिए, एक स्थिति पर विचार करें जहां एक सबग्राफ को यह पहचानने की आवश्यकता होती है कि कोई ERC20 टोकन किसी विशेष पूल का हिस्सा है या नहीं, लेकिन अनुबंध केवल एक बुनियादी `Transfer` आयोजन उत्सर्जित करता है और वह घटना उत्सर्जित नहीं करता है जिसमें हमारे लिए आवश्यक डेटा हो। ```yaml इवेंट Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -यह कार्यात्मक है, हालांकि यह आदर्श नहीं है क्योंकि यह हमारे subgraph की indexing को धीमा कर देता है। +यह कार्यशील है, हालांकि यह हमारे सबग्राफ की indexing को धीमा कर देता है। ## Eth_calls को 
कैसे समाप्त करें @@ -54,7 +54,7 @@ export function handleTransfer(event: Transfer): void { event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -इस अपडेट के साथ, subgraph आवश्यक डेटा को बिना बाहरी कॉल के सीधे अनुक्रमित कर सकता है: +इस अपडेट के साथ, सबग्राफ बाहरी कॉल किए बिना सीधे आवश्यक डेटा को इंडेक्स कर सकता है। ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ Handler स्वयं इस eth_call के परिणाम तक ठीक उसी तरह पहुंचता है जैसे पिछले अनुभाग में, अनुबंध से बाइंडिंग करके और कॉल करके। graph-node घोषित eth_calls के परिणामों को मेमोरी में कैश करता है और हैंडलर से कॉल इस मेमोरी कैश से परिणाम प्राप्त करेगा, बजाय इसके कि एक वास्तविक RPC कॉल की जाए। -नोट: घोषित eth_calls केवल उन subgraphs में किए जा सकते हैं जिनका specVersion >= 1.2.0 है। +घोषित eth_calls केवल उन सबग्राफ में किए जा सकते हैं जिनका specVersion >= 1.2.0 है। ## निष्कर्ष -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +आप अपने सबग्राफ में `eth_calls` को कम या समाप्त करके Indexing प्रदर्शन को काफी हद तक सुधार सकते हैं। -## Subgraph Best Practices 1-6 +## सबग्राफ सर्वोत्तम प्रथाएँ 1-6 -1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) +1. [क्वेरी गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/) -2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/) -3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx index 6711d2943209..2b39dfea8e4c 100644 --- a/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph सर्वोत्तम प्रथा 2 - @derivedFrom का उपयोग करके अनुक्रमण और क्वेरी की प्रतिक्रियाशीलता में सुधार करें। -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -आपके स्कीमा में ऐरे हजारों प्रविष्टियों से बढ़ने पर एक सबग्राफ के प्रदर्शन को वास्तव में धीमा कर सकते हैं। यदि संभव हो, तो @derivedFrom निर्देशिका का उपयोग करना चाहिए जब आप ऐरे का उपयोग कर रहे हों, क्योंकि यह बड़े ऐरे के निर्माण को रोकता है, हैंडलरों को सरल बनाता है और व्यक्तिगत संस्थाओं के आकार को कम करता है, जिससे अनुक्रमण गति और प्रश्न प्रदर्शन में महत्वपूर्ण सुधार होता है। +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## @derivedFrom निर्देशिका का उपयोग कैसे करें @@ -15,7 +15,7 @@ sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' टिप्पणियाँ : [Comment!]! @derivedFrom(field: “post”) ``` -@derivedFrom कुशल एक से कई संबंध बनाता है, जिससे एक इकाई को संबंधित इकाई में एक फ़ील्ड के आधार पर कई संबंधित इकाइयों के साथ गतिशील रूप से संबंध बनाने की अनुमति मिलती है। यह दृष्टिकोण रिश्ते के दोनों पक्षों को डुप्लिकेट डेटा संग्रहीत करने की आवश्यकता को समाप्त करता है, जिससे subgraph अधिक कुशल बन जाता है। +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### @derivedFrom के लिए उदाहरण उपयोग मामला @@ -60,30 +60,30 @@ type Comment @entity { बस @derivedFrom निर्देश जोड़ने से, यह स्कीमा केवल संबंध के “Comments” पक्ष पर “Comments” को संग्रहीत करेगा और संबंध के “Post” पक्ष पर नहीं। ऐरे व्यक्तिगत पंक्तियों में संग्रहीत होते हैं, जिससे उन्हें काफी विस्तार करने की अनुमति मिलती है। यदि उनका विकास अनियंत्रित है, तो इससे विशेष रूप से बड़े आकार हो सकते हैं। -यह न केवल हमारे subgraph को अधिक प्रभावी बनाएगा, बल्कि यह तीन विशेषताओं को भी अनलॉक करेगा: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. हम Post को क्वेरी कर सकते हैं और इसके सभी कमेंट्स देख सकते हैं। 2. हम एक रिवर्स लुकअप कर सकते हैं और किसी भी Comment को क्वेरी कर सकते हैं और देख सकते हैं कि यह किस पोस्ट से आया है। -3. 
We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings.

## निष्कर्ष

-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.

For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).

-## Subgraph Best Practices 1-6
+## सबग्राफ सर्वोत्तम प्रथाएँ 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/)

-3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)

-4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/)
+4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/)

-5. 
[Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx index cc3c759ebdea..3b7f938b62d7 100644 --- a/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. -### अवलोकन +### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **डेटा प्रिजर्वेशन** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. 
+ - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. 
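The manifest-related considerations above (careful graft block selection, using the Deployment ID, declaring the feature) come down to a small fragment of `subgraph.yaml`. A minimal sketch, reusing the placeholder hash and block values from the worked example in this guide:

```yaml
features:
  - grafting # grafting must be declared as a feature
graft:
  base: QmBaseDeploymentID # Deployment ID (not the Subgraph ID) of the base deployment
  block: 6000000 # last correctly processed block; indexing resumes from here
```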
## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. 
- **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. **Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. 
Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,31 +157,31 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## निष्कर्ष -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. 
After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.

## Additional Resources

- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.

-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.

-## Subgraph Best Practices 1-6
+## सबग्राफ सर्वोत्तम प्रथाएँ 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/)

-3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)

-4. 
[indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 5cccca23acb2..1c7ec1e2e313 100644 --- a/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: सबग्राफ सर्वश्रेष्ठ प्रथा 3 - अपरिवर्तनीय संस्थाओं और बाइट्स को आईडी के रूप में उपयोग करके अनुक्रमण और क्वेरी प्रदर्शन में सुधार करें। -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ type Transfer @entity(immutable: true) { ### IDs के रूप में Bytes का उपयोग न करने के कारण 1. यदि एंटिटी IDs मानव-पठनीय होने चाहिए, जैसे कि ऑटो-इंक्रीमेंटेड न्यूमेरिकल IDs या पठनीय स्ट्रिंग्स, तो IDs के लिए Bytes का उपयोग नहीं किया जाना चाहिए। -2. यदि किसी subgraph के डेटा को दूसरे डेटा मॉडल के साथ एकीकृत किया जा रहा है जो IDs के रूप में Bytes का उपयोग नहीं करता है, तो Bytes के रूप में IDs का उपयोग नहीं किया जाना चाहिए। +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. 
Indexing और क्वेरीइंग प्रदर्शन में सुधार की आवश्यकता नहीं है।

### Bytes के रूप में IDs के साथ जोड़ना

-बहुत से subgraphs में एक ID में दो प्रॉपर्टीज को जोड़ने के लिए स्ट्रिंग संयोजन का उपयोग करना एक सामान्य प्रथा है, जैसे कि event.transaction.hash.toHex() + "-" + event.logIndex.toString() का उपयोग करना। हालांकि, चूंकि यह एक स्ट्रिंग लौटाता है, यह subgraph इंडेक्सिंग और क्वेरी प्रदर्शन में महत्वपूर्ण रूप से बाधा डालता है।
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, because this returns a string, it significantly impedes Subgraph indexing and querying performance.

इसके बजाय, हमें event properties को जोड़ने के लिए concatI32() method का उपयोग करना चाहिए। यह रणनीति एक Bytes ID उत्पन्न करती है जो बहुत अधिक performant होती है।

@@ -91,7 +91,7 @@ Query:
 }
```

-प्रश्न प्रतिक्रिया: 
+प्रश्न प्रतिक्रिया:

```json
{
@@ -147,7 +147,7 @@ Query:
 }
```

-प्रश्न प्रतिक्रिया: 
+प्रश्न प्रतिक्रिया:

```json
{
@@ -172,20 +172,20 @@ Query:

## निष्कर्ष

-Immutable Entities और Bytes को IDs के रूप में उपयोग करने से subgraph की दक्षता में उल्लेखनीय सुधार हुआ है। विशेष रूप से, परीक्षणों ने क्वेरी प्रदर्शन में 28% तक की वृद्धि और indexing स्पीड में 48% तक की तेजी को उजागर किया है।
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.

इस ब्लॉग पोस्ट में, Edge & Node के सॉफ़्टवेयर इंजीनियर डेविड लुटरकोर्ट द्वारा Immutable Entities और Bytes को IDs के रूप में उपयोग करने के बारे में और अधिक पढ़ें: [दो सरल Subgraph प्रदर्शन सुधार।](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).

-## Subgraph Best Practices 1-6
+## सबग्राफ सर्वोत्तम प्रथाएँ 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. 
[सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/)

-3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)

-4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/)
+4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/)

-5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/)
+5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/)

-6. 
[त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/pruning.mdx b/website/src/pages/hi/subgraphs/best-practices/pruning.mdx index e566e35d240e..e9f23d71772c 100644 --- a/website/src/pages/hi/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: सबग्राफ बेस्ट प्रैक्टिस 1 - सबग्राफ प्रूनिंग के साथ क्वेरी की गति में सुधार करें -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -Pruning(/developing/creating-a-subgraph/#prune), subgraph के डेटाबेस से दिए गए ब्लॉक तक की archival entities को हटाता है, और unused entities को subgraph के डेटाबेस से हटाने से subgraph की query performance में सुधार होगा, अक्सर काफी हद तक। indexerHints का उपयोग करना subgraph को prune करने का एक आसान तरीका है। +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## IndexerHints के साथ subgraph को prune करने का तरीका @@ -13,14 +13,14 @@ Manifest में एक section को indexerHints के नाम से indexerHints में तीन prune विकल्प होते हैं: -- prune: auto: आवश्यक न्यूनतम इतिहास को बनाए रखता है जैसा कि Indexer द्वारा निर्धारित किया गया है, जो क्वेरी प्रदर्शन को अनुकूलित करता है। यह सामान्यतः अनुशंसित सेटिंग है और यह सभी subgraphs के लिए डिफ़ॉल्ट है जो graph-cli >= 0.66.0 द्वारा बनाए गए हैं। +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. 
- `prune: `: ऐतिहासिक ब्लॉकों को बनाए रखने की संख्या पर एक कस्टम सीमा निर्धारित करता है। -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. +- `prune: never`: ऐतिहासिक डेटा को कभी भी नहीं हटाया जाता; यह संपूर्ण इतिहास को बनाए रखता है और यदि `indexerHints` अनुभाग नहीं है तो यह डिफ़ॉल्ट होता है। यदि [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) आवश्यक हैं तो `prune: never` का चयन किया जाना चाहिए। -हम अपने 'subgraph' में indexerHints जोड़ सकते हैं हमारे subgraph.yaml को अपडेट करके: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -33,24 +33,24 @@ dataSources: ## महत्वपूर्ण विचार -- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. 
+- यदि [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) की आवश्यकता हो और साथ ही pruning भी करनी हो, तो Time Travel Query की कार्यक्षमता बनाए रखने के लिए pruning को सटीक रूप से करना आवश्यक है। इसी कारण, आमतौर पर Time Travel Queries के साथ `indexerHints: prune: auto` का उपयोग करने की अनुशंसा नहीं की जाती है। इसके बजाय, `indexerHints: prune: ` का उपयोग करें ताकि उस ब्लॉक ऊँचाई तक सटीक रूप से pruning हो सके, जो Time Travel Queries के लिए आवश्यक ऐतिहासिक डेटा को सुरक्षित रखे, या फिर `prune: never` का उपयोग करें ताकि सभी डेटा सुरक्षित रहे।

-- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months).
+- यह संभव नहीं है कि किसी ब्लॉक ऊंचाई पर [graft](/subgraphs/cookbook/grafting/) किया जाए जो कि हटा दिया गया हो। यदि grafting नियमित रूप से की जाती है और हटाने की आवश्यकता होती है, तो यह अनुशंसित है कि indexerHints: prune: का उपयोग करें, जो सटीक रूप से एक निर्धारित संख्या में ब्लॉक बनाए रखेगा (उदाहरण के लिए, छह महीनों के लिए पर्याप्त)।

## निष्कर्ष

-Pruning का उपयोग indexerHints से करना एक सर्वोत्तम प्रथा है subgraph विकास के लिए, जो महत्वपूर्ण क्वेरी प्रदर्शन सुधार प्रदान करता है।
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.

-## Subgraph Best Practices 1-6
+## सबग्राफ सर्वोत्तम प्रथाएँ 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/)

-3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) +5. [समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx b/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx index a0c4f65157ad..56d95cf94a5d 100644 --- a/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/hi/subgraphs/best-practices/timeseries.mdx @@ -1,13 +1,13 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries और Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. -## अवलोकन +## Overview Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data.

@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri

## How to Implement Timeseries and Aggregations

+### आवश्यक शर्तें
+
+You need `spec version 1.1.0` for this feature.
+
### Defining Timeseries Entities

A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:

@@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define

type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
-  price: BigDecimal!
+  amount: BigDecimal!
}
```

@@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is

type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
-  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+  sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```

-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.

### Querying Aggregated Data

@@ -141,7 +145,7 @@ Supported aggregation functions:

- sum
- count
-min
+min
- max
- first
- last

@@ -172,24 +176,24 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar

### निष्कर्ष

-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:

- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.

-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.

-## Subgraph Best Practices 1-6
+## सबग्राफ सर्वोत्तम प्रथाएँ 1-6

-1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+1. [सबग्राफ की गति में सुधार करें सबग्राफ प्रूनिंग के साथ](/subgraphs/best-practices/pruning/)

-2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+2. [indexing और क्वेरी प्रतिसादशीलता में सुधार करें @derivedFrom का उपयोग करके](/subgraphs/best-practices/derivedfrom/)

-3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+3. [अपरिवर्तनीय entities और Bytes को ID के रूप में उपयोग करके Indexing और क्वेरी प्रदर्शन में सुधार करें](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)

-4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/)
+4. [indexing गति में सुधार करें `eth_calls` से बचकर](/subgraphs/best-practices/avoid-eth-calls/)

-5. 
[समय श्रृंखला और समुच्चयन के साथ सरल और अनुकूलित करें](/subgraphs/best-practices/timeseries/) -6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) +6. [त्वरित हॉटफ़िक्स परिनियोजन के लिए ग्राफ्टिंग का उपयोग करें](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/hi/subgraphs/billing.mdx b/website/src/pages/hi/subgraphs/billing.mdx index db7598ed5faf..b8a53b9b14dc 100644 --- a/website/src/pages/hi/subgraphs/billing.mdx +++ b/website/src/pages/hi/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: बिलिंग ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). 
+ ## Query Payments with credit card @@ -31,11 +33,11 @@ Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph ### GRT on Arbitrum or Ethereum -The Graph का बिलिंग सिस्टम Arbitrum पर GRT को स्वीकार करता है, और उपयोगकर्ताओं को गैस के भुगतान के लिए Arbitrum पर ETH की आवश्यकता होगी। जबकि The Graph प्रोटोकॉल Ethereum Mainnet पर शुरू हुआ, सभी गतिविधियाँ, जिसमें बिलिंग कॉन्ट्रैक्ट्स भी शामिल हैं, अब Arbitrum One पर हैं। +The Graph का बिलिंग सिस्टम Arbitrum पर GRT को स्वीकार करता है, और उपयोगकर्ताओं को गैस के भुगतान के लिए Arbitrum पर ETH की आवश्यकता होगी। जबकि The Graph प्रोटोकॉल Ethereum Mainnet पर शुरू हुआ, सभी गतिविधियाँ, जिसमें बिलिंग कॉन्ट्रैक्ट्स भी शामिल हैं, अब Arbitrum One पर हैं। क्वेरियों के लिए भुगतान करने के लिए, आपको Arbitrum पर GRT की आवश्यकता है। इसे प्राप्त करने के लिए कुछ विभिन्न तरीके यहां दिए गए हैं: -- यदि आपके पास पहले से Ethereum पर GRT है, तो आप इसे Arbitrum पर ब्रिज कर सकते हैं। आप यह Subgraph Studio में प्रदान किए गए GRT ब्रिजिंग विकल्प के माध्यम से या निम्नलिखित में से किसी एक ब्रिज का उपयोग करके कर सकते हैं: +- यदि आपके पास पहले से Ethereum पर GRT है, तो आप इसे Arbitrum पर ब्रिज कर सकते हैं। आप यह Subgraph Studio में प्रदान किए गए GRT ब्रिजिंग विकल्प के माध्यम से या निम्नलिखित में से किसी एक ब्रिज का उपयोग करके कर सकते हैं: - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) diff --git a/website/src/pages/hi/subgraphs/cookbook/arweave.mdx b/website/src/pages/hi/subgraphs/cookbook/arweave.mdx index b51d9a5405bc..747f3933136e 100644 --- a/website/src/pages/hi/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: आरवीव पर सब-ग्राफ्र्स बनाना --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! इस गाइड में आप आरवीव ब्लॉकचेन पर सब ग्राफ्स बनाना और डेप्लॉय करना सीखेंगे! @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are आरवीवे पर सब ग्राफ बनाने के लिए हमे दो पैकेजेस की जरूरत है: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## सब ग्राफ के कॉम्पोनेन्ट -सब ग्राफ के तीन कॉम्पोनेन्ट होते हैं: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are यहाँ आप बताते हैं की आप कौन सा डाटा इंडेक्सिंग के बाद क्वेरी करना चाहते हैं| दरसअल यह एक API के मॉडल जैसा है, जहाँ मॉडल द्वारा रिक्वेस्ट बॉडी का स्ट्रक्चर परिभाषित किया जाता है| -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. 
AssemblyScript Mappings - `mapping.ts` यह किसी के द्वारा इस्तेमाल किये जा रहे डाटा सोर्स से डाटा को पुनः प्राप्त करने और स्टोर करने के लॉजिक को बताता है| डाटा अनुवादित होकर आपके द्वारा सूचीबद्ध स्कीमा के अनुसार स्टोर हो जाता है| -सब ग्राफ को बनाते वक़्त दो मुख्य कमांड हैं: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## सब ग्राफ मैनिफेस्ट की परिभाषा -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. 
In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - अरवीव डाटा सोर्स द्वारा एक वैकल्पिक source.owner फील्ड लाया गया, जो की एक आरवीव वॉलेट का मपब्लिक key है| @@ -99,7 +99,7 @@ dataSources: ## स्कीमा की परिभाषा -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## असेंबली स्क्रिप्ट मैप्पिंग्स @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## आरवीव सब-ग्राफ क्वेरी करना -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
## सब-ग्राफ के उदाहरण -सहायता के एक सब-ग्राफ का उदाहरण +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### क्या एक सब-ग्राफ आरवीव और बाकी चेन्स को इंडेक्स कर सकता है? +### Can a Subgraph index Arweave and other chains? -नहीं, एक सब-ग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है +No, a Subgraph can only support data sources from one chain/network. ### क्या मैं आरवीव पर स्टोर की फाइल्स को इंडेक्स कर सकता हूँ? वर्तमान में द ग्राफ आरवीव को केवल एक ब्लॉकचेन की तरह इंडेक्स करता है (उसके ब्लॉक्स और ट्रांसक्शन्स)| -### क्या मैं अपने सब-ग्राफ में Bundlr बंडल्स को पहचान सकता हूँ? +### Can I identify Bundlr bundles in my Subgraph? यह वर्तमान में सपोर्टेड नहीं है| @@ -188,7 +188,7 @@ The GraphQL endpoint for Arweave subgraphs is determined by the schema definitio ### वर्तमान एन्क्रिप्शन फॉर्मेट क्या है? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/hi/subgraphs/cookbook/enums.mdx b/website/src/pages/hi/subgraphs/cookbook/enums.mdx index 3c588eace670..5721d23638de 100644 --- a/website/src/pages/hi/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, या enumeration types, एक विशिष्ट डेटा प ### अपने Schema में Enums का उदाहरण -यदि आप एक subgraph बना रहे हैं जो एक मार्केटप्लेस पर टोकन के स्वामित्व इतिहास को ट्रैक करता है, तो प्रत्येक टोकन विभिन्न स्वामित्वों से गुजर सकता है, जैसे कि OriginalOwner, SecondOwner, और ThirdOwner। enums का उपयोग करके, आप इन विशिष्ट स्वामित्वों को परिभाषित कर सकते हैं, यह सुनिश्चित करते हुए कि केवल पूर्वनिर्धारित मान ही सौंपे जाएं। +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. 
आप अपनी स्कीमा में एन्सम्स (enums) को परिभाषित कर सकते हैं, और एक बार परिभाषित हो जाने के बाद, आप एन्सम के मानों की स्ट्रिंग प्रस्तुति का उपयोग करके एक एन्सम फ़ील्ड को एक entities पर सेट कर सकते हैं। @@ -65,10 +65,10 @@ Enums प्रकार सुरक्षा प्रदान करते > नोट: निम्नलिखित guide CryptoCoven NFT स्मार्ट कॉन्ट्रैक्ट का उपयोग करती है। -NFTs जहां ट्रेड होते हैं, उन विभिन्न मार्केटप्लेस के लिए enums को परिभाषित करने के लिए, अपने Subgraph स्कीमा में निम्नलिखित का उपयोग करें: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql -#मार्केटप्लेस के लिए Enum जो CryptoCoven कॉन्ट्रैक्ट के साथ इंटरएक्टेड हैं (संभवत: ट्रेड/मिंट) +#मार्केटप्लेस के लिए Enum जो CryptoCoven कॉन्ट्रैक्ट के साथ इंटरएक्टेड हैं (संभवत: ट्रेड/मिंट) enum Marketplace { OpenSeaV1 # जब CryptoCoven NFT को इस बाजार में व्यापार किया जाता है OpenSeaV2 # जब CryptoCoven NFT को OpenSeaV2 बाजार में व्यापार किया जाता है @@ -80,7 +80,7 @@ enum Marketplace { ## NFT Marketplaces के लिए Enums का उपयोग -एक बार परिभाषित होने पर, enums का उपयोग आपके subgraph में transactions या events को श्रेणीबद्ध करने के लिए किया जा सकता है। +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. उदाहरण के लिए, जब logging NFT बिक्री लॉग करते हैं, तो आप ट्रेड में शामिल मार्केटप्लेस को enum का उपयोग करके निर्दिष्ट कर सकते हैं। diff --git a/website/src/pages/hi/subgraphs/cookbook/grafting.mdx b/website/src/pages/hi/subgraphs/cookbook/grafting.mdx index c0703bcfb101..1e8fd239ca41 100644 --- a/website/src/pages/hi/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: एक कॉन्ट्रैक्ट बदलें और उसका इतिहास ग्राफ्टिंग के साथ रखें --- -इस गाइड में, आप सीखेंगे कि मौजूदा सबग्राफ को ग्राफ्ट करके नए सबग्राफ कैसे बनाएं और तैनात करें। +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## ग्राफ्टिंग क्या है? 
-ग्राफ्टिंग एक वर्तमान सब-ग्राफ के डाटा का दोबारा इस्तेमाल करता है और उसे बाद के ब्लॉक्स में इंडेक्स करना चालू कर देता है| यह विकास की प्रक्रिया में उपयोगी है क्यूंकि इसकी वजह से मैप्पिंग्स में छोटी-मोटी त्रुटियों से छुटकारा पाया जा सकता है या फिर एक मौजूदा सब-ग्राफ को विफल होने के बाद दोबारा चालू किया जा सकता है| साथ हीं, इसका इस्तेमाल ऐसे सब-ग्राफ में कोई खूबी जोड़ते वक़्त भी किया जा सकता है जिसमे शुरुआत से इंडेक्स करने में काफी लम्बा वक़्त लगता हो| +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -ग्राफ्टेड सबग्राफ एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस सबग्राफ के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य सबग्राफ स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस सबग्राफ के स्कीमा से विचलित हो सकता है: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - यह इकाई के प्रकारों को जोड़ या हटा सकता है| - यह इकाई प्रकारों में से गुणों को हटाता है| @@ -22,38 +22,38 @@ title: एक कॉन्ट्रैक्ट बदलें और उसक - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -इस ट्यूटोरियल में, हम एक बुनियादी उपयोग मामले को कवर करेंगे। हम एक मौजूदा कॉन्ट्रैक्ट को एक समान कॉन्ट्रैक्ट (नए पते के साथ, लेकिन वही कोड) से बदलेंगे। फिर, मौजूदा Subgraph को "बेस" Subgraph पर जोड़ेंगे, जो नए कॉन्ट्रैक्ट को ट्रैक करता है। +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). 
Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting एक शक्तिशाली विशेषता है जो आपको एक सबग्राफ़ को दूसरे पर "graft" करने की अनुमति देती है, जिससे मौजूदा सबग्राफ़ से नए संस्करण में ऐतिहासिक डेटा को प्रभावी ढंग से स्थानांतरित किया जा सके।The Graph Network से सबग्राफ़ को Subgraph Studioमें वापस graft करना संभव नहीं है। +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## एक मौजूदा सब-ग्राफ बनाना -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). 
To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## सब ग्राफ मैनिफेस्ट की परिभाषा -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## ग्राफ्टिंग मैनिफेस्ट की परिभाषा -ग्राफ्टिंग करने के लिए मूल सब-ग्राफ मैनिफेस्ट में 2 नई चीज़ें जोड़ने की आवश्यकता है: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). 
-- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## बेस सब-ग्राफ को तैनात करना -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. एक बार पूरा होने पर, सत्यापित करें की इंडेक्सिंग सही ढंग से हो गयी है| यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ The `base` and `block` values can be found by deploying two subgraphs: one for t } ``` -एक बार आपका सत्यापित सब-ग्राफ ढंग से इंडेक्स हो जाता है तो आप बिना किसी देरी के अपना सब-ग्राफ को ग्राफ्टिंग से अपडेट कर सकते हैं| +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. 
## ग्राफ्टिंग सब-ग्राफ तैनात करना ग्राफ्ट प्रतिस्तापित subgraph.yaml में एक नया कॉन्ट्रैक्ट एड्रेस होगा| यह तब हो सकता है जब आप अपना डैप अपडेट करें, कॉन्ट्रैक्ट को दोबारा तैनात करें, इत्यादि| -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. एक बार पूरा होने पर, सत्यापित करें की इंडेक्सिंग सही ढंग से हो गयी है| यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. 
If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ The `base` and `block` values can be found by deploying two subgraphs: one for t } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. 
## Additional Resources diff --git a/website/src/pages/hi/subgraphs/cookbook/near.mdx b/website/src/pages/hi/subgraphs/cookbook/near.mdx index 6aab3eeedbb4..a36e1ae971c6 100644 --- a/website/src/pages/hi/subgraphs/cookbook/near.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: NEAR पर सबग्राफ बनाना --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## NEAR क्या है? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## NEAR सबग्राफ क्या हैं? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR Subgraphs: - ब्लॉक हैंडलर्स: ये हर नए ब्लॉक पर चलते हैं - रसीद हैंडलर: किसी निर्दिष्ट खाते पर संदेश निष्पादित होने पर हर बार चलें @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## NEAR सबग्राफ बनाना -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> NEAR सबग्राफ का निर्माण वह सबग्राफ के निर्माण के समान है जो एथेरियम को अनुक्रमित करता है। +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -सबग्राफ परिभाषा के तीन पहलू हैं: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). 
**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -सब ग्राफ को बनाते वक़्त दो मुख्य कमांड हैं: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### सब ग्राफ मैनिफेस्ट की परिभाषा -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. 
On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR डेटा स्रोत दो प्रकार के हैंड ### स्कीमा की परिभाषा -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### असेंबली स्क्रिप्ट मैप्पिंग्स @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. 
A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## एक NEAR सबग्राफ की तैनाती -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". 
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -नोड कॉन्फ़िगरेशन इस बात पर निर्भर करेगा कि सबग्राफ को कहाँ तैनात किया जा रहा है। +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -एक बार आपका सबग्राफ तैनात हो जाने के बाद, इसे ग्राफ़ नोड द्वारा अनुक्रमित किया जाएगा। आप सबग्राफ को क्वेरी करके इसकी प्रगति की जांच कर सकते हैं: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ NEAR को अनुक्रमित करने वाले ग्रा ## NEAR सबग्राफ को क्वेरी करना -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
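For instance, if your schema defines a `Receipt` entity, a query could look like this (an illustrative sketch only; the entity and field names here are assumptions, since they depend entirely on your own schema definition):

```graphql
{
  receipts(first: 5, orderBy: blockHeight, orderDirection: desc) {
    id
    signerId
    blockHeight
  }
}
```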
## सब-ग्राफ के उदाहरण -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### बीटा कैसे काम करता है? -NEAR समर्थन बीटा में है, जिसका मतलब है कि एपीआई में बदलाव हो सकते हैं क्योंकि हम इंटीग्रेशन में सुधार पर काम करना जारी रखेंगे। कृपया near@thegraph.com पर ईमेल करें ताकि हम NEAR सबग्राफ बनाने में आपकी सहायता कर सकें, और आपको नवीनतम विकासों के बारे में अपडेट रख सकें! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -नहीं, एक सब-ग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है +No, a Subgraph can only support data sources from one chain/network. -### क्या सबग्राफ अधिक विशिष्ट ट्रिगर्स पर प्रतिक्रिया कर सकते हैं? +### Can Subgraphs react to more specific triggers? वर्तमान में, केवल अवरोधित करें और प्राप्त करें ट्रिगर समर्थित हैं। हम एक निर्दिष्ट खाते में फ़ंक्शन कॉल के लिए ट्रिगर्स की जांच कर रहे हैं। एक बार जब NEAR को नेटिव ईवेंट समर्थन मिल जाता है, तो हम ईवेंट ट्रिगर्स का समर्थन करने में भी रुचि रखते हैं। @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### क्या मैपिंग के दौरान NEAR सबग्राफ, NEAR खातों को व्यू कॉल कर सकते हैं? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? यह समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं। -### क्या मैं अपने NEAR सबग्राफ में डेटा स्रोत टेम्प्लेट का उपयोग कर सकता हूँ? +### Can I use data source templates in my NEAR Subgraph? 
यह वर्तमान में समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं। -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -NEAR सबग्राफ के लिए पेंडिंग कार्यक्षमता अभी तक समर्थित नहीं है। अंतरिम में, आप एक अलग "नामित" सबग्राफ के लिए एक नया संस्करण तैनात कर सकते हैं, और फिर जब वह चेन हेड के साथ सिंक हो जाता है, तो आप अपने प्राथमिक "नामित" सबग्राफ में फिर से तैनात कर सकते हैं, जो उसी अंतर्निहित डेप्लॉयमेंट आईडी का उपयोग करेगा, इसलिए मुख्य सबग्राफ तुरंत सिंक हो जाएगा। +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. 
## संदर्भ diff --git a/website/src/pages/hi/subgraphs/cookbook/polymarket.mdx b/website/src/pages/hi/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/hi/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
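As a quick sanity check before composing schema-specific queries, you can paste a `_meta` query into the playground; this field is exposed by every Subgraph and reports the current indexing status:

```graphql
{
  # Available on every Subgraph: latest indexed block
  _meta {
    block {
      number
    }
  }
}
```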
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
## Polymarket's GraphQL Schema
-The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
### Polymarket Subgraph Endpoint
@@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra
1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
2. Go to https://thegraph.com/studio/apikeys/ to create an API key
-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
100k queries per month are free, which is perfect for your side project!
@@ -143,6 +143,6 @@ axios(graphQLRequest)
### Additional resources
-For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
-To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
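To tie the endpoint, API key, and query together, the request object passed to `axios` can be sketched in plain JavaScript. The `<YOUR_API_KEY>` value is a placeholder for a key created in Subgraph Studio; the URL shape follows the Gateway pattern, and the Subgraph ID is the one from the Explorer link earlier in this guide:

```javascript
// Build an axios-style request object for the Polymarket Subgraph.
function buildGraphRequest(apiKey, subgraphId, query) {
  return {
    method: 'post',
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    headers: { 'Content-Type': 'application/json' },
    data: { query },
  }
}

// `_meta` is available on every Subgraph, so this works as a first smoke test.
const graphQLRequest = buildGraphRequest(
  '<YOUR_API_KEY>',
  'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp',
  '{ _meta { block { number } } }'
)

console.log(graphQLRequest.url)
```

Passing this object to `axios(graphQLRequest)` issues the query, as shown in the snippet above.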
diff --git a/website/src/pages/hi/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/hi/subgraphs/cookbook/secure-api-keys-nextjs.mdx index 4e690b3b4f7e..95004121ceb7 100644 --- a/website/src/pages/hi/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -2,11 +2,11 @@ title: कैसे सुरक्षित करें API Keys का उपयोग करके Next.js Server Components --- -## अवलोकन +## Overview -हम Next.js server components(https://nextjs.org/docs/app/building-your-application/rendering/server-components) का उपयोग करके अपने dapp के frontend में API key को exposure से सुरक्षित रख सकते हैं। API key की सुरक्षा को और बढ़ाने के लिए, हम Subgraph Studio में अपनी API key को कुछ subgraphs या domains तक सीमित कर सकते हैं(/cookbook/upgrading-a-subgraph/#securing-your-api-key) +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -इस cookbook में, हम यह समझेंगे कि कैसे एक Next.js server component बनाया जाए जो subgraph से query करता है, साथ ही API key को frontend से छिपाने का तरीका भी शामिल है। +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. 
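To preview where this is headed, a server component of roughly this shape keeps the key on the server. This is a sketch with assumed names: `GRAPH_API_KEY` as the environment variable, `<SUBGRAPH_ID>` as a placeholder, and a hypothetical `tokens` entity; the actual cookbook code may differ:

```tsx
// app/token-list.tsx - no 'use client' directive, so this renders only on the server
// and process.env.GRAPH_API_KEY is never shipped to the browser.
export default async function TokenList() {
  const res = await fetch(
    `https://gateway.thegraph.com/api/${process.env.GRAPH_API_KEY}/subgraphs/id/<SUBGRAPH_ID>`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: '{ tokens(first: 5) { id symbol } }' }),
      cache: 'no-store', // always fetch fresh data
    },
  )
  const { data } = await res.json()
  return (
    <ul>
      {data.tokens.map((t: { id: string; symbol: string }) => (
        <li key={t.id}>{t.symbol}</li>
      ))}
    </ul>
  )
}
```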
### चेतावनी @@ -18,13 +18,13 @@ title: कैसे सुरक्षित करें API Keys का उप एक मानक React एप्लिकेशन में, फ्रंटेंड कोड में शामिल API कुंजियाँ क्लाइंट-साइड पर उजागर हो सकती हैं, जिससे सुरक्षा का जोखिम बढ़ता है। जबकि.env फ़ाइलें सामान्यत: उपयोग की जाती हैं, ये कुंजियों की पूरी सुरक्षा नहीं करतीं क्योंकि React का कोड क्लाइंट साइड पर निष्पादित होता है, जो API कुंजी को हेडर में उजागर करता है। Next.js सर्वर घटक इस मुद्दे का समाधान करते हैं द्वारा संवेदनशील कार्यों को सर्वर-साइड पर संभालना। -### क्लाइंट-साइड रेंडरिंग का उपयोग करके एक subgraph को क्वेरी करना +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) ### Prerequisites -- [Subgraph Studio](https://thegraph.com/studio) से एक API कुंजी +- [Subgraph Studio](https://thegraph.com/studio) से एक API कुंजी - Next.js और React का बुनियादी ज्ञान - एक मौजूदा Next.js प्रोजेक्ट जो App Router (https://nextjs.org/docs/app). का उपयोग करता है। diff --git a/website/src/pages/hi/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/hi/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..c80ba576434e --- /dev/null +++ b/website/src/pages/hi/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. 
Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## पूर्व आवश्यकताएँ + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## शुरू करिये + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### विशिष्टताएँ + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. 
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. 
+- This feature unlocks scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/hi/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/hi/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..44f730ada687
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+### Source Subgraph
+
+The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`.
+
+> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml).
+
+### Dependent Subgraph
+
+The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities.
+ +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## शुरू करिये + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. 
Define Handlers in Dependent Subgraph
+
+Below is an example of defining handlers in the dependent Subgraph:
+
+```typescript
+export function handleInitialize(trigger: EntityTrigger<Initialize>): void {
+  if (trigger.operation === EntityOp.Create) {
+    let entity = trigger.data
+    let poolAddressParam = Address.fromBytes(entity.poolAddress)
+
+    // Update pool sqrt price and tick
+    let pool = Pool.load(poolAddressParam.toHexString()) as Pool
+    pool.sqrtPrice = entity.sqrtPriceX96
+    pool.tick = BigInt.fromI32(entity.tick)
+    pool.save()
+
+    // Update token prices
+    let token0 = Token.load(pool.token0) as Token
+    let token1 = Token.load(pool.token1) as Token
+
+    // Update ETH price in USD
+    let bundle = Bundle.load('1') as Bundle
+    bundle.ethPriceUSD = getEthPriceInUSD()
+    bundle.save()
+
+    updatePoolDayData(entity)
+    updatePoolHourData(entity)
+
+    // Update derived ETH price for tokens
+    token0.derivedETH = findEthPerToken(token0)
+    token1.derivedETH = findEthPerToken(token1)
+    token0.save()
+    token1.save()
+  }
+}
+```
+
+In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger<Initialize>`. The handler updates the pool and token entities based on data from the new `Initialize` entity.
+
+`EntityTrigger` has three fields:
+
+1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`.
+2. `type`: Indicates the entity type.
+3. `data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/hi/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/hi/subgraphs/cookbook/subgraph-debug-forking.mdx
index 0dc044459311..089b4cd545c5 100644
--- a/website/src/pages/hi/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/hi/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@ title: फोर्क्स का उपयोग करके त्वरित और आसान सबग्राफ डिबगिंग
-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that.
This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
## ठीक है वो क्या है?
-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.
## क्या?! कैसे?
-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## कृपया मुझे कुछ कोड दिखाओ! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. फिक्स का प्रयास करने का सामान्य तरीका है: 1. मैपिंग सोर्स में बदलाव करें, जो आपको लगता है कि समस्या का समाधान करेगा (जबकि मुझे पता है कि यह नहीं होगा)। -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. इसके सिंक-अप होने की प्रतीक्षा करें। 4. यदि यह फिर से टूट जाता है तो 1 पर वापस जाएँ, अन्यथा: हुर्रे! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. 
Here is how it looks:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
1. मैपिंग सोर्स में परिवर्तन करें, जिसके बारे में आपको लगता है कि इससे समस्या हल हो जाएगी.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. यदि यह फिर से ब्रेक जाता है, तो 1 पर वापस जाएँ, अन्यथा: हुर्रे!
अब, आपके 2 प्रश्न हो सकते हैं:
@@ -69,18 +69,18 @@ Using **subgraph forking** we can essentially eliminate this step. Here is how i
और मैं उत्तर देता हूं:
-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
2. फोर्किंग आसान है, पसीना बहाने की जरूरत नहीं:
```bash
$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
```
-Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
तो, यहाँ मैं क्या करता हूँ:
-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1.
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/hi/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/hi/subgraphs/cookbook/subgraph-uncrashable.mdx index ace90495aef8..f53c976a796b 100644 --- a/website/src/pages/hi/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: सुरक्षित सबग्राफ कोड जेनरेटर --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## सबग्राफ अनक्रैशेबल के साथ एकीकृत क्यों करें? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- फ्रेमवर्क में इकाई वैरिएबल के समूहों के लिए कस्टम, लेकिन सुरक्षित, सेटर फ़ंक्शन बनाने का एक तरीका (कॉन्फिग फ़ाइल के माध्यम से) भी शामिल है। इस तरह उपयोगकर्ता के लिए एक पुरानी ग्राफ़ इकाई को लोड/उपयोग करना असंभव है और फ़ंक्शन द्वारा आवश्यक वैरिएबल को सहेजना या सेट करना भूलना भी असंभव है।

-- चेतावनी लॉग को लॉग के रूप में रिकॉर्ड किया जाता है, जो यह इंगित करता है कि Subgraph लॉजिक में कहां उल्लंघन हो रहा है, ताकि समस्या को ठीक किया जा सके और डेटा की सटीकता सुनिश्चित हो सके।
+- Warning logs are recorded, indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy.
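To make the "uncrashable" pattern concrete, here is a hypothetical plain-TypeScript sketch — not the generator's actual output; `Gravatar`, `safeLoadGravatar`, and the default values are all illustrative — of the kind of null-safe loader such generated helper functions provide:

```typescript
// Hypothetical sketch of an "uncrashable" generated loader. The real tool
// emits AssemblyScript helpers from the GraphQL schema; this mirrors the
// pattern in plain TypeScript.

// Minimal stand-in for a generated entity class with configured defaults.
class Gravatar {
  constructor(
    public id: string,
    public displayName: string = "",
    public imageUrl: string = "",
  ) {}
}

// Simulated entity store; in a real Subgraph this is Graph Node's store.
const store = new Map<string, Gravatar>();

// Unsafe: returns null for a never-created entity, so a handler that
// forgets the null check crashes ("Gravatar not found!").
function loadGravatar(id: string): Gravatar | null {
  return store.get(id) ?? null;
}

// Safe: never returns null. A missing entity is initialized with the
// configured defaults and a warning is logged instead of crashing.
function safeLoadGravatar(id: string): Gravatar {
  let entity = store.get(id);
  if (entity === undefined) {
    entity = new Gravatar(id); // defaults applied here
    store.set(id, entity);
    console.warn(`Gravatar ${id} was not found; initialized with defaults`);
  }
  return entity;
}

const g = safeLoadGravatar("0x01");
console.log(g.displayName); // safe to read: defaults are in place
```

The property the generator guarantees is the one shown here: every entity access yields an initialized entity plus a warning log, never a crash.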
सबग्राफ अनक्रैशेबल को ग्राफ़ CLI codegen कमांड का उपयोग करके एक वैकल्पिक फ़्लैग के रूप में चलाया जा सकता है। @@ -26,4 +26,4 @@ title: सुरक्षित सबग्राफ कोड जेनरे graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/hi/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/hi/subgraphs/cookbook/transfer-to-the-graph.mdx index ae5023b492a4..49ccbb4a4476 100644 --- a/website/src/pages/hi/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/hi/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: The Graph पर ट्रांसफर करें +title: Transfer to The Graph --- -अपने subgraphs को किसी भी प्लेटफ़ॉर्म से The Graph's decentralized network(https://thegraph.com/networks/) में जल्दी से अपग्रेड करें। +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## The Graph पर स्विच करने के लाभ -- आपके ऐप्स द्वारा पहले से उपयोग किए जा रहे वही subgraph को बिना किसी डाउनटाइम के माइग्रेशन के लिए उपयोग करें। +- Use the same Subgraph that your apps already use with zero-downtime migration. - 100+ Indexers द्वारा समर्थित एक वैश्विक नेटवर्क से विश्वसनीयता बढ़ाएं। -- सबग्राफ के लिए 24/7 तेज़ और तुरंत समर्थन प्राप्त करें, एक ऑन-कॉल इंजीनियरिंग टीम के साथ। +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## अपने Subgraph को The Graph में 3 आसान कदमों में अपग्रेड करें @@ -21,9 +21,9 @@ title: The Graph पर ट्रांसफर करें ### सबग्राफ बनाएँ Subgraph Studio में - [Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें। -- "एक सबग्राफ बनाएं" पर क्लिक करें। सबग्राफ का नाम टाइटल केस में रखनाrecommended है: "सबग्राफ नाम चेन नाम"। +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Graph CLI स्थापित करें @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -इस कमांड का उपयोग करें और CLI का उपयोग करके Studio में एक subgraph बनाएँ: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. अपने Subgraph को Studio पर डिप्लॉय करें -यदि आपके पास अपना सोर्स कोड है, तो आप इसे आसानी से Studio में डिप्लॉय कर सकते हैं। यदि आपके पास यह नहीं है, तो यहां एक त्वरित तरीका है अपनी subgraph को डिप्लॉय करने का। +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. The Graph CLI में, निम्नलिखित कमांड चलाएँ: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> नोट: प्रत्येक subgraph का एक IPFS हैश (Deployment ID) होता है, जो इस प्रकार दिखता है: "Qmasdfad...". बस इसे deploy करने के लिए इस IPFS हैश का उपयोग करें। आपको एक संस्करण दर्ज करने के लिए कहा जाएगा (जैसे, v0.0.1)। +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
अपने Subgraph को The Graph Network पर प्रकाशित करें @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### अपने Subgraph को क्वेरी करें -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### उदाहरण -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -इस subgraph का क्वेरी URL है: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgrap ### सबग्राफ की स्थिति की निगरानी करें -एक बार जब आप अपग्रेड करते हैं, तो आप Subgraph Studio(https://thegraph.com/studio/) में अपने सबग्राफ्स को एक्सेस और प्रबंधित कर सकते हैं और The Graph Explorer(https://thegraph.com/networks/) में सभी सबग्राफ्स को एक्सप्लोर कर सकते हैं। +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph 
Explorer](https://thegraph.com/networks/).

### Additional Resources

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- आप अपने subgraph के प्रदर्शन को बेहतर बनाने के लिए इसे अनुकूलित और कस्टमाइज़ करने के सभी तरीकों का पता लगाने के लिए, creating a subgraph here(/developing/creating-a-subgraph/) पर और पढ़ें।
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/hi/subgraphs/developing/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/_meta-titles.json
index 01a91b09ed77..bb1dffb7294d 100644
--- a/website/src/pages/hi/subgraphs/developing/_meta-titles.json
+++ b/website/src/pages/hi/subgraphs/developing/_meta-titles.json
@@ -1,6 +1,6 @@
 {
-  "creating": "Creating",
-  "deploying": "Deploying",
-  "publishing": "Publishing",
-  "managing": "Managing"
+  "creating": "बनाना",
+  "deploying": "परिनियोजन",
+  "publishing": "प्रकाशन",
+  "managing": "प्रबंधन"
 }
diff --git a/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json
index 6106ac328dc1..553273beaf56 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json
+++ b/website/src/pages/hi/subgraphs/developing/creating/_meta-titles.json
@@ -1,3 +1,3 @@
 {
-  "graph-ts": "AssemblyScript API"
+  "graph-ts": "असेंबलीस्क्रिप्ट एपीआई"
 }
diff --git a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
index ac869ec36e5b..f6540dd317c2 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/advanced.mdx
@@ -1,23 +1,23 @@
---
-title: Advanced Subgraph 
Features
+title: उन्नत Subgraph विशेषताएँ
---

-## अवलोकन
+## Overview

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+अपने Subgraph के निर्माण को उन्नत करने के लिए उन्नत सबग्राफ सुविधाएँ जोड़ें और लागू करें।

-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+`specVersion` `0.0.4` से शुरू होकर, सबग्राफ सुविधाओं को स्पष्ट रूप से `features` अनुभाग में शीर्ष स्तर पर घोषित किया जाना चाहिए, जो उनके `camelCase` नाम का उपयोग करके किया जाता है, जैसा कि नीचे दी गई तालिका में सूचीबद्ध है:

-| Feature | Name |
-| ---------------------------------------------------- | ---------------- |
-| [Non-fatal errors](#non-fatal-errors) | `nonFatalErrors` |
-| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
+| विशेषता | नाम |
+| ------------------------------------------------------ | -------------------- |
+| [गैर-घातक त्रुटियाँ](#non-fatal-errors) | `nonFatalErrors` |
+| [पूर्ण-पाठ खोज](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |

-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+उदाहरण के लिए, यदि कोई सबग्राफ **Full-Text Search** और **Non-fatal Errors** सुविधाओं का उपयोग करता है, तो मैनिफेस्ट में `features` फ़ील्ड इस प्रकार होनी चाहिए:

```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
 - fullTextSearch
@@ -25,17 +25,17 @@ features:
dataSources: ...
```

-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. 
+> कोई फ़ीचर घोषित किए बिना उसका उपयोग करने से **मान्यकरण त्रुटि** होगी जब Subgraph डिप्लॉय किया जाएगा, लेकिन यदि कोई फ़ीचर घोषित किया जाता है लेकिन उपयोग नहीं किया जाता, तो कोई त्रुटि नहीं होगी। ## Timeseries और Aggregations -Prerequisites: +पूर्व आवश्यकताएँ: -- Subgraph specVersion must be ≥1.1.0. +- सबग्राफ का specVersion ≥1.1.0 होना चाहिए। -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries और aggregations आपके Subgraph को दैनिक औसत मूल्य, प्रति घंटे कुल ट्रांसफर और अन्य आँकड़े ट्रैक करने में सक्षम बनाते हैं। -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +यह सुविधा दो नए प्रकार की सबग्राफ entity पेश करती है। Timeseries entities समय मुहर (timestamps) के साथ डेटा पॉइंट्स रिकॉर्ड करती हैं। Aggregation entities पहले से घोषित गणनाएँ करती हैं, जो Timeseries डेटा पॉइंट्स पर प्रति घंटे या दैनिक आधार पर की जाती हैं, फिर परिणामों को आसान पहुंच के लिए GraphQL के माध्यम से संग्रहीत किया जाता है। ### उदाहरण स्कीमा @@ -53,33 +53,33 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### टाइमसीरीज़ और एग्रीगेशन को कैसे परिभाषित करें -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +टाइमसीरीज़ entities GraphQL स्कीमा में `@entity(timeseries: true)` के साथ परिभाषित की जाती हैं। हर टाइमसीरीज़ entities को अवश्य: -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. 
+- एक अद्वितीय आईडी हो जो int8 प्रकार की हो। +- टाइमस्टैम्प प्रकार का एक टाइमस्टैम्प रखें। +- गणना के लिए अभिग्रहण entities द्वारा उपयोग किए जाने वाले डेटा को शामिल करें। -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +इन टाइमसीरीज़ entities को नियमित ट्रिगर handler में सेव किया जा सकता है और ये एग्रीगेशन entities के लिए "कच्चे डेटा" के रूप में कार्य करती हैं। -Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +एग्रीगेशन entities को GraphQL schema में `@aggregation` के साथ परिभाषित किया जाता है। प्रत्येक aggregation entity उस साधन को परिभाषित करती है जिससे वह डेटा एकत्र करेगी (जो कि एक timeseries entity होनी चाहिए), अंतराल सेट करती है (जैसे, घंटे, दिन), और उस aggregation function को निर्दिष्ट करती है जिसका वह उपयोग करेगी (जैसे, sum, count, min, max, first, last)। -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +एग्रीगेशन entities निर्दिष्ट साधन के आधार पर आवश्यक अंतराल के अंत में स्वचालित रूप से गणना की जाती हैं। #### उपलब्ध Aggregation अंतराल -- `hour`: sets the timeseries period every hour, on the hour. -- `day`: sets the timeseries period every day, starting and ending at 00:00. +- `hour`: हर घंटे, ठीक घंटे पर, टाइमसीरीज़ अवधि सेट करता है। +- `day`: टाइमसीरीज़ अवधि को हर दिन सेट करता है, जो 00:00 पर शुरू और समाप्त होती है। #### उपलब्ध Aggregation फ़ंक्शन -- `sum`: Total of all values. -- `count`: Number of values. -- `min`: Minimum value. -- `max`: Maximum value. -- `first`: First value in the period. -- `last`: Last value in the period. 
+- `sum`: सभी मानों का कुल योग। +- `count`: मानों की संख्या। +- `min`: न्यूनतम मान। +- `max`: अधिकतम मान। +- `first`: अवधि में पहला मान। +- `last`: अवधि में अंतिम मान। #### उदाहरण Aggregations queries @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## गैर-घातक त्रुटियाँ -पहले से सिंक किए गए सबग्राफ पर इंडेक्सिंग त्रुटियां, डिफ़ॉल्ट रूप से, सबग्राफ को विफल कर देंगी और सिंक करना बंद कर देंगी। सबग्राफ को वैकल्पिक रूप से त्रुटियों की उपस्थिति में समन्वयन जारी रखने के लिए कॉन्फ़िगर किया जा सकता है, हैंडलर द्वारा किए गए परिवर्तनों को अनदेखा करके त्रुटि उत्पन्न हुई। यह सबग्राफ लेखकों को अपने सबग्राफ को ठीक करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध प्रश्नों को जारी रखा जाता है, हालांकि त्रुटि के कारण बग के कारण परिणाम असंगत हो सकते हैं। ध्यान दें कि कुछ त्रुटियाँ अभी भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि नियतात्मक होने के लिए जानी जानी चाहिए। +indexing त्रुटियाँ, जो पहले से सिंक हो चुके सबग्राफ पर होती हैं, डिफ़ॉल्ट रूप से सबग्राफ को विफल कर देंगी और सिंकिंग रोक देंगी। वैकल्पिक रूप से, सबग्राफ को इस तरह कॉन्फ़िगर किया जा सकता है कि वे त्रुटियों की उपस्थिति में भी सिंकिंग जारी रखें, उन परिवर्तनों को अनदेखा करके जो उस handler द्वारा किए गए थे जिसने त्रुटि उत्पन्न की। यह सबग्राफ लेखकों को अपने सबग्राफ को सही करने का समय देता है, जबकि नवीनतम ब्लॉक के विरुद्ध क्वेरीज़ दी जाती रहती हैं, हालांकि परिणाम उस बग के कारण असंगत हो सकते हैं जिसने त्रुटि उत्पन्न की थी। ध्यान दें कि कुछ त्रुटियाँ फिर भी हमेशा घातक होती हैं। गैर-घातक होने के लिए, त्रुटि को निर्धारक (deterministic) रूप से ज्ञात होना चाहिए। -> **ध्यान दें:** The Graph Network अभी तक गैर-घातक त्रुटियों non-fatal errors का समर्थन नहीं करता है, और डेवलपर्स को Studio के माध्यम से उस कार्यक्षमता का उपयोग करके सबग्राफ को नेटवर्क पर परिनियोजित (deploy) नहीं करना चाहिए। +> **नोट:**ग्राफ नेटवर्क अभी तक गैर-घातक त्रुटियों का समर्थन नहीं करता है, और डेवलपर्स को स्टूडियो के माध्यम से उस कार्यक्षमता का उपयोग करके सबग्राफ को नेटवर्क पर परिनियोजित नहीं करना चाहिए। 
-गैर-घातक त्रुटियों को सक्षम करने के लिए सबग्राफ मेनिफ़ेस्ट पर निम्न फ़ीचर फ़्लैग सेट करने की आवश्यकता होती है: +सबग्राफ मैनिफेस्ट पर निम्नलिखित फीचर फ्लैग सेट करके नॉन-फैटल एरर सक्षम किया जा सकता है ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -Queries को संभावित असंगतियों वाले डेटा को queries करने के लिए `subgraphError` आर्ग्यूमेंट के माध्यम से ऑप्ट-इन करना होगा। यह भी अनुशंसा की जाती है कि `_meta` को queries करें यह जांचने के लिए कि subgraph ने त्रुटियों को स्किप किया है या नहीं, जैसे इस उदाहरण में: +क्वेरी को `subgraphError` आर्ग्यूमेंट के माध्यम से संभावित असंगतियों वाले डेटा को क्वेरी करने के लिए भी ऑप्ट-इन करना आवश्यक है। साथ ही, यह अनुशंसित है कि `_meta` को क्वेरी किया जाए ताकि यह जांचा जा सके कि सबग्राफ ने किसी त्रुटि को छोड़ दिया है या नहीं, जैसा कि निम्न उदाहरण में दिखाया गया है: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -यदि subgraph में कोई त्रुटि आती है, तो वह queries डेटा और एक graphql त्रुटि के साथ `"indexing_error"` संदेश लौटाएगी, जैसा कि इस उदाहरण उत्तर में दिखाया गया है: +यदि सबग्राफ को कोई त्रुटि मिलती है, तो वह क्वेरी डेटा के साथ एक GraphQL त्रुटि वापस करेगी, जिसमें संदेश "indexing_error" होगा, जैसा कि इस उदाहरण प्रतिक्रिया में है: ```graphql "data": { @@ -145,11 +145,11 @@ _meta { ## IPFS/Arweave फ़ाइल डेटा स्रोत -फाइल डेटा स्रोत एक नई subgraph कार्यक्षमता है जो indexing के दौरान ऑफ-चेन डेटा तक एक मजबूत, विस्तारित तरीके से पहुँच प्रदान करती है। फाइल डेटा स्रोत IPFS और Arweave से फ़ाइलें फ़ेच करने का समर्थन करते हैं। +फाइल डेटा स्रोत एक नया सबग्राफ कार्यक्षमता है जो इंडेक्सिंग के दौरान ऑफ-चेन डेटा तक पहुँचने के लिए एक मजबूत और विस्तारित तरीका प्रदान करता है। फाइल डेटा स्रोत IPFS और Arweave से फ़ाइलें प्राप्त करने का समर्थन करता है। > यह ऑफ-चेन डेटा के नियतात्मक अनुक्रमण के साथ-साथ स्वैच्छिक HTTP-स्रोत डेटा के संभावित परिचय के लिए आधार भी देता है। -### अवलोकन +### Overview "लाइन" में हैंडलर कार्यान्वयन के दौरान फ़ाइलों को लाने के 
बजाय, यह टेम्पलेट्स को पेश करता है जिन्हें एक दिए गए फ़ाइल पहचानकर्ता के लिए नए डेटा स्रोतों के रूप में उत्पन्न किया जा सकता है। ये नए डेटा स्रोत फ़ाइलों को लाते हैं, यदि वे असफल होते हैं तो पुनः प्रयास करते हैं, और जब फ़ाइल मिलती है तो एक समर्पित हैंडलर चलाते हैं।

@@ -221,7 +221,7 @@ templates:
   - name: TokenMetadata
     kind: file/ipfs
     mapping:
-      apiVersion: 0.0.7
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       file: ./src/mapping.ts
       handler: handleMetadata
@@ -278,11 +278,11 @@ export function handleMetadata(content: Bytes): void {

अब आप चेन-आधारित हैंडलर के निष्पादन के दौरान फ़ाइल डेटा स्रोत बना सकते हैं:

- ऑटो-जनरेटेड `templates` से टेम्पलेट आयात करें।
-- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave
+- मानचित्रण के भीतर `TemplateName.create(cid: string)` को कॉल करें, जहाँ cid एक वैध कंटेंट पहचानकर्ता है IPFS या Arweave के लिए।

-For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`).
+IPFS के लिए, ग्राफ-नोड [v0 और v1 कंटेंट आइडेंटिफायर्स](https://docs.ipfs.tech/concepts/content-addressing/) का समर्थन करता है, और डायरेक्ट्रीज़ के साथ कंटेंट आइडेंटिफायर्स (जैसे `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`)।

-For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). 
+Arweave के लिए, संस्करण 0.33.0 के अनुसार, ग्राफ-नोड Arweave गेटवे से उनके [लेन-देन (transaction) ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) के आधार पर संग्रहित फ़ाइलों को प्राप्त कर सकता है ([उदाहरण फ़ाइल](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave उन लेन-देन (transaction) का समर्थन करता है जो Irys (पूर्व में Bundlr) के माध्यम से अपलोड की गई हैं, और ग्राफ-नोड [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing) के आधार पर भी फ़ाइलों को प्राप्त कर सकता है।

उदाहरण:

@@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b
 import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

 const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.

 export function handleTransfer(event: TransferEvent): void {
   let token = Token.load(event.params.tokenId.toString())
@@ -315,25 +315,25 @@ export function handleTransfer(event: TransferEvent): void {

यह एक नया file data source बनाएगा, जो Graph Node के configured किए गए IPFS या Arweave endpoint का सर्वेक्षण करेगा, यदि यह नहीं मिलता है तो पुनः प्रयास करेगा। जब file मिल जाती है, तो file data source handler execute किया जाएगा।

-This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
+यह उदाहरण पेरेंट `Token` entities और परिणामी `TokenMetadata` entities के बीच लुकअप के रूप में CID का उपयोग कर रहा है।

-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> पहले, इस बिंदु पर, एक सबग्राफ डेवलपर `ipfs.cat(CID)` को कॉल करता था ताकि फ़ाइल को प्राप्त किया जा सके।

बधाई हो, आप फ़ाइल डेटा स्रोतों का उपयोग कर रहे हैं!

-#### अपने उप-अनुच्छेदों को तैनात करना
+#### अपने सबग्राफ को परिनियोजित करना

-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+आप अब अपने सबग्राफ को किसी भी ग्राफ-नोड >=v0.30.0-rc.0 पर `build` और `deploy` कर सकते हैं।

#### परिसीमन

-फ़ाइल डेटा स्रोत हैंडलर और संस्थाएँ अन्य सबग्राफ संस्थाओं से अलग हैं, यह सुनिश्चित करते हुए कि वे निष्पादित होने पर नियतात्मक हैं, और श्रृंखला-आधारित डेटा स्रोतों का कोई संदूषण सुनिश्चित नहीं करते हैं। विस्तार से:
+फ़ाइल डेटा स्रोत handlers और entities अन्य सबग्राफ entities से अलग होते हैं, जिससे यह सुनिश्चित होता है कि वे निष्पादन के समय निर्धारक (deterministic) बने रहें और चेन-आधारित डेटा स्रोतों में कोई मिलावट न हो। विशेष रूप से:

- फ़ाइल डेटा स्रोतों द्वारा बनाई गई इकाइयाँ अपरिवर्तनीय हैं, और इन्हें अद्यतन नहीं किया जा सकता है
- फ़ाइल डेटा स्रोत हैंडलर अन्य फ़ाइल डेटा स्रोतों से संस्थाओं तक नहीं पहुँच सकते
- फ़ाइल डेटा स्रोतों से जुड़ी संस्थाओं को चेन-आधारित हैंडलर द्वारा एक्सेस नहीं किया जा सकता है

-> हालांकि यह बाधा अधिकांश उपयोग-मामलों के लिए समस्याग्रस्त नहीं होनी चाहिए, यह कुछ के लिए जटिलता का परिचय दे सकती है। यदि आपको अपने फ़ाइल-आधारित डेटा को सबग्राफ में मॉडलिंग करने में समस्या आ रही है, तो कृपया डिस्कॉर्ड के माध्यम से संपर्क करें!
+> यह बाधा अधिकांश उपयोग के मामलों में समस्या उत्पन्न नहीं करेगी, लेकिन कुछ के लिए जटिलता बढ़ा सकती है। यदि आपको अपने फ़ाइल-आधारित डेटा को सबग्राफ में मॉडल करने में समस्या हो रही है, तो कृपया Discord के माध्यम से संपर्क करें! 
इसके अतिरिक्त, फ़ाइल डेटा स्रोत से डेटा स्रोत बनाना संभव नहीं है, चाहे वह ऑनचेन डेटा स्रोत हो या अन्य फ़ाइल डेटा स्रोत। भविष्य में यह प्रतिबंध हटाया जा सकता है।

@@ -341,41 +341,42 @@ You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.

यदि आप NFT मेटाडेटा को संबंधित टोकन से लिंक कर रहे हैं, तो टोकन इकाई से मेटाडेटा इकाई को संदर्भित करने के लिए मेटाडेटा के IPFS हैश का उपयोग करें। एक आईडी के रूप में IPFS हैश का उपयोग करके मेटाडेटा इकाई को सहेजें।

-You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
+आप [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) का उपयोग कर सकते हैं जब आप File Data साधन बना रहे हों ताकि अतिरिक्त जानकारी पास की जा सके जो File Data साधन handler में उपलब्ध होगी।

-If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.
+यदि आपके पास ऐसी entities हैं जो कई बार रिफ्रेश होती हैं, तो IPFS हैश और entity ID का उपयोग करके unique file-based entities बनाएं, और उन्हें chain-based entity में एक derived field का उपयोग करके संदर्भित करें।

> हम ऊपर दिए गए सुझाव को बेहतर बनाने के लिए काम कर रहे हैं, इसलिए क्वेरी केवल "नवीनतम" संस्करण लौटाती हैं

#### ज्ञात समस्याएँ

-File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. 
+फ़ाइल डेटा साधन को वर्तमान में ABIs की आवश्यकता होती है, हालांकि ABIs का उपयोग नहीं किया जाता है ([issue](https://github.com/graphprotocol/graph-cli/issues/961))। इसका समाधान यह है कि कोई भी ABI जोड़ें।

-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file.
+फ़ाइल डेटा साधन के लिए handler उन फ़ाइलों में नहीं हो सकते जो `eth_call` contract बाइंडिंग्स को आयात करती हैं, जिससे "unknown import: `ethereum::ethereum.call` has not been defined" त्रुटि होती है ([issue](https://github.com/graphprotocol/graph-node/issues/4309))। वर्कअराउंड के रूप में फ़ाइल डेटा साधन handler को एक समर्पित फ़ाइल में बनाना चाहिए।

#### उदाहरण

-[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
+[Crypto Coven सबग्राफ migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)

#### संदर्भ

-[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)
+[GIP File Data साधन](https://forum.thegraph.com/t/gip-file-data-sources/2721)

## सूचीकृत तर्क फ़िल्टर / विषय फ़िल्टर

-> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
+> **आवश्यकता**: [SpecVersion](#specversion-releases) >= `1.2.0`

-विषय फ़िल्टर, जिन्हें इंडेक्स किए गए तर्क फ़िल्टर भी कहा जाता है, एक शक्तिशाली विशेषता है जो उपयोगकर्ताओं को उनके इंडेक्स किए गए तर्कों के मानों के आधार पर ब्लॉकचेन घटनाओं को सटीक रूप से फ़िल्टर करने की अनुमति देती है।
+Topic filters, जिन्हें indexed argument filters के रूप में भी जाना जाता है, सबग्राफ में एक शक्तिशाली विशेषता हैं जो उपयोगकर्ताओं को उनके indexed arguments के मूल्यों के आधार पर ब्लॉकचेन घटनाओं को सटीक रूप से फ़िल्टर करने की अनुमति देती हैं।

-- ये फ़िल्टर ब्लॉकचेन पर घटनाओं की विशाल धारा से रुचि की विशिष्ट घटनाओं को अलग करने में मदद करते हैं, 
जिससे सबग्राफ़ केवल प्रासंगिक डेटा पर ध्यान केंद्रित करके अधिक कुशलता से कार्य कर सके। +- ये फ़िल्टर ब्लॉकचेन पर बड़ी संख्या में घटनाओं की धाराओं से विशिष्ट घटनाओं को अलग करने में मदद करते हैं, जिससे सबग्राफ अधिक कुशलता से काम कर सकते हैं और केवल प्रासंगिक डेटा पर ध्यान केंद्रित कर सकते हैं। -- यह व्यक्तिगत subgraphs बनाने के लिए उपयोगी है जो विशेष पते और विभिन्न स्मार्ट कॉन्ट्रैक्ट्स के साथ उनके इंटरैक्शन को ट्रैक करते हैं ब्लॉकचेन पर। +- यह विशिष्ट पतों और उनके विभिन्न स्मार्ट contract के साथ इंटरैक्शन को ट्रैक करने वाले व्यक्तिगत सबग्राफ बनाने के लिए उपयोगी है। ### शीर्षक फ़िल्टर कैसे काम करते हैं -जब एक स्मार्ट कॉन्ट्रैक्ट एक इवेंट को उत्पन्न करता है, तो कोई भी तर्क जो 'indexed' के रूप में चिह्नित किया गया है, एक 'subgraph' की मैनिफेस्ट में फ़िल्टर के रूप में उपयोग किया जा सकता है। यह 'subgraph' को इन 'indexed' तर्कों से मेल खाने वाले इवेंट्स के लिए चयनात्मक रूप से सुनने की अनुमति देता है। +जब कोई स्मार्ट contract कोई इवेंट एमिट करता है, तो कोई भी आर्ग्यूमेंट जो indexed के रूप में चिह्नित होता है, उसे एक सबग्राफ के मैनिफेस्ट में फ़िल्टर के रूप में उपयोग किया जा सकता है। यह सबग्राफ को उन इवेंट्स को चयनित रूप से सुनने की अनुमति देता है जो इन indexed आर्ग्यूमेंट्स से मेल खाते हैं। -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- इस आयोजन का पहला इंडेक्स किया गया तर्क `topic1`, से संबंधित है, दूसरा `topic2`, से और इसी तरह, `topic3`, तक, क्योंकि Ethereum Virtual Machine (EVM) प्रत्येक आयोजन में तीन तक इंडेक्स किए गए तर्कों की अनुमति देता है ```solidity // SPDX-License-Identifier: MIT @@ -395,13 +396,13 @@ contract Token { इस उदाहरण में: -- The `Transfer` event is used to log transactions of tokens between addresses. -- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. 
-- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called.
+- `Transfer` आयोजन का उपयोग पतों के बीच टोकन लेन-देन को लॉग करने के लिए किया जाता है।
+- `from` और `to` पैरामीटर सूचकांकित होते हैं, जिससे आयोजन लिस्नर्स को विशिष्ट पतों से जुड़ी ट्रांसफर को फ़िल्टर और मॉनिटर करने की अनुमति मिलती है।
+- `transfer` फ़ंक्शन टोकन ट्रांसफर क्रिया का एक साधारण प्रतिनिधित्व है, जो हर बार कॉल किए जाने पर Transfer आयोजन को उत्पन्न करता है।
#### सबस्पष्ट में कॉन्फ़िगरेशन
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+टॉपिक फ़िल्टर्स को सीधे इवेंट हैंडलर कॉन्फ़िगरेशन के भीतर सबग्राफ मैनिफेस्ट में परिभाषित किया जाता है। इन्हें इस प्रकार कॉन्फ़िगर किया जाता है:
```yaml
eventHandlers:
@@ -414,7 +415,7 @@ eventHandlers:
इस सेटअप में:
-- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third.
+- `topic1` इवेंट के पहले इंडेक्स किए गए तर्क के अनुरूप है, `topic2` दूसरे के अनुरूप है, और `topic3` तीसरे के अनुरूप है।
- प्रत्येक विषय में एक या अधिक मान हो सकते हैं, और एक घटना केवल तभी प्रोसेस की जाती है जब वह प्रत्येक निर्दिष्ट विषय में से किसी एक मान से मेल खाती है।
#### फ़िल्टर लॉजिक
@@ -434,9 +435,9 @@ eventHandlers:
इस कॉन्फ़िगरेशन में:
-- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- `topic1` को `Transfer` आयोजन को फ़िल्टर करने के लिए कॉन्फ़िगर किया गया है जहाँ `0xAddressA` भेजने वाला है।
+- `topic2` को इस प्रकार से कॉन्फ़िगर किया गया है कि यह `Transfer` आयोजन को फिल्टर करता है जहां `0xAddressB` रिसीवर है।
+- सबग्राफ केवल उन्हीं लेन-देन को इंडेक्स करेगा जो सीधे `0xAddressA` से `0xAddressB` तक होते हैं।
#### उदाहरण 2: दो या अधिक 'पते' के बीच किसी भी दिशा में लेन-देन को ट्रैक करना
@@ -450,31 +451,31 @@ eventHandlers:
इस कॉन्फ़िगरेशन में:
-- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- Subgraph उन कई पतों के बीच होने वाले लेनदेन को दोनों दिशाओं में सूचीबद्ध करेगा, जिससे सभी पतों के बीच इंटरैक्शन की व्यापक निगरानी संभव हो सकेगी।
+- `topic1` को `Transfer` आयोजन को फिल्टर करने के लिए कॉन्फ़िगर किया गया है जहाँ `0xAddressA`, `0xAddressB`, `0xAddressC` प्रेषक हैं।
+- `topic2` को `Transfer` आयोजन को फिल्टर करने के लिए कॉन्फ़िगर किया गया है, जहाँ `0xAddressB` और `0xAddressC` रिसीवर हैं।
+- सबग्राफ उन सभी पतों के बीच दोनों दिशाओं में होने वाले लेन-देन को अनुक्रमित करेगा, जिससे सभी पतों के बीच होने वाली अंतःक्रियाओं की व्यापक निगरानी संभव हो सकेगी।
## घोषित eth_call
> नोट: यह एक प्रयोगात्मक फीचर है जो अभी तक स्थिर Graph Node रिलीज़ में उपलब्ध नहीं है। आप इसे केवल Subgraph Studio या अपने स्वयं-होस्टेड नोड में ही उपयोग कर सकते हैं।
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` एक मूल्यवान सबग्राफ विशेषता है जो `eth_calls` को पहले से निष्पादित करने की अनुमति देती है, जिससे `graph-node` उन्हें समानांतर रूप से निष्पादित कर सकता है।
यह फ़ीचर निम्नलिखित कार्य करता है:
- इथेरियम ब्लॉकचेन से डेटा प्राप्त करने के प्रदर्शन में महत्वपूर्ण सुधार करता है, जिससे कई कॉल के लिए कुल समय कम होता है और सबग्राफ की समग्र दक्षता का अनुकूलन होता है।
+- यह Ethereum ब्लॉकचेन से डेटा प्राप्त करने के प्रदर्शन में महत्वपूर्ण सुधार करता है, जिससे कई कॉल के लिए कुल समय कम हो जाता है और सबग्राफ की समग्र दक्षता में वृद्धि होती है।
- यह तेजी से डेटा फ़ेचिंग की अनुमति देता है, जिससे तेजी से क्वेरी प्रतिक्रियाएँ और बेहतर उपयोगकर्ता अनुभव मिलता है।
- कई Ethereum कॉल्स से डेटा को एकत्रित करने की आवश्यकता वाली अनुप्रयोगों के लिए प्रतीक्षा समय को कम करता है, जिससे डेटा पुनर्प्राप्ति प्रक्रिया अधिक प्रभावी हो जाती है।
### मुख्य अवधारणाएँ
-- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially.
+- घोषणात्मक `eth_calls`: एथेरियम कॉल्स जिन्हें अनुक्रमिक रूप से निष्पादित होने के बजाय समानांतर में निष्पादित किया जाना परिभाषित किया गया है।
- समानांतर निष्पादन: एक कॉल समाप्त होने की प्रतीक्षा करने के बजाय, कई कॉल एक साथ आरंभ किए जा सकते हैं।
- समय दक्षता: सभी कॉल के लिए कुल समय व्यक्तिगत कॉल के समय के योग (अनुक्रमिक) से बदलकर सबसे लंबे कॉल के द्वारा लिए गए समय (समानांतर) में बदल जाता है।
-#### Scenario without Declarative `eth_calls`
+#### घोषणात्मक `eth_calls` के बिना परिदृश्य
-आपके पास एक subgraph है जिसे एक उपयोगकर्ता के लेनदेन, बैलेंस और टोकन होल्डिंग्स के बारे में डेटा प्राप्त करने के लिए तीन Ethereum कॉल करने की आवश्यकता है।
+मान लीजिए कि आपके पास एक सबग्राफ है जिसे किसी उपयोगकर्ता के लेन-देन, बैलेंस और टोकन होल्डिंग्स के बारे में डेटा लाने के लिए तीन Ethereum कॉल करने की आवश्यकता है।
परंपरागत रूप से, ये कॉल क्रमिक रूप से की जा सकती हैं:
@@ -484,7 +485,7 @@
कुल समय लिया गया = 3 + 2 + 4 = 9 सेकंड
-#### Scenario with Declarative `eth_calls`
+#### घोषणात्मक `eth_calls` के साथ परिदृश्य
इस फीचर के साथ, आप इन कॉल्स को समानांतर में निष्पादित करने के लिए घोषित कर सकते हैं:
@@ -498,15 +499,15 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls`
#### कैसे कार्य करता है
-1. In the subgraph manifest, आप Ethereum कॉल्स को इस तरह घोषित करते हैं कि ये समानांतर में निष्पादित किए जा सकें।
+1. सबग्राफ manifest में, आप Ethereum कॉल्स को इस तरह घोषित करते हैं जिससे संकेत मिलता है कि वे समानांतर रूप से निष्पादित किए जा सकते हैं।
2. पैरलेल निष्पादन इंजन: The Graph Node का निष्पादन इंजन इन घोषणाओं को पहचानता है और कॉल को समानांतर में चलाता है।
-3. परिणाम संग्रहण: जब सभी कॉल समाप्त हो जाते हैं, तो परिणामों को एकत्रित किया जाता है और आगे की प्रक्रिया के लिए उपयोग किया जाता है।
+3. परिणाम एकत्रीकरण: सभी कॉल पूरे होने के बाद, परिणाम एकत्र किए जाते हैं और आगे की प्रोसेसिंग के लिए सबग्राफ द्वारा उपयोग किए जाते हैं।
#### उदाहरण कॉन्फ़िगरेशन Subgraph मैनिफेस्ट में
-Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
+निर्धारित `eth_calls` अंतर्निहित आयोजन के `event.address` के साथ-साथ सभी `event.params` तक पहुँच सकते हैं।
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` का उपयोग करते हुए `event.address`:
```yaml
eventHandlers:
@@ -519,38 +520,39 @@ calls:
उदाहरण उपरोक्त के लिए विवरण:
-- `global0X128` is the declared `eth_call`.
-- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors.
-- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
-- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
+- `global0X128` घोषित `eth_call` है।
+- यह टेक्स्ट (`global0X128`) उस `eth_call` के लिए लेबल है जिसे त्रुटियों को log करते समय उपयोग किया जाता है।
+- यह पाठ (`Pool[event.address].feeGrowthGlobal0X128()`) वह वास्तविक `eth_call` है जो निष्पादित किया जाएगा, जो `Contract[address].function(arguments)` के रूप में है।
+- `address` और `arguments` को उन वेरिएबल्स से बदला जा सकता है जो handler के निष्पादन के समय उपलब्ध होंगे।
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` का उपयोग करते हुए `event.params`
```yaml
calls:
- ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+
```
### मौजूदा सबग्राफ पर ग्राफ्टिंग
-> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
+> **नोट**: प्रारंभिक रूप से The Graph Network में अपग्रेड करते समय graft का उपयोग करने की अनुशंसा नहीं की जाती है। अधिक जानें [यहाँ](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network)।
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+जब कोई सबग्राफ पहली बार डिप्लॉय किया जाता है, तो यह संबंधित चेन के जेनेसिस ब्लॉक (या प्रत्येक डेटा स्रोत के साथ परिभाषित `startBlock`) से इवेंट्स को इंडेक्स करना शुरू करता है। कुछ परिस्थितियों में, मौजूदा सबग्राफ से डेटा को पुन: उपयोग करना और किसी बाद के ब्लॉक से इंडेक्सिंग शुरू करना फायदेमंद होता है। इस इंडेक्सिंग मोड को _Grafting_ कहा जाता है। उदाहरण के लिए, विकास के दौरान, यह मैपिंग में छोटे एरर्स को जल्दी से पार करने या किसी मौजूदा सबग्राफ को फिर से चालू करने के लिए उपयोगी होता है, यदि वह फेल हो गया हो।
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+एक सबग्राफ को एक बेस सबग्राफ पर graft किया जाता है जब `subgraph.yaml` में सबग्राफ manifest के शीर्ष स्तर पर एक `graft` ब्लॉक होता है:
```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 7345624 # Block number
```
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+जब कोई सबग्राफ, जिसकी मैनिफेस्ट में `graft` ब्लॉक शामिल होता है, डिप्लॉय किया जाता है, तो ग्राफ-नोड दिए गए `block` तक base सबग्राफ के डेटा को कॉपी करेगा और फिर उस ब्लॉक से नए सबग्राफ को इंडेक्स करना जारी रखेगा। base सबग्राफ को लक्षित ग्राफ-नोड इंस्टेंस पर मौजूद होना चाहिए और कम से कम दिए गए ब्लॉक तक इंडेक्स किया जाना चाहिए। इस प्रतिबंध के कारण, ग्राफ्टिंग का उपयोग केवल डेवलपमेंट के दौरान या किसी आपात स्थिति में एक समान गैर-ग्राफ्टेड सबग्राफ को जल्दी से तैयार करने के लिए किया जाना चाहिए।
-क्योंकि आधार डेटा को अनुक्रमित करने के बजाय प्रतियों को ग्राफ्ट करना, स्क्रैच से अनुक्रमणित करने की तुलना में सबग्राफ को वांछित ब्लॉक में प्राप्त करना बहुत तेज है, हालांकि बहुत बड़े सबग्राफ के लिए प्रारंभिक डेटा कॉपी में अभी भी कई घंटे लग सकते हैं। जबकि ग्राफ्टेड सबग्राफ को इनिशियलाइज़ किया जा रहा है, ग्राफ़ नोड उन एंटिटी प्रकारों के बारे में जानकारी लॉग करेगा जो पहले ही कॉपी किए जा चुके हैं।
+ग्राफ्टिंग मूल डेटा को इंडेक्स करने के बजाय उसकी प्रतिलिपि बनाता है, इसलिए यह शुरू से इंडेक्सिंग करने की तुलना में सबग्राफ को वांछित ब्लॉक तक पहुँचाने में कहीं अधिक तेज़ होता है, हालाँकि बहुत बड़े सबग्राफ के लिए प्रारंभिक डेटा कॉपी करने में अभी भी कई घंटे लग सकते हैं। जब तक ग्राफ्ट किया गया सबग्राफ प्रारंभिक रूप से स्थापित हो रहा होता है, तब तक ग्राफ नोड उन entity प्रकारों के बारे में जानकारी लॉग करेगा जिन्हें पहले ही कॉपी किया जा चुका है।
-ग्राफ्टेड सबग्राफ एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस सबग्राफ के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य सबग्राफ स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस सबग्राफ के स्कीमा से विचलित हो सकता है:
+ग्राफ्टेड Subgraph एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस Subgraph के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य Subgraph स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस Subgraph के स्कीमा से विचलित हो सकता है:
- यह इकाई के प्रकारों को जोड़ या हटा सकता है|
- यह इकाई प्रकारों में से गुणों को हटाता है|
@@ -560,4 +562,4 @@
- यह इंटरफेस जोड़ता या हटाता है|
- यह कि, किन इकाई प्रकारों के लिए इंटरफ़ेस लागू होगा, इसे बदल देता है|
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` को सबग्राफ मैनिफेस्ट में `features` के अंतर्गत घोषित किया जाना आवश्यक है।
diff --git a/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 38441c623127..beb33c359091 100644
--- a/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/hi/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -2,7 +2,7 @@
title: Writing AssemblyScript Mappings
---
-## अवलोकन
+## Overview
The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax.
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too ## कोड जनरेशन -स्मार्ट कॉन्ट्रैक्ट्स, इवेंट्स और एंटिटीज के साथ काम करना आसान और टाइप-सेफ बनाने के लिए, ग्राफ सीएलआई सबग्राफ के ग्राफक्यूएल स्कीमा और डेटा स्रोतों में शामिल कॉन्ट्रैक्ट एबीआई से असेंबलीस्क्रिप्ट प्रकार उत्पन्न कर सकता है। +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. इसके साथ किया जाता है @@ -80,7 +80,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..9f47691b06a1 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## उपयोग For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). 
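The load-or-create-then-save shape that these mapping pages describe can be sketched in plain TypeScript. This is an illustration only: real mappings are written in AssemblyScript, and the `Gravatar` class and in-memory `store` below are hypothetical stand-ins for the entity classes that `graph codegen` emits and for the Graph Node store.

```typescript
// Sketch only: in a real subgraph, entity classes come from `generated/schema`
// and handlers receive typed event objects from the generated contract bindings.
interface GravatarData {
  id: string
  displayName: string
}

const store = new Map<string, GravatarData>() // stands in for the Graph Node store

class Gravatar implements GravatarData {
  constructor(public id: string, public displayName: string = '') {}

  // Return the stored entity for this ID, or null when it does not exist yet.
  static load(id: string): Gravatar | null {
    const found = store.get(id)
    return found ? new Gravatar(found.id, found.displayName) : null
  }

  // Persist the current field values back to the store.
  save(): void {
    store.set(this.id, { id: this.id, displayName: this.displayName })
  }
}

// A handler takes exactly one event parameter and upserts an entity keyed by a stable ID.
function handleNewGravatar(event: { id: string; displayName: string }): void {
  let gravatar = Gravatar.load(event.id)
  if (gravatar == null) {
    gravatar = new Gravatar(event.id)
  }
  gravatar.displayName = event.displayName
  gravatar.save()
}
```

In an actual mapping the event type and entity class would be imported from the `generated/` directory instead of being declared inline.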
diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json index 7580246e94fd..efb08ac104b3 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { "README": "Introduction", - "api": "API Reference", + "api": "एपीआई संदर्भ", "common-issues": "Common Issues" } diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx index e967ffa1b80b..e1cb224c81ce 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -यह पृष्ठ दस्तावेज करता है कि Subgraph मैपिंग लिखते समय किन अंतर्निहित एपीआई का उपयोग किया जा सकता है। बॉक्स से बाहर दो प्रकार के एपीआई उपलब्ध हैं: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The Graph TypeScript लाइब्रेरी (https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (graph-ts) -- `graph codegen` द्वारा subgraph files से उत्पन्न code +- Code generated from Subgraph files by `graph codegen` आप अन्य पुस्तकालयों को भी निर्भरताओं के रूप में जोड़ सकते हैं, बशर्ते कि वे [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) के साथ संगत हों। @@ -15,30 +15,30 @@ title: AssemblyScript API ## API Reference -The `@graphprotocol/graph-ts` library provides the following APIs: +`@graphprotocol/graph-ts` library निम्नलिखित API प्रदान करती है: - An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. - A `store` API to load and save entities from and to the Graph Node store. - A `log` API to log messages to the Graph Node output and Graph Explorer. -- An `ipfs` API to load files from IPFS. -- A `json` API to parse JSON data. -- A `crypto` API to use cryptographic functions. +- IPFS से files load करने के लिए एक `ipfs` API। +- JSON data को parse करने के लिए एक `json` API। +- Cryptographic functions का उपयोग करने के लिए एक `crypto` API। - एथेरियम, JSON, ग्राफक्यूएल और असेंबलीस्क्रिप्ट जैसे विभिन्न प्रकार की प्रणालियों के बीच अनुवाद करने के लिए निम्न-स्तरीय आदिम। ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### संस्थाओं का निर्माण @@ -280,10 +280,10 @@ if (transfer == null) { As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. 
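The roundtrip-saving idea behind `loadInBlock` can be sketched in plain TypeScript. The `database`, `blockCache`, and `dbRoundtrips` names below are hypothetical stand-ins, not graph-ts APIs; they only model why a lookup that stays inside the current block's writes is cheaper than one that must consult the database.

```typescript
// Sketch only: real entity classes expose load()/loadInBlock() in AssemblyScript.
type Transfer = { id: string; amount: number }

const database = new Map<string, Transfer>() // entities persisted in earlier blocks
const blockCache = new Map<string, Transfer>() // entities written in the current block
let dbRoundtrips = 0 // counts simulated database lookups

function save(t: Transfer): void {
  blockCache.set(t.id, t)
  database.set(t.id, t)
}

// load(): always costs a database roundtrip, even for a miss.
function load(id: string): Transfer | null {
  dbRoundtrips++
  return database.get(id) ?? null
}

// loadInBlock(): only consults what this block already wrote, with no roundtrip.
function loadInBlock(id: string): Transfer | null {
  return blockCache.get(id) ?? null
}

save({ id: '0xabc', amount: 5 })
loadInBlock('0xabc') // hit; dbRoundtrips is still 0 here
load('0xabc') // hit, but costs one roundtrip
```

If the author knows the entity was created in the same block, the cheap lookup answers both the hit and the miss case without touching the database.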
+Store API उन entities को पुनः प्राप्त करने की सुविधा प्रदान करता है जो वर्तमान ब्लॉक में बनाई गई थीं या अपडेट की गई थीं। इसका एक सामान्य परिदृश्य यह है कि एक हैंडलर किसी ऑनचेन इवेंट से एक ट्रांज़ेक्शन बनाता है, और बाद में कोई अन्य हैंडलर इस ट्रांज़ेक्शन तक पहुंचना चाहता है, यदि यह मौजूद है।
-- यदि लेन-देन मौजूद नहीं है, तो subgraph को केवल यह पता लगाने के लिए डेटाबेस में जाना होगा कि Entity मौजूद नहीं है। यदि subgraph लेखक पहले से जानता है कि Entity उसी ब्लॉक में बनाई जानी चाहिए थी, तो `loadInBlock` का उपयोग इस डेटाबेस राउंडट्रिप से बचाता है।
-- कुछ subgraphs के लिए, ये छूटे हुए लुकअप्स indexing समय में महत्वपूर्ण योगदान दे सकते हैं।
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
let id = event.transaction.hash // or however the ID is constructed
@@ -329,7 +329,7 @@ let tokens = holder.tokens.load()
किसी मौजूदा निकाय को अद्यतन करने के दो तरीके हैं:
1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store.
-2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it.
+2. बस entity बनाएं, उदाहरण के लिए `new Transfer(id)` के साथ; entity पर properties set करें, फिर इसे store में `.save()` करें। यदि entity पहले से मौजूद है, तो परिवर्तन उसमें merge कर दिए जाते हैं।
ज्यादातर मामलों में गुण बदलना सीधे आगे है, उत्पन्न संपत्ति सेटर्स के लिए धन्यवाद:
@@ -380,11 +380,11 @@ store.remove('Transfer', id)
#### एथेरियम प्रकार के लिए समर्थन
-As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
-With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
-The following example illustrates this. Given a subgraph schema like
+The following example illustrates this. Given a Subgraph schema like
```graphql
type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
#### Access to Smart Contract State
-The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
एक सामान्य पैटर्न उस अनुबंध का उपयोग करना है जिससे कोई घटना उत्पन्न होती है। यह निम्नलिखित कोड के साथ हासिल किया गया है:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.
-कोई अन्य अनुबंध जो सबग्राफ का हिस्सा है, उत्पन्न कोड से आयात किया जा सकता है और एक वैध पते के लिए बाध्य किया जा सकता है। +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### रिवर्टेड कॉल्स को हैंडल करना @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -590,16 +590,12 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. 
```typescript
-log.info('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [
-  value.toString(),
-  OtherValue.toString(),
-  'पहले से ही एक स्ट्रिंग',
-])
+log.info('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [value.toString(), OtherValue.toString(), 'पहले से ही एक स्ट्रिंग'])
```

#### एक या अधिक मान लॉग करना

@@ -725,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))

The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.

-On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.

### क्रिप्टो एपीआई

@@ -840,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to

### DataSourceContext in Manifest

-The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
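On the mapping side, these values are read through `dataSource.context()`. A minimal AssemblyScript sketch (the key names below are assumptions, not fixed names):

```typescript
import { dataSource } from '@graphprotocol/graph-ts'

export function handleSomeEvent(): void {
  let context = dataSource.context()
  // Typed getters match the `type` declared for each key in the manifest;
  // 'foo' and 'startBlock' are illustrative keys only.
  let foo = context.getString('foo')
  let startBlock = context.getBigInt('startBlock')
}
```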
Here is a YAML example illustrating the usage of various types in the `context` section: @@ -891,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx index 155469a5960b..fb8daba8b9a6 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: आम AssemblyScript मुद्दे --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. -- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
+- `Private` क्लास वेरिएबल्स [AssemblyScript] (https://www.assemblyscript.org/status.html#language-features) में अनिवार्य नहीं होते हैं। क्लास ऑब्जेक्ट से सीधे क्लास वेरिएबल्स को बदले जाने से बचाने का कोई तरीका नहीं है। +- Scope को [closure functions](https://www.assemblyscript.org/status.html#on-closures) में inherite नहीं किया गया है, यानी closure functions के बाहर declared variables का उपयोग नहीं किया जा सकता है। [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s) में स्पष्टीकरण। diff --git a/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx index 84d3b139b130..031c70bf3507 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: . ग्राफ़ सीएलआई इनस्टॉल करें --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). -## अवलोकन +## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## शुरू करना @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## एक सबग्राफ बनाएं ### एक मौजूदा कॉन्ट्रैक्ट से -यह कमांड एक subgraph बनाता है जो एक मौजूदा कॉन्ट्रैक्ट के सभी इवेंट्स को इंडेक्स करता है: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - यदि कोई वैकल्पिक तर्क गायब है, तो यह आपको एक इंटरैक्टिव फॉर्म के माध्यम से मार्गदर्शन करता है। -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. 
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### एक उदाहरण सबग्राफ से -निम्नलिखित कमांड एक उदाहरण subgraph से एक नया प्रोजेक्ट प्रारंभ करता है: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI supports adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is एबीआई फाइल(फाइलों) को आपके अनुबंध(ओं) से मेल खाना चाहिए। ABI फ़ाइलें प्राप्त करने के कुछ तरीके हैं: - यदि आप अपना खुद का प्रोजेक्ट बना रहे हैं, तो आपके पास अपने सबसे मौजूदा एबीआई तक पहुंच होने की संभावना है। -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## स्पेकवर्जन रिलीज़ - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. 
| -| 0.0.5 | घटना हैंडलरों को लेनदेन रसीदों तक पहुंच प्रदान करने के लिए समर्थन जोड़ा गया है। | -| 0.0.4 | घटना हैंडलरों को लेनदेन रसीदों तक पहुंच प्रदान करने के लिए समर्थन जोड़ा गया है। | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx index 5c2b1f2037bc..1c03459ea7fb 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/ql-schema.mdx @@ -2,9 +2,9 @@ title: The Graph QL Schema --- -## अवलोकन +## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar इससे पहले कि आप एन्टिटीज को परिभाषित करें, यह महत्वपूर्ण है कि आप एक कदम पीछे हटें और सोचें कि आपका डेटा कैसे संरचित और लिंक किया गया है। -- All queries will be made against the data model defined in the subgraph schema. 
As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - यह उपयोगी हो सकता है कि संस्थाओं की कल्पना 'डेटा' समाहित करने वाले वस्तुओं के रूप में की जाए, न कि घटनाओं या कार्यों के रूप में। - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two नीचे दिए गए स्केलर्स GraphQL API में समर्थित हैं: -| प्रकार | Description | -| --- | --- | -| `Bytes` | बाइट सरणी, एक हेक्साडेसिमल स्ट्रिंग के रूप में दर्शाया गया है। आमतौर पर एथेरियम हैश और पतों के लिए उपयोग किया जाता है। | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| प्रकार | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | बाइट सरणी, एक हेक्साडेसिमल स्ट्रिंग के रूप में दर्शाया गया है। आमतौर पर एथेरियम हैश और पतों के लिए उपयोग किया जाता है। | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. 
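When queried through GraphQL, a `@derivedFrom` field behaves like any stored field. Using the `Token`/`TokenBalance` schema from this section (field names assumed from that example), a query might look like:

```graphql
query {
  tokens {
    id
    # `balances` is the virtual field created by @derivedFrom;
    # it is resolved from TokenBalance.token at query time.
    balances {
      amount
    }
  }
}
```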
-एक-से-अनेक संबंधों के लिए, संबंध को हमेशा 'एक' पक्ष में संग्रहीत किया जाना चाहिए, और 'अनेक' पक्ष हमेशा निकाला जाना चाहिए। संबंधों को इस तरह से संग्रहीत करने के बजाय, 'अनेक' पक्ष पर संस्थाओं की एक सरणी संग्रहीत करने के परिणामस्वरूप, सबग्राफ को अनुक्रमित करने और क्वेरी करने दोनों के लिए नाटकीय रूप से बेहतर प्रदर्शन होगा। सामान्य तौर पर, संस्थाओं की सरणियों को संग्रहीत करने से जितना संभव हो उतना बचा जाना चाहिए। +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### उदाहरण @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -मैनी-टू-मैनी संबंधों को संग्रहीत करने के इस अधिक विस्तृत तरीके के परिणामस्वरूप सबग्राफ के लिए कम डेटा संग्रहीत होगा, और इसलिए एक सबग्राफ के लिए जो अक्सर इंडेक्स और क्वेरी के लिए नाटकीय रूप से तेज़ होता है। +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### स्कीमा में टिप्पणियां जोड़ना @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. 
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## भाषाओं का समर्थन किया @@ -295,24 +295,24 @@ query { समर्थित भाषा शब्दकोश: -| Code | शब्दकोष | -| ------ | --------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | पुर्तगाली | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Code | शब्दकोष | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | पुर्तगाली | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | ### रैंकिंग एल्गोरिदम diff --git a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx index a162f802cf9c..4931e6b1fd34 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -2,22 +2,34 @@ title: Starting Your Subgraph --- -## अवलोकन +## Overview -ग्राफ़ में पहले से ही हजारों सबग्राफ उपलब्ध हैं, जिन्हें क्वेरी के लिए उपयोग किया जा सकता है, तो The Graph Explorer(https://thegraph.com/explorer) को चेक करें और ऐसा कोई Subgraph ढूंढें जो पहले से आपकी ज़रूरतों से मेल खाता हो। +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
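Querying an existing Subgraph found in Graph Explorer needs no setup beyond an API key; a gateway query is a single HTTP POST. The endpoint shape and query fields below are illustrative placeholders:

```sh
# <API_KEY> comes from Subgraph Studio, <SUBGRAPH_ID> from the
# Subgraph's Explorer page; the query fields are examples only.
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ tokens(first: 5) { id } }"}' \
  https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>
```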
-जब आप एक [सबग्राफ](/subgraphs/developing/subgraphs/)बनाते हैं, तो आप एक कस्टम ओपन API बनाते हैं जो ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, स्टोर करता है और इसे GraphQL के माध्यम से क्वेरी करना आसान बनाता है। +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
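The steps above can be sketched as a single CLI flow. This is an illustrative sequence; `<SUBGRAPH_SLUG>` and `<DEPLOY_KEY>` are placeholders from Subgraph Studio, and exact flags may vary by CLI version:

```sh
graph init <SUBGRAPH_SLUG>    # scaffold manifest, schema, and mappings
cd <SUBGRAPH_SLUG>
graph codegen                 # generate types from schema.graphql and the ABIs
graph build                   # compile the mappings to WebAssembly
graph auth <DEPLOY_KEY>       # authenticate against Subgraph Studio
graph deploy <SUBGRAPH_SLUG>  # upload to IPFS and deploy
```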
+ +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx index 31dbc7079552..71a66d7a0b36 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/subgraph-manifest.mdx @@ -2,34 +2,34 @@ title: Subgraph Manifest --- -## अवलोकन +## Overview -subgraph मैनिफेस्ट, subgraph.yaml, उन स्मार्ट कॉन्ट्रैक्ट्स और नेटवर्क को परिभाषित करता है जिन्हें आपका subgraph इंडेक्स करेगा, इन कॉन्ट्रैक्ट्स से ध्यान देने योग्य इवेंट्स, और इवेंट डेटा को उन संस्थाओं के साथ मैप करने का तरीका जिन्हें Graph Node स्टोर करता है और जिन्हें क्वेरी करने की अनुमति देता है। +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -**subgraph definition** में निम्नलिखित फ़ाइलें शामिल हैं: +The **Subgraph definition** consists of the following files: -- subgraph.yaml: में subgraph मैनिफेस्ट शामिल है +- `subgraph.yaml`: Contains the Subgraph manifest -- schema.graphql: एक GraphQL स्कीमा जो आपके लिए डेटा को परिभाषित करता है और इसे GraphQL के माध्यम से क्वेरी करने का तरीका बताता है. +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. 
`mapping.ts` in this guide) ### Subgraph क्षमताएँ -एक सिंगल subgraph कर सकता है: +A single Subgraph can: - कई स्मार्ट कॉन्ट्रैक्ट्स से डेटा को इंडेक्स करें (लेकिन कई नेटवर्क नहीं)। -- IPFS फ़ाइलों से डेटा को डेटा स्रोत फ़ाइलें का उपयोग करके अनुक्रमित करें। +- IPFS फ़ाइलों से डेटा को डेटा स्रोत फ़ाइलें का उपयोग करके अनुक्रमित करें। - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). मेनिफेस्ट के लिए अद्यतन करने के लिए महत्वपूर्ण प्रविष्टियां हैं: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. 
See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. 
Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. 
Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## आयोजन Handlers -Event handlers एक subgraph में स्मार्ट कॉन्ट्रैक्ट्स द्वारा ब्लॉकचेन पर उत्पन्न होने वाले विशिष्ट घटनाओं पर प्रतिक्रिया करते हैं और subgraph के मैनिफेस्ट में परिभाषित हैंडलर्स को ट्रिगर करते हैं। इससे subgraphs को परिभाषित लॉजिक के अनुसार घटना डेटा को प्रोसेस और स्टोर करने की अनुमति मिलती है। +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### इवेंट हैंडलर को परिभाषित करना -एक event handler को डेटा स्रोत के भीतर subgraph के YAML configuration में घोषित किया जाता है। यह निर्दिष्ट करता है कि कौन से events पर ध्यान देना है और उन events का पता चलने पर कार्यान्वित करने के लिए संबंधित function क्या है। +An event handler is declared within a data source in the Subgraph's YAML configuration. 
It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## कॉल हैंडलर्स -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. 
कॉल हैंडलर केवल दो मामलों में से एक में ट्रिगर होंगे: जब निर्दिष्ट फ़ंक्शन को अनुबंध के अलावा किसी अन्य खाते द्वारा कॉल किया जाता है या जब इसे सॉलिडिटी में बाहरी के रूप में चिह्नित किया जाता है और उसी अनुबंध में किसी अन्य फ़ंक्शन के भाग के रूप में कॉल किया जाता है। -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a Subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. ### कॉल हैंडलर को परिभाषित करना @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### मानचित्रण समारोह -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. 
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## ब्लॉक हैंडलर -Contract events या function calls की सदस्यता लेने के अलावा, एक subgraph अपने data को update करना चाह सकता है क्योंकि chain में नए blocks जोड़े जाते हैं। इसे प्राप्त करने के लिए एक subgraph every block के बाद या pre-defined filter से match होन वाले block के बाद एक function चला सकता है। +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### समर्थित फ़िल्टर @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. 
ब्लॉक हैंडलर के लिए फ़िल्टर की अनुपस्थिति सुनिश्चित करेगी कि हैंडलर को प्रत्येक ब्लॉक कहा जाता है। डेटा स्रोत में प्रत्येक फ़िल्टर प्रकार के लिए केवल एक ब्लॉक हैंडलर हो सकता है। @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once फ़िल्टर @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -'once' फ़िल्टर के साथ परिभाषित हैंडलर केवल एक बार सभी अन्य हैंडलर्स चलने से पहले कॉल किया जाएगा। यह कॉन्फ़िगरेशन 'subgraph' को प्रारंभिक हैंडलर के रूप में उपयोग करने की अनुमति देता है, जिससे 'indexing' के शुरू होने पर विशिष्ट कार्य किए जा सकते हैं। +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### मानचित्रण समारोह -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. 
Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. 
Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer संकेत -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> इस संदर्भ में "history" का अर्थ उन आंकड़ों को संग्रहीत करने से है जो 'mutable' संस्थाओं की पुरानी स्थितियों को दर्शाते हैं। +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
दिए गए ब्लॉक के रूप में इतिहास की आवश्यकता है: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- उस ब्लॉक पर 'subgraph' को वापस लाना +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block यदि ब्लॉक के रूप में ऐतिहासिक डेटा को प्रून किया गया है, तो उपरोक्त क्षमताएँ उपलब्ध नहीं होंगी। > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: विशिष्ट मात्रा में ऐतिहासिक डेटा बनाए रखने के लिए: @@ -532,3 +532,18 @@ For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/# indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
|
diff --git a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx index 89a802802610..f1f1aacab6ff 100644 --- a/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/hi/subgraphs/developing/creating/unit-testing-framework.mdx @@ -4,12 +4,12 @@ title: |- कला --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - यह Rust में लिखा गया है और उच्च प्रदर्शन के लिए अनुकूलित है। -- यह आपको डेवलपर विशेषता तक पहुंच प्रदान करता है, जिसमें contract कॉल्स को मॉक करने, स्टोर स्टेट के बारे में एसेर्शन करने, सबग्राफ विफलताओं की निगरानी करने, टेस्ट परफॉर्मेंस जांचने और बहुत कुछ करने की क्षमता शामिल है। +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and more. 
## शुरू करना @@ -35,7 +35,7 @@ yarn add --dev matchstick-as brew install postgresql ``` -यहां तक कि नवीनतम libpq.5.lib\_ का एक symlink बनाएं। आपको पहले यह dir बनाने की आवश्यकता हो सकती है: `/usr/local/opt/postgresql/lib/` +यहां तक कि नवीनतम libpq.5.lib_ का एक symlink बनाएं। आपको पहले यह dir बनाने की आवश्यकता हो सकती है: `/usr/local/opt/postgresql/lib/` ```sh ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib @@ -89,7 +89,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### सीएलआई विकल्प @@ -115,7 +115,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -147,17 +147,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### डेमो सबग्राफ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### वीडियो शिक्षण -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -664,7 +664,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im ये रहा - हमने अपना पहला परीक्षण बना लिया है! 👏 -अब हमारे परीक्षण चलाने के लिए आपको बस अपने सबग्राफ रूट फ़ोल्डर में निम्नलिखित को चलाने की आवश्यकता है: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -758,7 +758,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -767,7 +767,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1174,7 +1174,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1291,11 +1291,11 @@ test('file/ipfs dataSource creation example', () => { ## टेस्ट कवरेज -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked. 
-### Prerequisites +### आवश्यक शर्तें To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: @@ -1313,7 +1313,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### उपयोग एक बार यह सब सेट हो जाने के बाद, परीक्षण कवरेज टूल चलाने के लिए, बस चलाएँ: @@ -1397,7 +1397,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## प्रतिक्रिया diff --git a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx index 3e03014aba51..d10ef9160dc6 100644 --- a/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/hi/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: मल्टीपल नेटवर्क्स पर एक Subgraph डिप्लॉय करना +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
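As a preview of the `--network` workflow covered on this page, a minimal `networks.json` maps each network name to the data source names and their per-network values. This is a sketch only — the data source name and addresses are illustrative:

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC"
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000"
    }
  }
}
```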
-## सबग्राफ को कई नेटवर्क पर तैनात करना +## Deploying the Subgraph to multiple networks -कुछ मामलों में, आप एक ही सबग्राफ को इसके सभी कोड को डुप्लिकेट किए बिना कई नेटवर्क पर तैनात करना चाहेंगे। इसके साथ आने वाली मुख्य चुनौती यह है कि इन नेटवर्कों पर अनुबंध के पते अलग-अलग हैं। +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### graph-cli का उपयोग करते हुए @@ -21,7 +22,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su ``` -आप --network विकल्प का उपयोग करके एक नेटवर्क कॉन्फ़िगरेशन को एक json मानक फ़ाइल (डिफ़ॉल्ट रूप से networks.json) से निर्दिष्ट कर सकते हैं ताकि विकास के दौरान आसानी से अपने subgraph को अपडेट किया जा सके +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > ध्यान दें: init कमांड अब दी गई जानकारी के आधार पर एक networks.json को स्वचालित रूप से उत्पन्न करेगा। इसके बाद आप मौजूदा नेटवर्क को अपडेट कर सकेंगे या अतिरिक्त नेटवर्क जोड़ सकेंगे। @@ -55,7 +56,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su > ध्यान दें: आपको किसी भी 'templates' (यदि आपके पास कोई है) को config फ़ाइल में निर्दिष्ट करने की आवश्यकता नहीं है, केवल 'dataSources' को। यदि 'subgraph.yaml' फ़ाइल में कोई 'templates' घोषित किए गए हैं, तो उनका नेटवर्क स्वचालित रूप से उस नेटवर्क में अपडेट हो जाएगा जो 'network' विकल्प के साथ निर्दिष्ट किया गया है। -मान लीजिए कि आप अपने subgraph को mainnet और sepolia नेटवर्क पर डिप्लॉय करना चाहते हैं, और यह आपका subgraph.yaml है: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -97,7 +98,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -build कमांड आपके subgraph.yaml को sepolia कॉन्फ़िगरेशन के साथ अपडेट करेगा और फिर से subgraph को पुनः-कंपाइल करेगा। आपका subgraph.yaml फ़ाइल अब इस प्रकार दिखना चाहिए: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -128,7 +129,7 @@ yarn deploy --network sepolia --network-file path/to/config एक तरीका है 'graph-cli' के पुराने संस्करणों का उपयोग करके अनुबंध पते जैसी विशेषताओं को पैरामीटरित करना, जो कि एक टेम्पलेटिंग सिस्टम जैसे Mustache (https://mustache.github.io/) या Handlebars (https://handlebarsjs.com/) के साथ इसके कुछ हिस्सों को जनरेट करना है। -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. 
You could then define two config files providing the addresses for each network: ```json { @@ -180,7 +181,7 @@ dataSources: } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh मेननेट: @@ -194,25 +195,25 @@ yarn prepare && yarn deploy यह दृष्टिकोण अधिक जटिल परिस्थितियों में भी लागू किया जा सकता है, जहां अनुबंध पते और नेटवर्क नामों के अलावा अधिक को प्रतिस्थापित करने की आवश्यकता होती है या जहां टेम्पलेट से मैपिंग या ABIs उत्पन्न करने की आवश्यकता होती है। -यह आपको chainHeadBlock देगा जिसे आप अपने subgraph पर latestBlock के साथ तुलना कर सकते हैं यह जाँचने के लिए कि क्या यह पीछे चल रहा है। synced यह बताता है कि क्या subgraph कभी श्रृंखला के साथ मेल खा गया है। health वर्तमान में दो मान ले सकता है: healthy अगर कोई त्रुटियाँ नहीं हुई हैं, या failed अगर कोई त्रुटि हुई है जिसने subgraph की प्रगति को रोक दिया है। इस स्थिति में, आप इस त्रुटि के विवरण के लिए fatalError फ़ील्ड की जांच कर सकते हैं। +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
-## सबग्राफ स्टूडियो सबग्राफ संग्रह नीति +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -इस नीति से प्रभावित प्रत्येक सबग्राफ के पास विचाराधीन संस्करण को वापस लाने का विकल्प है। +Every Subgraph affected by this policy has an option to bring the version in question back. -## सबग्राफ स्वास्थ्य की जाँच करना +## Checking Subgraph health -यदि एक सबग्राफ सफलतापूर्वक सिंक हो जाता है, तो यह एक अच्छा संकेत है कि यह हमेशा के लिए अच्छी तरह से चलता रहेगा। हालांकि, नेटवर्क पर नए ट्रिगर्स के कारण आपका सबग्राफ एक अनुपयोगी त्रुटि स्थिति में आ सकता है या यह प्रदर्शन समस्याओं या नोड ऑपरेटरों के साथ समस्याओं के कारण पीछे पड़ना शुरू हो सकता है। +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. 
-Graph Node एक GraphQL endpoint को उजागर करता है जिसे आप अपने subgraph की स्थिति की जांच करने के लिए क्वेरी कर सकते हैं। होस्टेड सेवा पर, यह https://api.thegraph.com/index-node/graphql पर उपलब्ध है। एक स्थानीय नोड पर, यह डिफ़ॉल्ट रूप से पोर्ट 8030/graphql पर उपलब्ध है। इस endpoint के लिए पूरा स्कीमा यहां (https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) पाया जा सकता है। यहां एक उदाहरण क्वेरी है जो एक subgraph के वर्तमान संस्करण की स्थिति की जांच करती है: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -239,4 +240,4 @@ Graph Node एक GraphQL endpoint को उजागर करता है } ``` -यह आपको chainHeadBlock देगा जिसे आप अपने subgraph पर latestBlock के साथ तुलना कर सकते हैं यह जाँचने के लिए कि क्या यह पीछे चल रहा है। synced यह बताता है कि क्या subgraph कभी श्रृंखला के साथ मेल खा गया है। health वर्तमान में दो मान ले सकता है: healthy अगर कोई त्रुटियाँ नहीं हुई हैं, या failed अगर कोई त्रुटि हुई है जिसने subgraph की प्रगति को रोक दिया है। इस स्थिति में, आप इस त्रुटि के विवरण के लिए fatalError फ़ील्ड की जांच कर सकते हैं। +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
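The comparison between `chainHeadBlock` and `latestBlock` can be sketched in plain TypeScript. The `ChainStatus` interface below is a hypothetical shape mirroring the fields returned by the status query, not a type from any Graph library:

```typescript
// Hypothetical shape mirroring one chains[] entry from the indexing status
// query (Graph Node returns block numbers as strings).
interface ChainStatus {
  chainHeadBlock: { number: string }
  latestBlock: { number: string }
}

// How many blocks the Subgraph is lagging behind the chain head.
function blocksBehind(chain: ChainStatus): number {
  return Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number)
}

const example: ChainStatus = {
  chainHeadBlock: { number: '19000050' },
  latestBlock: { number: '19000000' },
}

console.log(blocksBehind(example)) // prints 50
```

A lag that keeps growing across polls suggests the Subgraph is falling behind rather than merely still syncing.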
diff --git a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx index 3fa668ee3535..4ab6dece55a9 100644 --- a/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/hi/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,30 +2,30 @@ title: Deploying Using Subgraph Studio --- -अपने subgraph को Subgraph Studio में डिप्लॉय करना सीखें। +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio का अवलोकन In Subgraph Studio,आप निम्नलिखित कर सकते हैं: -- आपने बनाए गए subgraphs की सूची देखें -- एक विशेष subgraph की स्थिति को प्रबंधित करें, विवरण देखें और दृश्य रूप में प्रदर्शित करें -- विशिष्ट सबग्राफ के लिए अपनी एपीआई keys बनाएं और प्रबंधित करें +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - अपने API कुंजी को विशेष डोमेन तक सीमित करें और केवल कुछ Indexers को उनके साथ क्वेरी करने की अनुमति दें -- अपना subgraph बनाएं -- अपने subgraph को The Graph CLI का उपयोग करके डिप्लॉय करें -- अपने 'subgraph' को 'playground' वातावरण में टेस्ट करें -- अपने स्टेजिंग में 'subgraph' को विकास क्वेरी URL का उपयोग करके एकीकृत करें -- अपने subgraph को The Graph Network पर प्रकाशित करें +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - अपने बिलिंग को प्रबंधित करें ## The Graph CLI स्थापित करें Deploy करने से पहले, आपको The Graph CLI इंस्टॉल करना होगा। -आपको The Graph CLI का उपयोग करने के लिए Node.js(https://nodejs.org/) और आपकी पसंद का पैकेज मैनेजर (npm, yarn या pnpm) स्थापित होना चाहिए। सबसे हालिया (https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI संस्करण की जांच करें। +आपको The Graph CLI का उपयोग करने के लिए Node.js(https://nodejs.org/) और आपकी पसंद का पैकेज मैनेजर (npm, yarn या pnpm) स्थापित होना चाहिए। सबसे हालिया (https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI संस्करण की जांच करें। ### इंस्टॉल करें 'yarn' के साथ @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. खोलें [Subgraph Studio](https://thegraph.com/studio/). 2. 
अपने वॉलेट से साइन इन करें। - आप इसे MetaMask, Coinbase Wallet, WalletConnect, या Safe के माध्यम से कर सकते हैं। -3. साइन इन करने के बाद, आपका यूनिक डिप्लॉय की आपकी subgraph विवरण पृष्ठ पर प्रदर्शित होगा। - - Deploy key आपको अपने subgraphs को प्रकाशित करने या अपने API keys और billing को प्रबंधित करने की अनुमति देता है। यह अद्वितीय है लेकिन यदि आपको लगता है कि यह समझौता किया गया है, तो इसे पुनः उत्पन्न किया जा सकता है। +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. -> महत्वपूर्ण: आपको subgraphs को क्वेरी करने के लिए एक API कुंजी की आवश्यकता है +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### ग्राफ नेटवर्क के साथ सबग्राफ अनुकूलता -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- निम्नलिखित सुविधाओं में से किसी का उपयोग नहीं करना चाहिए: - - ipfs.cat & ipfs.map - - गैर-घातक त्रुटियाँ - - ग्राफ्टिंग +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. 
## अपने Subgraph को प्रारंभ करें -एक बार जब आपका subgraph Subgraph Studio में बना दिया गया है, तो आप इस कमांड का उपयोग करके CLI के माध्यम से इसके कोड को प्रारंभ कर सकते हैं: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -आप `` मान को अपने subgraph विवरण पृष्ठ पर Subgraph Studio में पा सकते हैं, नीचे दी गई छवि देखें: +You can find the `` value on your Subgraph details page in Subgraph Studio; see the image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -`graph init` चलाने के बाद, आपसे संपर्क पता, नेटवर्क, और एक ABI इनपुट करने के लिए कहा जाएगा जिसे आप क्वेरी करना चाहते हैं। यह आपके स्थानीय मशीन पर एक नया फोल्डर उत्पन्न करेगा जिसमें आपके Subgraph पर काम करना शुरू करने के लिए कुछ मूल कोड होगा। आप फिर अपने Subgraph को अंतिम रूप दे सकते हैं ताकि यह सुनिश्चित किया जा सके कि यह अपेक्षित रूप से काम करता है। +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. ## ग्राफ प्रमाणीकरण -अपने subgraph को Subgraph Studio पर डिप्लॉय करने से पहले, आपको CLI के भीतर अपने खाते में लॉग इन करना होगा। ऐसा करने के लिए, आपको अपना deploy key चाहिए होगा, जिसे आप अपने subgraph विवरण पृष्ठ के तहत पा सकते हैं। +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. फिर, CLI से प्रमाणित करने के लिए निम्नलिखित आदेश का उपयोग करें: @@ -91,11 +85,11 @@ graph auth ## Subgraph डिप्लॉय करना -जब आप तैयार हों, तो आप अपना subgraph को Subgraph Studio पर डिप्लॉय कर सकते हैं। +Once you are ready, you can deploy your Subgraph to Subgraph Studio.
-> CLI का उपयोग करके subgraph को डिप्लॉय करना उसे Studio में पुश करता है, जहां आप इसे टेस्ट कर सकते हैं और मेटाडेटा को अपडेट कर सकते हैं। यह क्रिया आपके subgraph को विकेंद्रीकृत नेटवर्क पर प्रकाशित नहीं करेगी। +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -निम्नलिखित CLI कमांड का उपयोग करके अपना subgraph डिप्लॉय करें: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ graph deploy ## अपने Subgraph का परीक्षण करें -डिप्लॉय करने के बाद, आप अपने subgraph का परीक्षण कर सकते हैं (या तो Subgraph Studio में या अपने ऐप में, डिप्लॉयमेंट क्वेरी URL के साथ), एक और संस्करण डिप्लॉय करें, मेटाडेटा को अपडेट करें, और जब आप तैयार हों, तो Graph Explorer(https://thegraph.com/explorer) पर प्रकाशित करें। +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Subgraph Studio का उपयोग करके डैशबोर्ड पर लॉग्स की जांच करें और अपने subgraph के साथ किसी भी त्रुटियों की तलाश करें। +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. ## अपने Subgraph को प्रकाशित करें -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). 
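Testing from your own app, as the section above mentions, amounts to sending a GraphQL query over HTTP POST to the deployment query URL. A minimal Python sketch, where the endpoint shape, entity name, and fields are placeholders rather than values from any real Subgraph:

```python
import json
from typing import Optional

# Hypothetical Studio deployment query URL -- substitute your own values
ENDPOINT = "https://api.studio.thegraph.com/query/<user-id>/<subgraph-slug>/<version>"

def build_graphql_body(query: str, variables: Optional[dict] = None) -> str:
    """Serialize a GraphQL query into the JSON body a Subgraph endpoint expects."""
    payload = {"query": query}
    if variables is not None:
        payload["variables"] = variables
    return json.dumps(payload)

# A query against hypothetical entities -- your schema defines the real names
body = build_graphql_body(
    "query Items($n: Int!) { items(first: $n) { id } }",
    {"n": 5},
)
# POST `body` with Content-Type: application/json to ENDPOINT to run the query
```

The same body works against the production gateway URL once the Subgraph is published; only the endpoint and API key change.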
## अपने Subgraph को CLI के साथ संस्करण बनाना -यदि आप अपने subgraph को अपडेट करना चाहते हैं, तो आप निम्नलिखित कर सकते हैं: +If you want to update your Subgraph, you can do the following: - आप स्टूडियो में CLI का उपयोग करके एक नया संस्करण डिप्लॉय कर सकते हैं (इस समय यह केवल निजी होगा)। - एक बार जब आप इससे संतुष्ट हो जाएं, तो आप अपने नए डिप्लॉयमेंट को Graph Explorer(https://thegraph.com/explorer). पर प्रकाशित कर सकते हैं। -- यह क्रिया आपके नए संस्करण का निर्माण करेगी जिसे Curators सिग्नल करना शुरू कर सकते हैं और Indexers अनुक्रमित कर सकते हैं। +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). 
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## सबग्राफ संस्करणों का स्वचालित संग्रह -जब भी आप Subgraph Studio में एक नया subgraph संस्करण डिप्लॉय करते हैं, तो पिछले संस्करण को आर्काइव कर दिया जाएगा। आर्काइव किए गए संस्करणों को इंडेक्स/सिंक नहीं किया जाएगा और इसलिए उन्हें क्वेरी नहीं किया जा सकता। आप Subgraph Studio में अपने subgraph के आर्काइव किए गए संस्करण को अनआर्काइव कर सकते हैं। +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> नोट: स्टूडियो में डिप्लॉय किए गए गैर-प्रकाशित subgraphs के पिछले संस्करणों को स्वचालित रूप से आर्काइव किया जाएगा। +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/hi/subgraphs/developing/developer-faq.mdx b/website/src/pages/hi/subgraphs/developing/developer-faq.mdx index 6eeb3c64ff7f..154f01dfe721 100644 --- a/website/src/pages/hi/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/hi/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ sidebarTitle: FAQ ## सबग्रह संबंधित -### 1. सबग्राफ क्या है? +### 1. What is a Subgraph? 
-एक subgraph एक कस्टम API है जो ब्लॉकचेन डेटा पर आधारित है। subgraphs को GraphQL क्वेरी भाषा का उपयोग करके क्वेरी किया जाता है और इन्हें The Graph CLI का उपयोग करके Graph Node पर तैनात किया जाता है। एक बार तैनात और The Graph के विकेन्द्रीकृत नेटवर्क पर प्रकाशित होने के बाद, Indexers subgraphs को प्रोसेस करते हैं और उन्हें subgraph उपभोक्ताओं के लिए क्वेरी करने के लिए उपलब्ध कराते हैं। +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. एक Subgraph बनाने का पहला कदम क्या है? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. क्या मैं अभी भी एक subgraph बना सकता हूँ यदि मेरी स्मार्ट कॉन्ट्रैक्ट्स में कोई इवेंट्स नहीं हैं? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -यह अत्यधिक अनुशंसित है कि आप अपने स्मार्ट अनुबंधों को इस तरह से संरचित करें कि उन डेटा के साथ घटनाएँ हों जिनमें आपकी रुचि है। अनुबंध की घटनाओं द्वारा संचालित 'event handlers' को Subgraph में ट्रिगर किया जाता है और यह उपयोगी डेटा प्राप्त करने का सबसे तेज़ तरीका है। +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. 
-अगर आप जिन अनुबंधों के साथ काम कर रहे हैं, उनमें घटनाएँ नहीं हैं, तो आपका subgraph कॉल और ब्लॉक हैंडलर्स का उपयोग कर सकता है ताकि इंडेक्सिंग को ट्रिगर किया जा सके। हालाँकि, यह अनुशंसित नहीं है, क्योंकि प्रदर्शन काफी धीमा होगा। +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. क्या मैं अपने सबग्राफ से जुड़े GitHub खाते को बदल सकता हूँ? +### 4. Can I change the GitHub account associated with my Subgraph? -एक बार जब एक subgraph बनाया जाता है, तो संबंधित GitHub खाता नहीं बदला जा सकता है। कृपया अपने subgraph को बनाने से पहले इसे ध्यान से विचार करें। +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. मैं मुख्य नेटवर्क पर एक subgraph को कैसे अपडेट करूँ? +### 5. How do I update a Subgraph on mainnet? -आप अपने subgraph का नया संस्करण Subgraph Studio में CLI का उपयोग करके डिप्लॉय कर सकते हैं। यह क्रिया आपके subgraph को निजी रखती है, लेकिन जब आप इससे खुश हों, तो आप Graph Explorer में इसे प्रकाशित कर सकते हैं। इससे आपके subgraph का एक नया संस्करण बनेगा जिस पर Curators सिग्नल करना शुरू कर सकते हैं। +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish it to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. एक Subgraph को दूसरे खाते या एंडपॉइंट पर बिना पुनः तैनात किए डुप्लिकेट करना संभव है? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
-आपको सबग्राफ को फिर से तैनात करना होगा, लेकिन अगर सबग्राफ आईडी (आईपीएफएस हैश) नहीं बदलता है, तो इसे शुरुआत से सिंक नहीं करना पड़ेगा। +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. आप अपने subgraph mappings से एक contract function को कैसे कॉल करें या एक सार्वजनिक state variable तक कैसे पहुँचें? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? AssemblyScript में वर्तमान में मैपिंग्स नहीं लिखी जा रही हैं। @@ -45,15 +45,15 @@ AssemblyScript में वर्तमान में मैपिंग् ### 9. कई कॉन्ट्रैक्ट सुनते समय, क्या घटनाओं को सुनने के लिए कॉन्ट्रैक्ट के क्रम का चयन करना संभव है? -एक सबग्राफ के भीतर, घटनाओं को हमेशा उसी क्रम में संसाधित किया जाता है जिस क्रम में वे ब्लॉक में दिखाई देते हैं, भले ही वह कई अनुबंधों में हो या नहीं। +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. टेम्प्लेट्स और डेटा स्रोतों में क्या अंतर है? -Templates आपको डेटा स्रोतों को तेजी से बनाने की अनुमति देते हैं, जबकि आपका subgraph इंडेक्सिंग कर रहा है। आपका कॉन्ट्रैक्ट नए कॉन्ट्रैक्ट उत्पन्न कर सकता है जब लोग इसके साथ इंटरैक्ट करते हैं। चूंकि आप उन कॉन्ट्रैक्टों का आकार (ABI, इवेंट, आदि) पहले से जानते हैं, आप यह निर्धारित कर सकते हैं कि आप उन्हें एक टेम्पलेट में कैसे इंडेक्स करना चाहते हैं। जब वे उत्पन्न होते हैं, तो आपका subgraph कॉन्ट्रैक्ट पते को प्रदान करके एक डायनामिक डेटा स्रोत बनाएगा। +Templates allow you to create data sources quickly, while your Subgraph is indexing. 
Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. क्या मैं अपना subgraph हटा सकता हूँ? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## नेटवर्क से संबंधित। @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. 
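Both the multiple-contract setup from question 11 and the `startBlock` setting described above live under `dataSources` in `subgraph.yaml`. A hypothetical fragment, where contract names, addresses, and block numbers are purely illustrative:

```yaml
dataSources:
  - kind: ethereum
    name: TokenA            # hypothetical contract name
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000001"
      abi: TokenA
      startBlock: 17000000  # block where the contract was created
    mapping:
      # ... handlers omitted for brevity
  - kind: ethereum
    name: TokenB
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000002"
      abi: TokenB
      startBlock: 17500000
    mapping:
      # ... handlers omitted for brevity
```

Each entry is an independent data source, so each contract gets its own ABI, `startBlock`, and handlers.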
In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. यहां कुछ सुझाव दिए गए हैं ताकि इंडेक्सिंग का प्रदर्शन बढ़ सके। मेरा subgraph बहुत लंबे समय तक सिंक होने में समय ले रहा है। +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. क्या कोई तरीका है कि 'subgraph' को सीधे क्वेरी करके यह पता लगाया जा सके कि उसने कौन सा लेटेस्ट ब्लॉक नंबर इंडेक्स किया है? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? हाँ! निम्न आदेश का प्रयास करें, "संगठन/सबग्राफनाम" को उस संगठन के साथ प्रतिस्थापित करें जिसके अंतर्गत वह प्रकाशित है और आपके सबग्राफ का नाम: @@ -132,11 +132,11 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. 
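Besides the command referenced in question 21, Graph Node exposes a `_meta` field on every Subgraph's GraphQL endpoint that reports indexing status directly:

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```

Run this against your Subgraph's query URL; `block.number` is the latest block the Subgraph has processed.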
-## विविध +## विविध -### क्या Apollo Federation का उपयोग graph-node के ऊपर किया जा सकता है? +### क्या Apollo Federation का उपयोग graph-node के ऊपर किया जा सकता है? Federation अभी समर्थित नहीं है। फिलहाल, आप schema stitching का उपयोग कर सकते हैं, या तो क्लाइंट पर या एक प्रॉक्सी सेवा के माध्यम से। diff --git a/website/src/pages/hi/subgraphs/developing/introduction.mdx b/website/src/pages/hi/subgraphs/developing/introduction.mdx index 12e2aba18447..cc7e3f61d20d 100644 --- a/website/src/pages/hi/subgraphs/developing/introduction.mdx +++ b/website/src/pages/hi/subgraphs/developing/introduction.mdx @@ -5,27 +5,27 @@ sidebarTitle: Introduction To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). -## अवलोकन +## Overview एक डेवलपर के रूप में, आपको अपने dapp को बनाने और शक्ति प्रदान करने के लिए डेटा की आवश्यकता होती है। ब्लॉकचेन डेटा को क्वेरी करना और इंडेक्स करना चुनौतीपूर्ण होता है, लेकिन The Graph इस समस्या का समाधान प्रदान करता है। The Graph पर, आप: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. मौजूदा subgraphs को क्वेरी करने के लिए GraphQL का उपयोग करें। +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### GraphQL क्या है? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### डेवलपर क्रियाएँ -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
-- विशिष्ट डेटा आवश्यकताओं को पूरा करने के लिए कस्टम सबग्राफ़ बनाएं, जिससे अन्य डेवलपर्स के लिए स्केलेबिलिटी और लचीलापन में सुधार हो सके। -- अपने subgraphs को The Graph Network में तैनात करें, प्रकाशित करें और संकेत दें। +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### सबग्राफ़ क्या हैं? +### What are Subgraphs? -एक Subgraph एक कस्टम API है जो ब्लॉकचेन डेटा पर आधारित होता है। यह ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, और उसे इस तरह से संग्रहित करता है कि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx index e0889b86b0ab..02fdc71480ef 100644 --- a/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. 
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## चरण-दर-चरण -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- क्यूरेटर अब सबग्राफ पर संकेत नहीं दे पाएंगे। -- Subgraph पर पहले से संकेत कर चुके Curators औसत शेयर मूल्य पर अपना संकेत वापस ले सकते हैं। -- Deleted subgraphs will show an error message. 
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx index 1b71f96fd6e8..5e1517d2c1c0 100644 --- a/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -विभिन्न नेटवर्क पर प्रकाशित subgraphs के लिए उस पते पर एक NFT जारी किया गया है जिसने subgraph प्रकाशित किया। NFT एक मानक ERC721 पर आधारित है, जो The Graph नेटवर्क पर खातों के बीच स्थानांतरण की सुविधा देता है। +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -## अनुस्मारक +## अनुस्मारक -- जो भी 'NFT' का मालिक है, वह subgraph को नियंत्रित करता है। -- यदि मालिक 'NFT' को बेचने या स्थानांतरित करने का निर्णय लेता है, तो वे नेटवर्क पर उस subgraph को संपादित या अपडेट नहीं कर पाएंगे। -- आप आसानी से एक subgraph का नियंत्रण एक multi-sig में स्थानांतरित कर सकते हैं। -- एक समुदाय का सदस्य DAO की ओर से एक subgraph बना सकता है। +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. 
## अपने 'subgraph' को एक NFT के रूप में देखें -अपने 'subgraph' को एक NFT के रूप में देखने के लिए, आप एक NFT मार्केटप्लेस जैसे OpenSea पर जा सकते हैं: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## चरण-दर-चरण -एक Subgraph का स्वामित्व स्थानांतरित करने के लिए, निम्नलिखित करें: +To transfer ownership of a Subgraph, do the following: 1. 'Subgraph Studio' में निर्मित UI का उपयोग करें: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. उस पते का चयन करें जिसे आप 'subgraph' को स्थानांतरित करना चाहेंगे: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 51a773bb8012..4de4472caf4c 100644 --- a/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/hi/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: विकेंद्रीकृत नेटवर्क के लिए एक सबग्राफ प्रकाशित करना +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -जब आप एक subgraph को विकेंद्रीकृत नेटवर्क पर प्रकाशित करते हैं, तो आप इसे उपलब्ध कराते हैं: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. 
- [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -एक मौजूदा subgraph के सभी प्रकाशित संस्करण कर सकते हैं: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### प्रकाशित सबग्राफ के लिए मेटाडेटा अपडेट करना +### Updating metadata for a published Subgraph -- अपने सबग्राफ को विकेंद्रीकृत नेटवर्क पर प्रकाशित करने के बाद, आप Subgraph Studio में किसी भी समय मेटाडेटा को अपडेट कर सकते हैं। +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - एक बार जब आप अपने परिवर्तनों को सहेज लेते हैं और अपडेट प्रकाशित कर देते हैं, तो वे Graph Explorer में दिखाई देंगे। - यह ध्यान रखना महत्वपूर्ण है कि इस प्रक्रिया से कोई नया संस्करण नहीं बनेगा क्योंकि आपका डिप्लॉयमेंट नहीं बदला है। ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. `graph-cli` खोलें। 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. 
एक विंडो खुलेगी, जो आपको अपनी वॉलेट कनेक्ट करने, मेटाडेटा जोड़ने, और अपने अंतिम Subgraph को आपकी पसंद के नेटवर्क पर डिप्लॉय करने की अनुमति देगी। +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) ### अपने डिप्लॉयमेंट को अनुकूलित करना -आप अपने Subgraph बिल्ड को एक विशेष IPFSनोड पर अपलोड कर सकते हैं और निम्नलिखित फ्लैग्स के साथ अपने डिप्लॉयमेंट को और अनुकूलित कर सकते हैं: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -डेवलपर्स अपने Subgraph में GRT सिग्नल जोड़ सकते हैं ताकि Indexer को Subgraph पर क्वेरी करने के लिए प्रेरित किया जा सके। +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- यदि कोई Subgraph इंडेक्सिंग पुरस्कारों के लिए पात्र है, तो जो Indexer "इंडेक्सिंग का प्रमाण" प्रदान करते हैं, उन्हें संकेतित GRTकी मात्रा के आधार पर GRT पुरस्कार मिलेगा। +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. 
> -> यदि आपका Subgraph पुरस्कारों के लिए पात्र है, तो यह अनुशंसा की जाती है कि आप अपने Subgraph को कम से कम 3,000 GRT के साथ क्यूरेट करें ताकि अधिक Indexer को आपके सबग्राफ़ को इंडेक्स करने के लिए आकर्षित किया जा सके। +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer सबग्राफ](/img/explorer-subgraphs.png) +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. Subgraph Studio आपको अपने सबग्राफ़ में सिग्नल जोड़ने की सुविधा देता है, जिसमें आप अपने सबग्राफ़ के क्यूरेशन पूल में उसी लेन-देन के साथ GRT जोड़ सकते हैं, जब इसे प्रकाशित किया जाता है. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. + ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/hi/subgraphs/developing/subgraphs.mdx b/website/src/pages/hi/subgraphs/developing/subgraphs.mdx index 153754823989..03d4a6ad952d 100644 --- a/website/src/pages/hi/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/hi/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: सबग्राफ ## Subgraph क्या है? -एक subgraph एक कस्टम, ओपन API है जो एक ब्लॉकचेन से डेटा निकालता है, उसे प्रोसेस करता है, और उसे इस तरह से स्टोर करता है कि उसे GraphQL के माध्यम से आसानी से क्वेरी किया जा सके। +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph क्षमताएँ - डेटा एक्सेस करें: Subgraphs web3 के लिए ब्लॉकचेन डेटा के क्वेरी और इंडेक्सिंग को सक्षम बनाते हैं। -- बनाएँ: डेवलपर्स The Graph Network पर subgraphs बना सकते हैं, डिप्लॉय कर सकते हैं और प्रकाशित कर सकते हैं। शुरुआत करने के लिए, subgraph डेवलपर Quick Start(quick-start/) देखें। -- इंडेक्स और क्वेरी: एक बार जब एक subgraph को इंडेक्स किया जाता है, तो कोई भी इसे क्वेरी कर सकता है।GraphExplorer(https://thegraph.com/explorer) में नेटवर्क पर प्रकाशित सभी subgraphs का अन्वेषण और क्वेरी करें। +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
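To make "anyone can query it" concrete, here is the shape of a typical GraphQL query against a published Subgraph. The `tokens` entity and its fields are hypothetical, for illustration only:

```graphql
{
  tokens(first: 5, orderBy: createdAt, orderDirection: desc) {
    id
    owner
    createdAt
  }
}
```

Any GraphQL client, or a plain HTTP POST to the Subgraph's query endpoint, can send this request.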
## एक Subgraph के अंदर -subgraph मैनिफेस्ट, subgraph.yaml, उन स्मार्ट कॉन्ट्रैक्ट्स और नेटवर्क को परिभाषित करता है जिन्हें आपका subgraph इंडेक्स करेगा, इन कॉन्ट्रैक्ट्स से ध्यान देने योग्य इवेंट्स, और इवेंट डेटा को उन संस्थाओं के साथ मैप करने का तरीका जिन्हें Graph Node स्टोर करता है और जिन्हें क्वेरी करने की अनुमति देता है। +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -**subgraph definition** में निम्नलिखित फ़ाइलें शामिल हैं: +The **Subgraph definition** consists of the following files: -- subgraph.yaml: में subgraph मैनिफेस्ट शामिल है +- `subgraph.yaml`: Contains the Subgraph manifest -- schema.graphql: एक GraphQL स्कीमा जो आपके लिए डेटा को परिभाषित करता है और इसे GraphQL के माध्यम से क्वेरी करने का तरीका बताता है. +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -प्रत्येक उपग्राफ घटक के बारे में अधिक जानने के लिए, देखें creating a subgraph(/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## सबग्राफ जीवनचक्र -यहाँ एक Subgraph के जीवनचक्र का सामान्य अवलोकन है। +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph जीवनचक्र ](/img/subgraph-lifecycle.png) ## सबग्राफ विकास -1. [एक subgraph बनाएँ](/developing/creating-a-subgraph/) -2. [डिप्लॉय a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [एक 'subgraph' का परीक्षण करें](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. 
[Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. 
+- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. 
It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. 
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/hi/subgraphs/explorer.mdx b/website/src/pages/hi/subgraphs/explorer.mdx index f0b92dfd72b1..64a671781463 100644 --- a/website/src/pages/hi/subgraphs/explorer.mdx +++ b/website/src/pages/hi/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). -## अवलोकन +## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. ## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- आपके अपने तैयार किए गए subgraphs +- Your own finished Subgraphs - दूसरों द्वारा प्रकाशित subgraphs -- आपके पास जिस विशेष subgraph की आवश्यकता है (निर्माण की तारीख, सिग्नल राशि, या नाम के आधार पर)। +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -जब आप एक subgraph पर क्लिक करते हैं, तो आप निम्नलिखित कर सकेंगे: +When you click into a Subgraph, you will be able to do the following: - प्लेग्राउंड में परीक्षण प्रश्न करें और सूचनापूर्ण निर्णय लेने के लिए नेटवर्क विवरण का उपयोग करें। -- अपने स्वयं के subgraph या दूसरों के subgraphs पर GRT का सिग्नल दें ताकि indexers इसकी महत्ता और गुणवत्ता के बारे में जागरूक हो सकें। +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. 
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -हर subgraph के समर्पित पृष्ठ पर, आप निम्नलिखित कार्य कर सकते हैं: +On each Subgraph’s dedicated page, you can do the following: -- सबग्राफ पर सिग्नल/अन-सिग्नल +- Signal/Un-signal on Subgraphs - चार्ट, वर्तमान परिनियोजन आईडी और अन्य मेटाडेटा जैसे अधिक विवरण देखें -- सबग्राफ के पिछले पुनरावृत्तियों का पता लगाने के लिए संस्करणों को स्विच करें -- ग्राफ़क्यूएल के माध्यम से क्वेरी सबग्राफ -- खेल के मैदान में टेस्ट सबग्राफ -- उन अनुक्रमणकों को देखें जो एक निश्चित सबग्राफ पर अनुक्रमणित कर रहे हैं +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - सबग्राफ आँकड़े (आवंटन, क्यूरेटर, आदि) -- उस इकाई को देखें जिसने सबग्राफ प्रकाशित किया था +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexer प्रोटोकॉल की रीढ़ हैं। वे सबग्राफ पर स्टेक करते हैं, उन्हें इंडेक्स करते हैं, और उन सभी को प्रश्न प्रदान करते हैं जो सबग्राफ का उपभोग करते हैं। +Indexers are the backbone of the protocol. 
They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.

-Indexers तालिका में, आप Indexers के डेलीगेशन पैरामीटर, उनकी स्टेक, प्रत्येक subgraph के लिए उन्होंने कितना स्टेक किया है, और उन्होंने प्रश्न शुल्क और इंडेक्सिंग पुरस्कारों से कितना राजस्व प्राप्त किया है, देख सकते हैं।
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**विशिष्टताएँ**

@@ -74,7 +74,7 @@ Indexers तालिका में, आप Indexers के डेलीगे

- कूलडाउन शेष - वह समय जो उपरोक्त डेलीगेशन पैरामीटर को बदलने के लिए Indexer को बचा है। कूलडाउन अवधि वे होती हैं जो Indexers अपने डेलीगेशन पैरामीटर को अपडेट करते समय सेट करते हैं।
- यह है Indexer का जमा किया गया हिस्सेदारी, जिसे दुष्ट या गलत व्यवहार के लिए काटा जा सकता है।
- प्रतिनिधि - 'Delegators' से स्टेक जो 'Indexers' द्वारा आवंटित किया जा सकता है, लेकिन इसे स्लैश नहीं किया जा सकता।
-- आवंटित- वह स्टेक है जिसे Indexers उन subgraphs के लिए सक्रिय रूप से आवंटित कर रहे हैं जिन्हें वे इंडेक्स कर रहे हैं।
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- अवेलबल डेलीगेशन कैपेसिटी - वह मात्रा जो डेलीगेटेड स्टेक है, जो Indexers अभी भी प्राप्त कर सकते हैं इससे पहले कि वे ओवर-डेलीगेटेड हो जाएं।
- अधिकतम प्रत्यायोजन क्षमता - प्रत्यायोजित हिस्सेदारी की अधिकतम राशि जिसे इंडेक्सर उत्पादक रूप से स्वीकार कर सकता है। आवंटन या पुरस्कार गणना के लिए एक अतिरिक्त प्रत्यायोजित हिस्सेदारी का उपयोग नहीं किया जा सकता है।
- क्वेरी शुल्क - यह कुल शुल्क है जो अंतिम उपयोगकर्ताओं ने सभी समय में एक Indexer से क्वेरी के लिए भुगतान किया है।

@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. 
क्यूरेटर -क्यूरेटर subgraphs का विश्लेषण करते हैं ताकि यह पहचान सकें कि कौन से subgraphs उच्चतम गुणवत्ता के हैं। एक बार जब एक क्यूरेटर एक संभावित उच्च गुणवत्ता वाले subgraph को खोज लेता है, तो वे इसके बॉन्डिंग कर्व पर सिग्नल देकर इसे क्यूरेट कर सकते हैं। ऐसा करके, क्यूरेटर इंडेक्सर्स को बताते हैं कि कौन से subgraphs उच्च गुणवत्ता के हैं और उन्हें इंडेक्स किया जाना चाहिए। +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- क्यूरेटर समुदाय के सदस्य, डेटा उपभोक्ता, या यहां तक कि अपने Subgraphs पर संकेत देने के लिए GRT टोकन को बॉन्डिंग कर्व में जमा करके अपने स्वयं के Subgraph पर संकेत देने वाले सबग्रह डेवलपर्स भी हो सकते हैं। - - GRT जमा करके, Curators एक subgraph के curation shares का निर्माण करते हैं। इसके परिणामस्वरूप, वे उस subgraph से उत्पन्न query fees का एक भाग अर्जित कर सकते हैं जिस पर उन्होंने संकेत दिया है। +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - "Bonding curve" क्यूरेटर्स को सबसे उच्च गुणवत्ता वाले डेटा स्रोतों को क्यूरेट करने के लिए प्रोत्साहित करता है। यहां 'Curator' तालिका में नीचे दी गई जानकारी को देख सकते हैं: @@ -131,7 +131,7 @@ If you want to learn more about how to become a Delegator, check out the [offici On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. 
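The per-epoch and cumulative views described here are related by a simple running sum. A minimal sketch with hypothetical per-epoch query-fee numbers:

```python
from itertools import accumulate

# Hypothetical per-epoch query fees, in GRT
per_epoch_fees = [120.0, 95.5, 140.25, 80.0]

# The cumulative series is the running total of the per-epoch values
cumulative_fees = list(accumulate(per_epoch_fees))
print(cumulative_fees)  # [120.0, 215.5, 355.75, 435.75]
```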
-#### अवलोकन +#### Overview ओवरव्यू सेक्शन में वर्तमान नेटवर्क मैट्रिक्स और समय के साथ कुछ संचयी मैट्रिक्स दोनों शामिल हैं: @@ -144,8 +144,8 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep कुछ महत्वपूर्ण विवरण नोट करने के लिए: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep ### सबग्राफ टैब -सबग्राफ टैब में, आप अपने प्रकाशित सबग्राफ को देखेंगे। +In the Subgraphs tab, you’ll see your published Subgraphs. 
-> यह उन subgraphs को शामिल नहीं करेगा जो परीक्षण उद्देश्यों के लिए CLI के साथ तैनात किए गए हैं। subgraphs तब ही दिखाई देंगे जब उन्हें विकेंद्रीकृत नेटवर्क पर प्रकाशित किया जाएगा। +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### अनुक्रमण टैब -इंडेक्सिंग टैब में, आपको एक तालिका मिलेगी जिसमें सभी सक्रिय और ऐतिहासिक आवंटन सबग्राफ के प्रति हैं। आप चार्ट भी पाएंगे जहां आप एक Indexerके रूप में अपने पिछले प्रदर्शन को देख और विश्लेषण कर सकते हैं। +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. इस खंड में आपके नेट इंडेक्सर रिवार्ड्स और नेट क्वेरी फीस के विवरण भी शामिल होंगे। आपको ये मेट्रिक दिखाई देंगे: @@ -223,13 +223,13 @@ Delegator ,The Graph नेटवर्क के लिए महत्वप ### क्यूरेटिंग टैब -क्यूरेशन टैब में, आपको वे सभी सबग्राफ मिलेंगे जिन पर आप संकेत कर रहे हैं (इस प्रकार आपको क्वेरी शुल्क प्राप्त करने में सक्षम बनाता है)। सिग्नलिंग क्यूरेटर को इंडेक्सर्स को हाइलाइट करने की अनुमति देता है जो उपग्राफ मूल्यवान और भरोसेमंद हैं, इस प्रकार संकेत देते हैं कि उन्हें अनुक्रमित करने की आवश्यकता है। +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
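As a toy model of why signaling "enables you to receive query fees": Curators holding shares of a Subgraph's curation pool split the Curator portion of its query fees pro rata. The numbers and the flat pro-rata split below are illustrative assumptions, not exact protocol parameters:

```python
def curator_fee_share(my_shares: float, total_shares: float, curator_fee_pool: float) -> float:
    """Pro-rata slice of a Subgraph's curator fee pool (simplified model)."""
    if total_shares == 0:
        return 0.0
    return curator_fee_pool * (my_shares / total_shares)

# Holding 250 of 1,000 curation shares of a pool that accrued 40 GRT for Curators:
print(curator_fee_share(250, 1_000, 40.0))  # prints 10.0
```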
इस टैब के भीतर, आपको इसका अवलोकन मिलेगा: -- सभी सबग्राफ आप सिग्नल विवरण के साथ क्यूरेट कर रहे हैं -- प्रति सबग्राफ शेयर योग -- क्वेरी पुरस्कार प्रति सबग्राफ +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - दिनांक विवरण पर अद्यतन किया गया ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/hi/subgraphs/guides/arweave.mdx b/website/src/pages/hi/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..505f7ddd5785 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: आरवीव पर सब-ग्राफ्र्स बनाना +--- + +> Arweave समर्थन Graph Node और सबग्राफ Studio में बीटा में है: कृपया हमसे [Discord](https://discord.gg/graphprotocol) पर संपर्क करें यदि आपके पास Arweave सबग्राफ बनाने के बारे में कोई प्रश्न हैं! + +इस गाइड में आप आरवीव ब्लॉकचेन पर सब ग्राफ्स बनाना और डेप्लॉय करना सीखेंगे! + +## आरवीव क्या है? + +आरवीव प्रोटोकॉल डेवेलपर्स को स्थायी तौर पर डाटा स्टोर करने की क्षमता देता है जो कि IPFS और आरवीव के बीच का मुख्या अंतर भी है, जहाँ IPFS में इस क्षमता की कमी है, वहीँ आरवीवे पर फाइल्स डिलीट या बदली नहीं जा सकती | + +अरवीव द्वारा पहले से ही कई लाइब्रेरी विभिन्न प्रोग्रामिंग भाषाओं में विकशित की गई हैं| अधिक जानकारी के लिए आप इनका रुख कर सकते हैं: + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Arweave Resources](https://www.arweave.org/build) + +## आरवीवे सब ग्राफ्स क्या हैं? 
+
+The Graph आपको कस्टम ओपन API बनाने की सुविधा देता है, जिन्हें "Subgraphs" कहा जाता है। Subgraphs का उपयोग Indexers (सर्वर ऑपरेटर्स) को यह बताने के लिए किया जाता है कि ब्लॉकचेन पर कौन सा डेटा Indexing करना है और इसे उनके सर्वर पर सहेजना है, ताकि आप इसे किसी भी समय [GraphQL](https://graphql.org/) का उपयोग करके क्वेरी कर सकें।
+
+[Graph Node](https://github.com/graphprotocol/graph-node) अब Arweave protocol पर डेटा को इंडेक्स करने में सक्षम है। वर्तमान इंटीग्रेशन केवल Arweave को एक ब्लॉकचेन के रूप में indexing कर रहा है (blocks and transactions), यह अभी संग्रहीत फ़ाइलों को indexing नहीं कर रहा है।
+
+## एक आरवीव सब ग्राफ बनाना
+
+आरवीवे पर सब ग्राफ बनाने के लिए हमें दो पैकेजेस की जरूरत है:
+
+1. `@graphprotocol/graph-cli` संस्करण 0.30.2 से ऊपर - यह एक कमांड-लाइन टूल है जो सबग्राफ बनाने और डिप्लॉय करने के लिए उपयोग किया जाता है। [यहाँ क्लिक करें](https://www.npmjs.com/package/@graphprotocol/graph-cli) `npm` का उपयोग करके डाउनलोड करने के लिए।
+2. `@graphprotocol/graph-ts` संस्करण 0.27.0 से ऊपर - यह Subgraph-specific types की एक लाइब्रेरी है। [यहाँ क्लिक करें](https://www.npmjs.com/package/@graphprotocol/graph-ts) इसे `npm` का उपयोग करके डाउनलोड करने के लिए।
+
+## सब ग्राफ के कॉम्पोनेन्ट
+
+एक Subgraph के तीन घटक होते हैं:
+
+### 1. मैनिफेस्ट- `subgraph.yaml`
+
+डाटा का स्रोत और उनको प्रोसेस करने के बारे में बताता है। आरवीव एक नए प्रकार का डाटा सोर्स है।
+
+### 2. स्कीमा- `schema.graphql`
+
+यहाँ आप बताते हैं कि आप कौन सा डाटा इंडेक्सिंग के बाद क्वेरी करना चाहते हैं। दरसअल यह एक API के मॉडल जैसा है, जहाँ मॉडल द्वारा रिक्वेस्ट बॉडी का स्ट्रक्चर परिभाषित किया जाता है।
+
+आर्वीव सबग्राफ के लिए आवश्यकताओं को [मौजूदा दस्तावेज़ीकरण](/developing/creating-a-subgraph/#the-graphql-schema) द्वारा कवर किया गया है।
+
+### 3. 
AssemblyScript मैपिंग्स- `mapping.ts` + +यह किसी के द्वारा इस्तेमाल किये जा रहे डाटा सोर्स से डाटा को पुनः प्राप्त करने और स्टोर करने के लॉजिक को बताता है| डाटा अनुवादित होकर आपके द्वारा सूचीबद्ध स्कीमा के अनुसार स्टोर हो जाता है| + +Subgraph को बनाते वक़्त दो मुख्य कमांड हैं: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +``` + +## सब ग्राफ मैनिफेस्ट की परिभाषा + +सबग्राफ manifest `subgraph.yaml` उन डेटा स्रोतों की पहचान करता है जिनका उपयोग सबग्राफ के लिए किया जाता है, वे ट्रिगर जो रुचि के हैं, और वे फ़ंक्शन जो उन ट्रिगर्स के जवाब में चलाए जाने चाहिए। नीचे Arweave सबग्राफ के लिए एक उदाहरण सबग्राफ manifest दिया गया है: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave सबग्राफ एक नए प्रकार के डेटा स्रोत (`arweave`) को प्रस्तुत करते हैं +- नेटवर्क को होस्टिंग Graph Node पर मौजूद नेटवर्क से मेल खाना चाहिए। सबग्राफ Studio में, Arweave का मुख्य नेटवर्क arweave-mainnet है। +- अरवीव डाटा सोर्स द्वारा एक वैकल्पिक source.owner फील्ड लाया गया, जो की एक आरवीव वॉलेट का मपब्लिक key है| + +आरवीव डाटा सोर्स द्वारा दो प्रकार के हैंडलर्स उपयोग किये जा सकते हैं: + +- `blockHandlers` - हर नए Arweave ब्लॉक पर चलाया जाता है। 
कोई source.owner आवश्यक नहीं है। +- `transactionHandlers` - प्रत्येक लेन-देन(transaction) पर चलाया जाता है जहाँ डेटा स्रोत का `source.owner` मालिक होता है। वर्तमान में, `transactionHandlers` के लिए एक मालिक आवश्यक है, यदि उपयोगकर्ता सभी लेन-देन(transaction) को प्रोसेस करना चाहते हैं, तो उन्हें `source.owner` के रूप में "" प्रदान करना चाहिए। + +> यहां source.owner ओनर का एड्रेस या उनका पब्लिक की हो सकता है| +> +> ट्रांसक्शन आरवीव परमावेब के लिए निर्माण खंड (बिल्डिंग ब्लॉक्स) की तरह होते हैं और एन्ड-यूजर के द्वारा बनाये गए ऑब्जेक्ट होते हैं| +> +> Note: [Irys (पहले Bundlr)](https://irys.xyz/) लेन-देन(transaction) अभी समर्थित नहीं हैं। + +## स्कीमा की परिभाषा + +Schema definition परिणामी सबग्राफ डेटाबेस की संरचना और entities के बीच संबंधों का वर्णन करता है। यह मूल डेटा स्रोत से स्वतंत्र होता है। सबग्राफ schema definition के बारे में अधिक विवरण [यहाँ](/developing/creating-a-subgraph/#the-graphql-schema) उपलब्ध है। + +## असेंबली स्क्रिप्ट मैप्पिंग्स + +आयोजन को प्रोसेस करने के लिए handler[AssemblyScript](https://www.assemblyscript.org/) में लिखे गए हैं। + +Arweave indexing [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) में Arweave-विशिष्ट डेटा प्रकार प्रस्तुत करता है। + +```tsx +class Block { + timestamp: u64 + lastRetarget: u64 + height: u64 + indepHash: Bytes + nonce: Bytes + previousBlock: Bytes + diff: Bytes + hash: Bytes + txRoot: Bytes + txs: Bytes[] + walletList: Bytes + rewardAddr: Bytes + tags: Tag[] + rewardPool: Bytes + weaveSize: Bytes + blockSize: Bytes + cumulativeDiff: Bytes + hashListMerkle: Bytes + poa: ProofOfAccess +} + +class Transaction { + format: u32 + id: Bytes + lastTx: Bytes + owner: Bytes + tags: Tag[] + target: Bytes + quantity: Bytes + data: Bytes + dataSize: Bytes + dataRoot: Bytes + signature: Bytes + reward: Bytes +} +``` + +ब्लॉक हैंडलर एक Block प्राप्त करते हैं, जबकि लेनदेन एक लेन-देन(transaction) प्राप्त करते हैं। + +Arweave सबग्राफ का मैपिंग लिखना Ethereum सबग्राफ के मैपिंग लिखने के बहुत समान है। अधिक जानकारी के 
लिए, [यहाँ क्लिक करें](/developing/creating-a-subgraph/#writing-mappings)।
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+एक बार जब आपका सबग्राफ आपके सबग्राफ Studio डैशबोर्ड पर बना लिया जाता है, तो आप graph deploy CLI कमांड का उपयोग करके इसे डिप्लॉय कर सकते हैं।
+
+```bash
+graph deploy --access-token <your-access-token>
+```
+
+## आरवीव सब-ग्राफ क्वेरी करना
+
+The GraphQL endpoint Arweave सबग्राफ के लिए schema परिभाषा द्वारा निर्धारित किया जाता है, जिसमें मौजूदा API इंटरफ़ेस होता है। अधिक जानकारी के लिए कृपया [GraphQL API documentation](/subgraphs/querying/graphql-api/) देखें।
+
+## सब-ग्राफ के उदाहरण
+
+यहाँ संदर्भ के लिए एक उदाहरण सबग्राफ दिया गया है:
+
+- [उदाहरण सबग्राफ for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### क्या सबग्राफ Arweave और अन्य चेन को इंडेक्स कर सकता है?
+
+नहीं, एक सब-ग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है।
+
+### क्या मैं आरवीव पर स्टोर की फाइल्स को इंडेक्स कर सकता हूँ?
+
+वर्तमान में द ग्राफ आरवीव को केवल एक ब्लॉकचेन की तरह इंडेक्स करता है (उसके ब्लॉक्स और ट्रांसक्शन्स)।
+
+### क्या मैं अपने Subgraph में Bundlr bundles की पहचान कर सकता हूँ?
+
+यह वर्तमान में सपोर्टेड नहीं है।
+
+### क्या मैं किसी विशिष्ट अकाउंट से ट्रांसक्शन्स छाँट सकता हूँ?
+
+एक यूजर का पब्लिक की या अकाउंट एड्रेस source.owner हो सकता है।
+
+### वर्तमान एन्क्रिप्शन फॉर्मेट क्या है? 
+ +डेटा आमतौर पर Bytes के रूप में मैपिंग्स में पास किया जाता है, जिसे यदि सीधे संग्रहीत किया जाए, तो यह सबग्राफ में hex प्रारूप में लौटाया जाता है (उदाहरण: ब्लॉक और लेन-देन हैश)। आप अपने मैपिंग्स में इसे base64 या base64 URL-सुरक्षित प्रारूप में परिवर्तित करना चाह सकते हैं, ताकि यह उन ब्लॉक एक्सप्लोरर्स में प्रदर्शित होने वाले प्रारूप से मेल खाए, जैसे कि Arweave Explorer। + +यह `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` हेल्पर फंक्शन का उपयोग किया जा सकता है, और इसे `graph-ts` में जोड़ा जाएगा: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? 
base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..d7c546cc22c2
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
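Since Cana's stated purpose is extracting ABIs and event signatures for subgraph work, here is a minimal, hypothetical sketch of the downstream step: turning a contract ABI into the event-signature strings a subgraph manifest's `eventHandlers` section expects. The inline sample ABI and the `eventSignatures` helper are illustrative assumptions, not part of Cana CLI itself; in practice the ABI would come from the `abi.json` file Cana writes out.

```typescript
// Illustrative only: derive event-signature strings from a contract ABI.
// The tiny ABI below is a hypothetical inline sample; with Cana CLI you
// would load the generated abi.json from the contracts-analyzed/ folder instead.

type AbiInput = { name: string; type: string };
type AbiEntry = { type: string; name?: string; inputs?: AbiInput[] };

const abi: AbiEntry[] = [
  {
    type: "event",
    name: "Transfer",
    inputs: [
      { name: "from", type: "address" },
      { name: "to", type: "address" },
      { name: "value", type: "uint256" },
    ],
  },
  { type: "function", name: "balanceOf", inputs: [{ name: "owner", type: "address" }] },
];

// An event signature is the event name plus its comma-joined parameter types,
// the same form used by the `event:` field of a subgraph manifest's eventHandlers.
function eventSignatures(entries: AbiEntry[]): string[] {
  return entries
    .filter((e) => e.type === "event" && e.name !== undefined)
    .map((e) => `${e.name}(${(e.inputs ?? []).map((i) => i.type).join(",")})`);
}

console.log(eventSignatures(abi).join("\n")); // → Transfer(address,address,uint256)
```

For the sample ABI this yields the signature for `Transfer`, which could be pasted into a manifest as `event: Transfer(address,address,uint256)`.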
+ +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### आवश्यक शर्तें + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of chains added run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +या + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. 
Understanding the Output
+
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+    ├── contract/              # Folder for individual contract files
+    ├── abi.json               # Contract ABI
+    └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### निष्कर्ष
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/hi/subgraphs/guides/enums.mdx b/website/src/pages/hi/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..b4883c3bce8b
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: NFT मार्केटप्लेस को Enums का उपयोग करके वर्गीकृत करें
+---
+
+Enums का उपयोग करके अपने कोड को साफ और कम त्रुटिपूर्ण बनाएं। यहां NFT मार्केटप्लेस पर Enums के उपयोग का एक पूरा उदाहरण है।
+
+## Enums क्या हैं?
+
+Enums, या enumeration types, एक विशिष्ट डेटा प्रकार होते हैं जो आपको विशिष्ट, अनुमत मानों का एक सेट परिभाषित करने की अनुमति देते हैं।
+
+### अपने Schema में Enums का उदाहरण
+
+यदि आप एक Subgraph बना रहे हैं जो मार्केटप्लेस पर टोकनों के स्वामित्व इतिहास को ट्रैक करता है, तो प्रत्येक टोकन विभिन्न स्वामित्वों से गुजर सकता है, जैसे OriginalOwner, SecondOwner और ThirdOwner। Enums का उपयोग करके, आप इन विशिष्ट स्वामित्वों को परिभाषित कर सकते हैं, जिससे यह सुनिश्चित होगा कि केवल पूर्वनिर्धारित मान ही असाइन किए जाएं।
+
+आप अपनी स्कीमा में enums को परिभाषित कर सकते हैं, और एक बार परिभाषित हो जाने के बाद, आप enum के मानों की स्ट्रिंग प्रस्तुति का उपयोग करके एक enum फ़ील्ड को एक entity पर सेट कर सकते हैं।
+
+यहां आपके स्कीमा में एक enum परिभाषा इस प्रकार हो सकती है, उपरोक्त उदाहरण के आधार पर:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+इसका मतलब है कि जब आप अपने स्कीमा में TokenStatus प्रकार का उपयोग करते हैं, तो आप इसकी अपेक्षा करते हैं कि यह पहले से परिभाषित मानों में से एक हो: OriginalOwner, SecondOwner, या ThirdOwner, जिससे निरंतरता और वैधता सुनिश्चित होती है।
+
+इस बारे में अधिक जानने के लिए [Creating a Subgraph](/developing/creating-a-subgraph/#enums) और [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types) देखें।
+
+## Enums का उपयोग करने के लाभ
+
+- स्पष्टता: Enums मानों के लिए सार्थक नाम प्रदान करते हैं, जिससे डेटा को समझना आसान होता है।
+- सत्यापन: Enums कड़े मान मान्यताएँ लागू करते हैं, जो अवैध डेटा प्रविष्टियों को रोकते हैं।
+- रखरखाव: जब आपको नई श्रेणियाँ जोड़ने या enums बदलने की आवश्यकता हो, तो आप इसे एक केंद्रित तरीके से कर सकते हैं।
+
+### बिना Enums
+
+यदि आप Enum का उपयोग करने के बजाय प्रकार को एक स्ट्रिंग के रूप में परिभाषित करते हैं, तो आपका कोड इस प्रकार दिख सकता है:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+  owner: Bytes! # Owner of the token
+  tokenStatus: String! # String field to track token status
+  timestamp: BigInt!
+}
+```
+
+इस स्कीमा में, TokenStatus एक साधारण स्ट्रिंग है जिसमें कोई विशिष्ट, अनुमत मान नहीं होते हैं।
+
+#### यह एक समस्या क्यों है?
+
+- TokenStatus मानों की कोई सीमा नहीं है, इसलिए कोई भी स्ट्रिंग गलती से असाइन की जा सकती है। इससे यह सुनिश्चित करना कठिन हो जाता है कि केवल वैध स्टेटस जैसे OriginalOwner, SecondOwner, या ThirdOwner सेट किए जाएं।
+- OriginalOwner के बजाय Orgnalowner जैसे टाइपो करना आसान है, जिससे डेटा और संभावित queries अविश्वसनीय हो सकती हैं।
+
+### Enums के साथ
+
+फ्री-फॉर्म स्ट्रिंग्स असाइन करने के बजाय, आप TokenStatus के लिए एक enum परिभाषित कर सकते हैं जिसमें विशिष्ट मान हों: OriginalOwner, SecondOwner, या ThirdOwner। enum का उपयोग करने से यह सुनिश्चित होता है कि केवल अनुमत मान ही उपयोग किए जाएं।
+
+Enums प्रकार सुरक्षा प्रदान करते हैं, टाइपो के जोखिम को कम करते हैं, और सुनिश्चित करते हैं कि परिणाम लगातार और विश्वसनीय हों।
+
+## NFT मार्केटप्लेस के लिए Enums को परिभाषित करना
+
+> नोट: निम्नलिखित guide CryptoCoven NFT स्मार्ट कॉन्ट्रैक्ट का उपयोग करती है।
+
+NFTs के व्यापार किए जाने वाले विभिन्न मार्केटप्लेस के लिए enums को परिभाषित करने के लिए, अपने Subgraph स्कीमा में निम्नलिखित का उपयोग करें:
+
+```gql
+# मार्केटप्लेस के लिए Enum जिनके साथ CryptoCoven कॉन्ट्रैक्ट ने इंटरैक्ट किया है (संभवत: ट्रेड/मिंट)
+enum Marketplace {
+  OpenSeaV1 # जब CryptoCoven NFT को इस बाजार में व्यापार किया जाता है
+  OpenSeaV2 # जब CryptoCoven NFT को OpenSeaV2 बाजार में व्यापार किया जाता है
+  SeaPort # जब CryptoCoven NFT को SeaPort बाजार में व्यापार किया जाता है
+  LooksRare # जब CryptoCoven NFT को LooksRare बाजार में व्यापार किया जाता है
+  # ...और अन्य बाजार
+}
+```
+
+## NFT Marketplaces के लिए Enums का उपयोग
+
+एक बार परिभाषित हो जाने पर, enums का उपयोग आपके Subgraph में transactions या events को वर्गीकृत करने के लिए किया जा सकता है।
+
+उदाहरण के लिए, जब आप NFT बिक्री लॉग करते हैं, तो आप ट्रेड में शामिल मार्केटप्लेस को enum का उपयोग करके निर्दिष्ट कर सकते हैं।
+
+### NFT मार्केटप्लेस के लिए एक फंक्शन लागू करना
+
+यहाँ बताया गया है कि आप एक फ़ंक्शन
को कैसे लागू कर सकते हैं जो enum से मार्केटप्लेस का नाम एक स्ट्रिंग के रूप में प्राप्त करता है:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // if-else कथनों का उपयोग करके enum मान को एक स्ट्रिंग में मैप करें
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // यदि बाज़ार OpenSeaV1 है, तो इसकी स्ट्रिंग प्रतिनिधित्व लौटाएँ
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // यदि बाज़ार SeaPort है, तो इसकी स्ट्रिंग प्रतिनिधित्व लौटाएँ
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // यदि बाज़ार LooksRare है, तो इसकी स्ट्रिंग प्रतिनिधित्व लौटाएँ
+    // ... और अन्य बाज़ार
+  } else {
+    return 'Unknown' // अज्ञात बाज़ारों के लिए डिफ़ॉल्ट मान, ताकि हर पथ एक स्ट्रिंग लौटाए
+  }
+}
+```
+
+## Enums का उपयोग करने के लिए सर्वोत्तम प्रथाएँ
+
+- सुसंगत नामकरण: पठनीयता को बेहतर बनाने के लिए enum मानों के लिए स्पष्ट, वर्णनात्मक नामों का उपयोग करें।
+- केंद्रीकृत प्रबंधन: एकल फ़ाइल में enums रखें ताकि सुसंगतता बनी रहे। इससे enums को अपडेट करना आसान हो जाता है और यह सत्य का एकमात्र source बनता है।
+- दस्तावेज़ीकरण: enums में उनके उद्देश्य और उपयोग को स्पष्ट करने के लिए टिप्पणियाँ जोड़ें।
+
+## queries में Enums का उपयोग करना
+
+क्वेरी में Enums आपके डेटा की गुणवत्ता में सुधार करने और आपके परिणामों को समझने में आसान बनाने में मदद करते हैं। ये फ़िल्टर और प्रतिक्रिया तत्व के रूप में कार्य करते हैं, बाज़ार के मूल्यों में स्थिरता सुनिश्चित करते हैं और त्रुटियों को कम करते हैं।
+
+**विशिष्टताएँ**
+
+- **Enums के साथ फ़िल्टरिंग:** Enums स्पष्ट फ़िल्टर प्रदान करते हैं, जिससे आप निश्चित रूप से विशिष्ट मार्केटप्लेस को शामिल या बाहर कर सकते हैं।
+- **प्रतिक्रियाओं में Enums:** Enums यह सुनिश्चित करते हैं कि केवल मान्यता प्राप्त मार्केटप्लेस नाम ही वापस आएं, जिससे परिणाम मानकीकृत और सटीक हों।
+
+### नमूना queries
+
+#### Query 1: सबसे अधिक NFT मार्केटप्लेस इंटरएक्शन वाला खाता
+
+यह क्वेरी निम्नलिखित कार्य करती है:
+
+- यह खाते को खोजता है जिसमें सबसे अधिक अनूठे NFT मार्केटप्लेस इंटरैक्शन होते हैं, जो
क्रॉस-मार्केटप्लेस गतिविधि का विश्लेषण करने के लिए बेहतरीन है। +- मार्केटप्लेस फील्ड marketplace एनम का उपयोग करता है, जो प्रतिक्रिया में सुसंगत और मान्य मार्केटप्लेस मान सुनिश्चित करता है। + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### रिटर्न्स + +यह प्रतिक्रिया खाता विवरण और मानकीकृत स्पष्टता के लिए एनम मानों के साथ अद्वितीय मार्केटप्लेस इंटरैक्शन्स की सूची प्रदान करती है: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: CryptoCoven transactions के लिए सबसे सक्रिय बाज़ार + +यह क्वेरी निम्नलिखित कार्य करती है: + +- यह उस मार्केटप्लेस की पहचान करता है जहां CryptoCoven लेनदेन का सबसे अधिक वॉल्यूम होता है। +- यह मार्केटप्लेस enum का उपयोग करता है ताकि प्रतिक्रिया में केवल मान्य मार्केटप्लेस प्रकार ही दिखाई दें, जिससे आपके डेटा में विश्वसनीयता और स्थिरता बनी रहती है। + +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### परिणाम 2 + +अपेक्षित प्रतिक्रिया में मार्केटप्लेस और संबंधित transaction संख्या शामिल है, जो मार्केटप्लेस प्रकार को संकेत करने के लिए enum का उपयोग करती है: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### प्रश्न 3: उच्च लेन-देन गणना के साथ बाज़ार 
परस्पर क्रियाएँ + +यह क्वेरी निम्नलिखित कार्य करती है: + +- यह 100 से अधिक transactions वाले शीर्ष चार बाजारों को पुनः प्राप्त करता है, "Unknown" बाजारों को छोड़कर। +- यह केवल वैध मार्केटप्लेस प्रकारों को शामिल करने के लिए फ़िल्टर के रूप में एंनम का उपयोग करता है, जिससे सटीकता बढ़ती है। + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### परिणाम 3 + +अपेक्षित आउटपुट में उन मार्केटप्लेस का समावेश है जो मानदंडों को पूरा करते हैं, प्रत्येक को एक enum मान द्वारा प्रदर्शित किया जाता है: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +अधिक जानकारी के लिए, इस guide's के [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums) को देखें। diff --git a/website/src/pages/hi/subgraphs/guides/grafting.mdx b/website/src/pages/hi/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..4c2f59e18ed0 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/grafting.mdx @@ -0,0 +1,204 @@ +--- +title: एक कॉन्ट्रैक्ट बदलें और उसका इतिहास ग्राफ्टिंग के साथ रखें +--- + +इस गाइड में, आप मौजूदा Subgraph को ग्राफ्ट करके नए Subgraph को बनाना और तैनात करना सीखेंगे। + +## ग्राफ्टिंग क्या है? 
+ +Grafting मौजूदा Subgraph से डेटा को पुनः उपयोग करता है और इसे बाद के ब्लॉक पर indexing करना शुरू करता है। यह विकास के दौरान सरल त्रुटियों को जल्दी से पार करने या किसी मौजूदा Subgraph को फिर से कार्यशील बनाने के लिए उपयोगी है, जब यह विफल हो जाता है। साथ ही, जब किसी Subgraph में कोई ऐसा फीचर जोड़ा जाता है जिसे शुरू से इंडेक्स करने में अधिक समय लगता है, तब भी इसका उपयोग किया जा सकता है। + +ग्राफ्टेड Subgraph एक ग्राफक्यूएल स्कीमा का उपयोग कर सकता है जो बेस Subgraph के समान नहीं है, लेकिन इसके अनुकूल हो। यह अपने आप में एक मान्य Subgraph स्कीमा होना चाहिए, लेकिन निम्नलिखित तरीकों से बेस Subgraph के स्कीमा से विचलित हो सकता है: + +- यह इकाई के प्रकारों को जोड़ या हटा सकता है| +- यह इकाई प्रकारों में से गुणों को हटाता है| +- यह प्रभावहीन गुणों को इकाई प्रकारों में जोड़ता है| +- यह प्रभाव वाले गुणों को प्रभावहीन गुणों में बदल देता है| +- यह इनम्स में महत्व देता है| +- यह इंटरफेस जोड़ता या हटाता है| +- यह कि, किन इकाई प्रकारों के लिए इंटरफ़ेस लागू होगा, इसे बदल देता है| + +अधिक जानकारी के लिए आप देख सकते हैं: + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +इस ट्यूटोरियल में, हम एक बेसिक use case कवर करेंगे। हम एक मौजूदा contract को एक identical contract से replace करेंगे (जिसका नया address होगा, लेकिन code वही रहेगा)। इसके बाद, मौजूदा Subgraph को उस "base" Subgraph से graft करेंगे, जो नए contract को track करता है। + +## Important Note on Grafting When Upgrading to the Network + +> Caution: यह अनुशंसा की जाती है कि The Graph Network पर प्रकाशित किए गए Subgraphs के लिए grafting का उपयोग न करें। + +### यह क्यों महत्वपूर्ण है? 
+
+Grafting एक शक्तिशाली feature है जो आपको एक Subgraph को दूसरे पर "graft" करने की सुविधा देता है, जिससे मौजूदा Subgraph का historical data नए version में प्रभावी रूप से ट्रांसफर हो जाता है। The Graph Network से वापस Subgraph Studio में किसी Subgraph को graft करना संभव नहीं है।
+
+### Best Practices
+
+Initial Migration: जब आप अपना Subgraph पहली बार decentralized network पर deploy करें, तो इसे grafting के बिना करें। सुनिश्चित करें कि Subgraph स्थिर है और अपेक्षित रूप से कार्य कर रहा है।
+
+Subsequent Updates: जब आपका Subgraph decentralized network पर live और stable हो जाए, तो आप भविष्य के versions के लिए grafting का उपयोग कर सकते हैं ताकि transition स्मूथ हो और historical data संरक्षित रहे।
+
+इन guidelines का पालन करके, आप risks को कम करते हैं और एक smooth migration प्रक्रिया सुनिश्चित करते हैं।
+
+## एक मौजूदा सब-ग्राफ बनाना
+
+Subgraphs बनाना The Graph का एक आवश्यक हिस्सा है, जिसे और गहराई से यहाँ समझाया गया है। इस ट्यूटोरियल में उपयोग किए गए मौजूदा Subgraph को build और deploy करने के लिए निम्नलिखित repo प्रदान किया गया है:
+
+- [Subgraph उदाहरण रिपॉजिटरी](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Note: Subgraph में उपयोग किया गया contract निम्नलिखित [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit) से लिया गया है।
+
+## सब ग्राफ मैनिफेस्ट की परिभाषा
+
+Subgraph manifest `subgraph.yaml` Subgraph के लिए data sources, महत्वपूर्ण triggers, और उन triggers के जवाब में चलने वाले functions को निर्दिष्ट करता है। नीचे एक उदाहरण Subgraph manifest दिया गया है, जिसे आप उपयोग करेंगे:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler:
handleWithdrawal + file: ./src/lock.ts +``` + +- `Lock` डेटा स्रोत वह ABI और अनुबंध पता है जो हमें तब मिलेगा जब हम अनुबंध को संकलित और तैनात करेंगे। +- नेटवर्क को एक इंडेक्स किए गए नेटवर्क के अनुरूप होना चाहिए जिसे क्वेरी किया जा रहा है। चूंकि हम सेपोलीया टेस्टनेट पर चल रहे हैं, नेटवर्क `sepolia` है। +- `mapping` सेक्शन उन ट्रिगर्स को परिभाषित करता है जो दिलचस्प होते हैं और उन ट्रिगर्स के प्रतिक्रिया में चलने वाली कार्यों को परिभाषित करता है। इस मामले में, हम Withdrawal इवेंट की प्रतीक्षा कर रहे हैं और जब यह इवेंट उत्पन्न होता है, तो `handleWithdrawal` कार्य को कॉल किया जाता है। + +## ग्राफ्टिंग मैनिफेस्ट की परिभाषा + +Grafting के लिए मूल Subgraph manifest में दो नए आइटम जोड़ने की आवश्यकता होती है: + +```yaml +-- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` सभी उपयोग किए गए [विशेषताओं के नाम](/developing/creating-a-subgraph/#experimental-features) की एक सूची है। +- graft: एक map है जो base Subgraph और जिस block पर graft करना है, उसे परिभाषित करता है।block वह block number है जिससे indexing शुरू करनी है।The Graph base Subgraph का डेटा दिए गए block तक (और उसे शामिल करते हुए) कॉपी करेगा और फिर उसी block से नए Subgraph की indexing जारी रखेगा। + +base और block मान प्राप्त करने के लिए दो Subgraphs deploy करने होते हैं:Base indexing के लिए एक SubgraphGrafting वाले नए Subgraph के लिए एक Subgraph + +## बेस सब-ग्राफ को तैनात करना + +1. [Subgraph Studio](https://thegraph.com/studio/) पर जाएं और Sepolia testnet पर graft-example नाम से एक Subgraph बनाएं। +2. अपने Subgraph पेज के AUTH & DEPLOY सेक्शन में दिए गए निर्देशों का पालन करें और रिपोजिटरी की graft-example फोल्डर से Subgraph को डिप्लॉय करें। +3. 
एक बार पूरा होने पर, सत्यापित करें कि इंडेक्सिंग सही ढंग से हो गई है। यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+तो हमें कुछ ऐसा दिखता है:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+एक बार जब आप सुनिश्चित कर लें कि Subgraph सही तरीके से indexing कर रहा है, तो आप grafting का उपयोग करके इसे तेज़ी से अपडेट कर सकते हैं।
+
+## ग्राफ्टिंग सब-ग्राफ तैनात करना
+
+ग्राफ्ट प्रतिस्थापन subgraph.yaml में एक नया कॉन्ट्रैक्ट एड्रेस होगा। यह तब हो सकता है जब आप अपना डैप अपडेट करें, कॉन्ट्रैक्ट को दोबारा तैनात करें, इत्यादि।
+
+1. [Subgraph Studio](https://thegraph.com/studio/) पर जाएं और Sepolia testnet पर graft-replacement नाम से एक Subgraph बनाएं।
+2. एक नया manifest बनाएँ। graft-replacement के लिए subgraph.yaml में एक अलग contract address और grafting के लिए नई जानकारी होगी। इसमें निम्नलिखित शामिल होंगे:
+   - `block` – यह पुराने contract द्वारा उत्पन्न आखिरी event का block नंबर है, जिससे आप grafting शुरू करना चाहते हैं। आखिरी event का ट्रांजैक्शन यहाँ देखें: https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452
+   - `base` – यह पुराने Subgraph का Subgraph ID है। base Subgraph ID = आपके मूल graft-example Subgraph का Deployment ID। इसे Subgraph Studio में जाकर प्राप्त किया जा सकता है।
+3. अपने Subgraph पेज के AUTH & DEPLOY सेक्शन में दिए गए निर्देशों का पालन करें और रिपोजिटरी की graft-replacement फोल्डर से Subgraph को डिप्लॉय करें।
+4.
एक बार पूरा होने पर, सत्यापित करें कि इंडेक्सिंग सही ढंग से हो गई है। यदि आप निम्न कमांड ग्राफ प्लेग्राउंड में चलाते हैं:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+आपको यह वापस मिलना चाहिए:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+आप देख सकते हैं कि graft-replacement Subgraph पुराने graft-example डेटा और नए contract address से आने वाले डेटा को एक साथ index कर रहा है। मूल contract ने दो Withdrawal events उत्पन्न किए:
+
+- Event 1: https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d
+- Event 2: https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452
+
+नए contract ने एक Withdrawal event उत्पन्न किया:
+
+- Event 3: https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af
+
+अब, इन दोनों पुराने transactions (Event 1 और 2) और नए transaction (Event 3) को graft-replacement Subgraph में एक साथ जोड़ दिया गया है।
+
+बधाई हो!
आपने सफलतापूर्वक एक Subgraph को दूसरे Subgraph पर graft कर लिया है। + +## Additional Resources + +यदि आप grafting के साथ अधिक अनुभव प्राप्त करना चाहते हैं, तो यहां कुछ लोकप्रिय कॉन्ट्रैक्ट्स के उदाहरण दिए गए हैं: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), + +ग्राफ विशेषज्ञ बनने के लिए, अन्य तरीकों के बारे में जानने पर विचार करें जो अंतर्निहित डेटा स्रोतों में परिवर्तन को संभाल सकते हैं। [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) जैसे विकल्प समान परिणाम प्राप्त कर सकते हैं। + +> ध्यान दें: इस लेख की अधिकांश सामग्री को पहले प्रकाशित [Arweave article](/subgraphs/cookbook/arweave/) से लिया गया है। diff --git a/website/src/pages/hi/subgraphs/guides/near.mdx b/website/src/pages/hi/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..03e1425484bf --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/near.mdx @@ -0,0 +1,284 @@ +--- +title: NEAR पर सबग्राफ बनाना +--- + +यह गाइड [NEAR ब्लॉकचेन](https://docs.near.org/) पर स्मार्ट contract को इंडेक्स करने वाले Subgraphs बनाने की एक परिचयात्मक गाइड है।सबग्राफ + +## NEAR क्या है? + +[NEAR](https://near.org/) एक स्मार्ट contract प्लेटफ़ॉर्म है जो विकेंद्रीकृत applications बनाने के लिए है। अधिक जानकारी के लिए [official documentation](https://docs.near.org/concepts/basics/protocol) देखें। + +## NEAR Subgraphs क्या हैं? 
+ +The Graph डेवलपर्स को ब्लॉकचेन इवेंट्स को प्रोसेस करने और परिणामी डेटा को आसानी से एक GraphQL API के माध्यम से उपलब्ध कराने के टूल्स देता है, जिसे व्यक्तिगत रूप से एक सबग्राफ के रूप में जाना जाता है। [Graph Node](https://github.com/graphprotocol/graph-node) अब NEAR इवेंट्स को प्रोसेस करने में सक्षम है, जिसका अर्थ है कि NEAR डेवलपर्स अब अपने स्मार्ट contract को इंडेक्स करने के लिए Subgraphs बना सकते हैं। + +सबग्राफ इवेंट-आधारित होते हैं, जिसका अर्थ है कि वे ऑनचेन इवेंट्स को सुनते हैं और फिर उन्हें प्रोसेस करते हैं। वर्तमान में, NEAR सबग्राफ के लिए दो प्रकार के handlers समर्थित हैं: + +- ब्लॉक हैंडलर्स: ये हर नए ब्लॉक पर चलते हैं +- रसीद हैंडलर: किसी निर्दिष्ट खाते पर संदेश निष्पादित होने पर हर बार चलें + +[NEAR दस्तावेज़ से:](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt) + +> रसीद सिस्टम में एकमात्र कार्रवाई योग्य वस्तु है। जब हम NEAR प्लेटफॉर्म पर "एक लेन-देन को संसाधित करने" के बारे में बात करते हैं, तो अंततः इसका अर्थ किसी बिंदु पर "रसीदें लागू करना" होता है। + +## NEAR सबग्राफ बनाना + +`@graphprotocol/graph-cli` एक कमांड-लाइन टूल है जो सबग्राफ बनाने और डिप्लॉय करने के लिए उपयोग किया जाता है। + +`@graphprotocol/graph-ts` एक लाइब्रेरी है जो सबग्राफ-विशिष्ट प्रकार प्रदान करती है। + +NEAR सबग्राफ विकास के लिए `graph-cli` का संस्करण 0.23.0 से ऊपर और `graph-ts` का संस्करण `0.23.0` से ऊपर होना आवश्यक है। + +> NEAR सबग्राफ बनाना Ethereum को इंडेक्स करने वाले सबग्राफ बनाने के समान ही है। + +सबग्राफ परिभाषा के तीन पहलू हैं: + +**subgraph.yaml:** सबग्राफ मैनिफेस्ट, जो आवश्यक डेटा स्रोतों को परिभाषित करता है और उन्हें कैसे प्रोसेस किया जाना चाहिए। NEAR एक नया kind का डेटा स्रोत है। + +**schema.graphql:** एक स्कीमा फ़ाइल है जो यह परिभाषित करती है कि आपके सबग्राफ के लिए कौन सा डेटा संग्रहीत किया जाता है और इसे GraphQL के माध्यम से कैसे क्वेरी किया जाए। NEAR सबग्राफ के लिए आवश्यकताओं को [मौजूदा दस्तावेज़ीकरण](/developing/creating-a-subgraph/#the-graphql-schema) द्वारा कवर किया गया है। + +असेम्बलीस्क्रिप्ट मैपिंग्स: [AssemblyScript 
code](/subgraphs/developing/creating/graph-ts/api/) जो इवेंट डेटा से आपके स्कीमा में परिभाषित एंटिटीज़ में अनुवाद करता है। NEAR समर्थन NEAR-विशिष्ट डेटा प्रकार और नई JSON पार्सिंग कार्यक्षमता पेश करता है। + +Subgraph को बनाते वक़्त दो मुख्य कमांड हैं: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +``` + +### सब ग्राफ मैनिफेस्ट की परिभाषा + +सबग्राफ manifest (`subgraph.yaml`) उन डेटा स्रोतों की पहचान करता है जो सबग्राफ के लिए आवश्यक हैं, उन ट्रिगर्स को निर्दिष्ट करता है जिनमें रुचि है, और उन फ़ंक्शनों को परिभाषित करता है जिन्हें उन ट्रिगर्स के जवाब में चलाया जाना चाहिए। नीचे NEAR सबग्राफ के लिए एक उदाहरण सबग्राफ manifest दिया गया है: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings +``` + +- NEAR सबग्राफ ने एक नए kind का data source (`near`) पेश किया है। +- `network` को होस्टिंग ग्राफ-नोड पर एक नेटवर्क से मेल खाना चाहिए। सबग्राफ Studio पर, NEAR का मेननेट `near-mainnet` है, और NEAR का टेस्टनेट `near-testnet` है। +- NEAR डेटा स्रोतों में एक वैकल्पिक `source.account` फ़ील्ड पेश किया गया है, जो एक मानव-पठनीय आईडी है जो एक [NEAR खाता](https://docs.near.org/concepts/protocol/account-model) से मेल खाती है। यह एक खाता या एक उप-खाता हो सकता है। +- NEAR डेटा स्रोत वैकल्पिक `source.accounts` फ़ील्ड पेश करते हैं, जिसमें वैकल्पिक उपसर्ग और प्रत्यय होते हैं। कम से कम उपसर्ग या 
प्रत्यय में से एक निर्दिष्ट किया जाना चाहिए, ये किसी भी खाते से मेल खाएंगे जो सूचीबद्ध मानों से शुरू या समाप्त होता है। नीचे दिया गया उदाहरण निम्नलिखित के लिए मेल खाएगा: `[app|good].*[morning.near|morning.testnet]`। यदि केवल उपसर्गों या प्रत्ययों की सूची आवश्यक हो तो दूसरा फ़ील्ड हटाया जा सकता है।
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+NEAR डेटा स्रोत दो प्रकार के हैंडलर का समर्थन करते हैं:
+
+- `blockHandlers`: हर नए NEAR ब्लॉक पर चलते हैं। कोई `source.account` आवश्यक नहीं है।
+- `receiptHandlers`: हर रसीद पर तब चलाए जाते हैं जब डेटा स्रोत का `source.account` प्राप्तकर्ता हो। ध्यान दें कि केवल बिल्कुल मिलान वाले ही प्रोसेस किए जाते हैं ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) को स्वतंत्र डेटा स्रोत के रूप में जोड़ा जाना चाहिए)।
+
+### स्कीमा की परिभाषा
+
+Schema परिभाषा परिणामस्वरूप बनने वाले सबग्राफ डेटाबेस की संरचना और इकाइयों के बीच संबंधों का वर्णन करती है। यह मूल डेटा स्रोत से स्वतंत्र होती है। सबग्राफ schema परिभाषा के बारे में अधिक विवरण [यहाँ](/developing/creating-a-subgraph/#the-graphql-schema) उपलब्ध हैं।
+
+### असेंबली स्क्रिप्ट मैप्पिंग्स
+
+आयोजनों को प्रोसेस करने के लिए handlers [AssemblyScript](https://www.assemblyscript.org/) में लिखे गए हैं।
+
+NEAR इंडेक्सिंग [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) में NEAR-विशिष्ट डेटा प्रकारों को पेश करती है।
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // हमेशा शून्य जब संस्करण < V3
+  epochId: Bytes,
+  nextEpochId:
Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+ये प्रकार block और receipt handlers को पास किए जाते हैं:
+
+- ब्लॉक handler को एक `Block` प्राप्त होगा।
+- रसीद handler को `ReceiptWithOutcome` प्राप्त होगा।
+
+अन्यथा, शेष [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) NEAR सबग्राफ डेवलपर्स के लिए मैपिंग निष्पादन के दौरान उपलब्ध है।
+
+इसमें एक नया JSON पार्सिंग फ़ंक्शन शामिल है - NEAR पर लॉग्स अक्सर stringified JSON के रूप में जारी किए जाते हैं। एक नया `json.fromString(...)` फ़ंक्शन [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) के रूप में उपलब्ध है, जो डेवलपर्स को इन लॉग्स को आसानी से प्रोसेस करने की अनुमति देता है।
+
+## एक NEAR सबग्राफ की तैनाती
+
+एक बार जब आपने सबग्राफ बना लिया है, तो इसे ग्राफ-नोड पर indexing के लिए डिप्लॉय करने का समय आ गया है। NEAR सबग्राफ को किसी भी ग्राफ-नोड >=v0.26.x पर डिप्लॉय किया जा सकता है (यह संस्करण अभी तक टैग और जारी नहीं किया गया है)।
+
+Subgraph Studio और The Graph Network पर upgrade Indexer वर्तमान में बीटा में NEAR mainnet और testnet की indexing का समर्थन करते हैं, निम्नलिखित नेटवर्क नामों के साथ:
+
+- `near-mainnet`
+- `near-testnet`
+
+सबग्राफ Studio पर सबग्राफ बनाने और तैनात करने के बारे में अधिक जानकारी [यहाँ](/deploying/deploying-a-subgraph-to-studio/) पाई जा सकती है।
+
+पहला कदम आपका सबग्राफ "बनाना" है - यह केवल एक बार करने की आवश्यकता होती है। सबग्राफ Studio पर, इसे [आपके डैशबोर्ड](https://thegraph.com/studio/) से "Create a Subgraph" के माध्यम से किया जा सकता है।
+
+एक बार जब आपका सबग्राफ बना लिया जाता है, तो आप `graph deploy`
CLI कमांड का उपयोग करके अपने सबग्राफ को डिप्लॉय कर सकते हैं।
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # एक स्थानीय ग्राफ-नोड पर सबग्राफ बनाता है (सबग्राफ Studio पर, यह UI के माध्यम से किया जाता है)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # निर्मित फ़ाइलों को निर्दिष्ट IPFS endpoint पर अपलोड करता है, और फिर manifest IPFS hash के आधार पर निर्दिष्ट ग्राफ-नोड पर सबग्राफ को डिप्लॉय करता है
+```
+
+नोड कॉन्फ़िगरेशन इस बात पर निर्भर करेगा कि सबग्राफ कहाँ तैनात किया जा रहा है।
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### स्थानीय ग्राफ़ नोड (डिफ़ॉल्ट कॉन्फ़िगरेशन पर आधारित)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+एक बार जब आपका सबग्राफ डिप्लॉय हो जाता है, तो इसे ग्राफ-नोड द्वारा इंडेक्स किया जाएगा। आप खुद सबग्राफ को क्वेरी करके इसकी प्रगति की जांच कर सकते हैं।
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### एक स्थानीय ग्राफ़ नोड के साथ NEAR को अनुक्रमणित करना
+
+NEAR को अनुक्रमित करने वाले ग्राफ़ नोड को चलाने के लिए निम्नलिखित परिचालन आवश्यकताएँ हैं:
+
+- Firehose इंस्ट्रूमेंटेशन के साथ NEAR इंडेक्सर फ्रेमवर्क
+- NEAR Firehose कंपोनेंट्(स)
+- Firehose एंडपॉइन्ट के साथ ग्राफ़ नोड कॉन्फ़िगर किया गया
+
+हम जल्द ही उपरोक्त कंपोनेंट्स को चलाने के बारे में और जानकारी प्रदान करेंगे।
+
+## NEAR सबग्राफ को क्वेरी करना
+
+NEAR Subgraphs के लिए GraphQL एंडपॉइंट स्कीमा परिभाषा द्वारा निर्धारित किया जाता है, जिसमें मौजूदा API इंटरफेस शामिल होता है। अधिक जानकारी के लिए कृपया [GraphQL API](/subgraphs/querying/graphql-api/) दस्तावेज़ देखें।
+
+## सब-ग्राफ के उदाहरण
+
+यहाँ कुछ उदाहरण सबग्राफ संदर्भ के लिए दिए गए हैं:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### बीटा कैसे काम करता है?
+
+NEAR समर्थन बीटा में है, जिसका अर्थ है कि जैसे-जैसे हम एकीकरण को बेहतर बनाने पर काम कर रहे हैं, API में परिवर्तन हो सकते हैं। कृपया हमें near@thegraph.com पर ईमेल करें ताकि हम आपको NEAR सबग्राफ बनाने में सहायता कर सकें और आपको नवीनतम विकास से अपडेट रख सकें!
+
+### क्या सबग्राफ दोनों NEAR और EVM चेन को इंडेक्स कर सकता है?
+
+नहीं, एक सब-ग्राफ केवल एक चेन/नेटवर्क से डाटा सोर्स को सपोर्ट कर सकता है।
+
+### क्या सबग्राफ अधिक विशिष्ट ट्रिगर्स पर प्रतिक्रिया कर सकते हैं?
+
+वर्तमान में, केवल ब्लॉक और रसीद (receipt) ट्रिगर समर्थित हैं। हम एक निर्दिष्ट खाते में फ़ंक्शन कॉल के लिए ट्रिगर्स की जांच कर रहे हैं। एक बार जब NEAR को नेटिव ईवेंट समर्थन मिल जाता है, तो हम ईवेंट ट्रिगर्स का समर्थन करने में भी रुचि रखते हैं।
+
+### क्या रसीद हैंडलर खातों और उनके उप-खातों के लिए ट्रिगर करेंगे?
+
+यदि कोई `account` निर्दिष्ट किया गया है, तो यह केवल सटीक खाता नाम से मेल खाएगा। उप-खातों से मेल करना संभव है यदि `accounts` फ़ील्ड निर्दिष्ट की गई हो, जिसमें `suffixes` और `prefixes` शामिल हों ताकि खाते और उप-खाते मेल खा सकें। उदाहरण के लिए, निम्नलिखित सभी `mintbase1.near` उप-खातों से मेल खाएगा:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### क्या NEAR सबग्राफ मैपिंग्स के दौरान NEAR खातों पर view कॉल कर सकते हैं?
+
+यह समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं।
+
+### क्या मैं अपने NEAR सबग्राफ में data source templates का उपयोग कर सकता हूँ?
+
+यह वर्तमान में समर्थित नहीं है। हम मूल्यांकन कर रहे हैं कि अनुक्रमण के लिए यह कार्यक्षमता आवश्यक है या नहीं।
+
+### Ethereum सबग्राफ "pending" और "current" संस्करणों का समर्थन करते हैं, मैं NEAR सबग्राफ का "pending" संस्करण कैसे तैनात कर सकता हूँ?
+ +NEAR सबग्राफ के लिए लंबित कार्यक्षमता अभी तक समर्थित नहीं है। इस बीच, आप एक नए संस्करण को एक अलग "named" सबग्राफ पर तैनात कर सकते हैं, और जब वह चेन हेड के साथ सिंक हो जाता है, तो आप अपने प्राथमिक "named" सबग्राफ पर पुनः तैनाती कर सकते हैं, जो उसी अंतर्निहित deployment ID का उपयोग करेगा, जिससे मुख्य सबग्राफ तुरंत सिंक हो जाएगा। + +### मेरा प्रश्न अभी तक उत्तरित नहीं हुआ है, मुझे NEAR सबग्राफ बनाने में और सहायता कहाँ मिल सकती है? + +यदि यह सबग्राफ विकास से संबंधित एक सामान्य प्रश्न है, तो शेष [Developer documentation](/subgraphs/quick-start/) में बहुत अधिक जानकारी उपलब्ध है। अन्यथा, कृपया [The Graph Protocol Discord](https://discord.gg/graphprotocol) से जुड़ें और #near चैनल में पूछें या near@thegraph.com पर ईमेल करें। + +## संदर्भ + +- [NEAR डेवलपर दस्तावेज़](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/hi/subgraphs/guides/polymarket.mdx b/website/src/pages/hi/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..4584e1f127dc --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/polymarket.mdx @@ -0,0 +1,149 @@ +--- +title: ब्लॉकचेन डेटा को पोलिमार्केट से Subgraphs पर The Graph के साथ क्वेरी करना +sidebarTitle: Polymarket डेटा क्वेरी करें +--- + +Polymarket के ऑनचेन डेटा को GraphQL के माध्यम से सबग्राफ का उपयोग करके The Graph Network पर क्वेरी करें। सबग्राफ विकेंद्रीकृत API हैं, जिन्हें The Graph द्वारा संचालित किया जाता है, जो ब्लॉकचेन से डेटा को indexing और क्वेरी करने के लिए एक प्रोटोकॉल है। + +## Polymarket सबग्राफ पर Graph Explorer + +आप [Polymarket Subgraph के पेज पर The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) पर एक इंटरएक्टिव क्वेरी प्लेग्राउंड देख सकते हैं, जहां आप किसी भी क्वेरी का परीक्षण कर सकते हैं। + +![Polymarket Playground](/img/Polymarket-playground.png) + +## Visual Query Editor का उपयोग कैसे करें + +The visual query editor आपको अपने Subgraph से सैंपल क्वेरीज़ का परीक्षण करने में मदद 
करता है। + +आप GraphiQL Explorer का उपयोग करके अपनी GraphQL क्वेरीज को बनाने के लिए उन क्षेत्रों पर क्लिक कर सकते हैं जिन्हें आप चाहते हैं। + +### उदाहरण क्वेरी: Polymarket से शीर्ष 5 उच्चतम भुगतान प्राप्त करें + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### उदाहरण आउटपुट + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema इस Subgraph के लिए [Polymarket के GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql) में परिभाषित है। + +### Polymarket सबग्राफ Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket सबग्राफ एंडपॉइंट [Graph Explorer](https://thegraph.com/explorer) पर उपलब्ध है। + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## अपनी स्वयं की API कुंजी कैसे प्राप्त करें + 
+1. [https://thegraph.com/studio](https://thegraph.com/studio) पर जाएँ और अपना वॉलेट कनेक्ट करें
+2. API कुंजी बनाने के लिए [https://thegraph.com/studio/apikeys/](https://thegraph.com/studio/apikeys/) पर जाएँ
+
+आप इस API कुंजी का उपयोग [Graph Explorer](https://thegraph.com/explorer) पर किसी भी Subgraph में कर सकते हैं, और यह केवल Polymarket तक सीमित नहीं है।
+
+100k क्वेरी प्रति माह निःशुल्क हैं, जो आपके साइड प्रोजेक्ट के लिए बिल्कुल सही है!
+
+## अतिरिक्त Polymarket सबग्राफ
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## API से क्वेरी कैसे करें
+
+आप किसी भी GraphQL क्वेरी को Polymarket एंडपॉइंट पर भेज सकते हैं और JSON प्रारूप में डेटा प्राप्त कर सकते हैं।
+
+निम्नलिखित कोड उदाहरण दिखाता है कि इसी एंडपॉइंट पर क्वेरी कैसे भेजी जाती है।
+
+### नमूना कोड Node.js से
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// GraphQL क्वेरी भेजें
+axios(graphQLRequest)
+  .then((response) => {
+    // यहां प्रतिक्रिया को संभालें
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // किसी भी त्रुटि को संभालें
+    console.error(error);
+  });
+```
+
+### अन्य संसाधन
+
+अपने Subgraph से डेटा क्वेरी करने के बारे में अधिक जानकारी के लिए, [यहाँ पढ़ें](/subgraphs/querying/introduction/)।
+ +अपने Subgraph के प्रदर्शन को बेहतर बनाने के लिए इसे ऑप्टिमाइज़ और कस्टमाइज़ करने के सभी तरीकों का पता लगाने के लिए, [यहाँ Subgraph बनाने के बारे में और पढ़ें](/developing/creating-a-subgraph/)। diff --git a/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx new file mode 100644 index 000000000000..f3e6b588c636 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -0,0 +1,123 @@ +--- +title: कैसे सुरक्षित करें API Keys का उपयोग करके Next.js Server Components +--- + +## Overview + +हम [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) का उपयोग करके अपने dapp के फ्रंटेंड में हमारे API कुंजी के एक्सपोज़र को सही तरीके से सुरक्षित कर सकते हैं। हमारी API कुंजी की सुरक्षा को और बढ़ाने के लिए, हम [अपनी API कुंजी को कुछ सबग्राफ या सबग्राफ Studio में कुछ डोमेन तक सीमित कर सकते हैं।](/cookbook/upgrading-a-subgraph/#securing-your-api-key) + +इस कुकबुक में, हम यह जानेंगे कि Next.js सर्वर कंपोनेंट कैसे बनाया जाए जो एक सबग्राफ से क्वेरी करता है, साथ ही API कुंजी को फ्रंटएंड से छुपाए रखता है। + +### चेतावनी + +- Next.js सर्वर घटक डिनायल ऑफ़ सर्विस अटैक का उपयोग करके API कुंजियों को समाप्त होने से सुरक्षित नहीं कर सकते। +- The Graph Network gateways में सेवा को बाधित करने के हमलों का पता लगाने और उन्हें रोकने की रणनीतियाँ मौजूद हैं, हालांकि server components का उपयोग करने से ये सुरक्षा कमजोर हो सकती है। +- Next.js server components केंद्रीकरण के जोखिम प्रस्तुत करते हैं क्योंकि सर्वर बंद हो सकता है। + +### यह क्यों आवश्यक है + +एक मानक React एप्लिकेशन में, फ्रंटेंड कोड में शामिल API कुंजियाँ क्लाइंट-साइड पर उजागर हो सकती हैं, जिससे सुरक्षा का जोखिम बढ़ता है। जबकि.env फ़ाइलें सामान्यत: उपयोग की जाती हैं, ये कुंजियों की पूरी सुरक्षा नहीं करतीं क्योंकि React का कोड क्लाइंट साइड पर निष्पादित होता है, जो API कुंजी को हेडर में उजागर करता है। Next.js सर्वर घटक इस मुद्दे का समाधान करते हैं द्वारा संवेदनशील कार्यों को 
सर्वर-साइड पर संभालना।
+
+### क्लाइंट-साइड रेंडरिंग का उपयोग करके सबग्राफ से क्वेरी करना
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### आवश्यक शर्तें
+
+- [Subgraph Studio](https://thegraph.com/studio) से एक API कुंजी
+- Next.js और React का बुनियादी ज्ञान
+- एक मौजूदा Next.js प्रोजेक्ट जो [App Router](https://nextjs.org/docs/app) का उपयोग करता है।
+
+## स्टेप-बाय-स्टेप कुकबुक
+
+### चरण 1: पर्यावरण चर सेट करें
+
+1. हमारे Next.js प्रोजेक्ट की जड़ में, एक `.env.local` फ़ाइल बनाएं।
+2. अपनी API कुंजी जोड़ें: `API_KEY=<api-key-here>`.
+
+### चरण 2: एक सर्वर घटक बनाएं
+
+1. हमारी `components` निर्देशिका में, एक नई फ़ाइल बनाएं, `ServerComponent.js`।
+2. प्रदान किए गए उदाहरण कोड का उपयोग करके सर्वर घटक सेट करें।
+
+### चरण 3: सर्वर-साइड API अनुरोध को लागू करें
+
+`ServerComponent.js` में, निम्नलिखित कोड जोड़ें:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+ ) +} +``` + +### चरण 4: सर्वर घटक का उपयोग करें + +1. हमारी पृष्ठ फ़ाइल (जैसे, pages/index.js) में ServerComponent आयात करें। +2. कंपोनेंट को रेंडर करें: + +```javascript +import ServerComponent from './components/ServerComponent' + +export default function Home() { + return ( +
+    <main>
+      <ServerComponent />
+    </main>
+ ) +} +``` + +### चरण 5: हमारा Dapp चलाएँ और परीक्षण करें + +अपने Next.js एप्लिकेशन को npm run dev का उपयोग करके प्रारंभ करें। सत्यापित करें कि सर्वर कंपोनेंट डेटा प्राप्त कर रहा है बिना API कुंजी को उजागर किए। + +![Server-side rendering](/img/api-key-server-side-rendering.png) + +### निष्कर्ष + +Next.js Server Components का उपयोग करके, हमने प्रभावी रूप से API key को क्लाइंट-साइड से छिपा दिया है, जिससे हमारे application की सुरक्षा बढ़ गई है। यह विधि सुनिश्चित करती है कि संवेदनशील संचालन server-side पर संभाले जाएं, जिससे संभावित client-side कमजोरियों से बचाव हो। अंत में, अपनी API कुंजी की सुरक्षा को और बढ़ाने के लिए [other API key security measures](/subgraphs/querying/managing-api-keys/) को अवश्य एक्सप्लोर करें। diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..be71b8199574 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: डेटा को एकत्रित करें उपयोग करके Subgraph Composition +sidebarTitle: एक Composable Subgraph बनाएं जिसमें कई Subgraphs शामिल हों +--- + +Subgraph संयोजन का उपयोग करके विकास समय को तेज़ करें। आवश्यक डेटा के साथ एक मूल Subgraph बनाएं, फिर उसके ऊपर अतिरिक्त Subgraph बनाएं। + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
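+ऊपर बताई गई संरचना manifest स्तर पर कैसे दिख सकती है, इसका एक स्केच नीचे दिया गया है — यहाँ स्रोत सबग्राफ को `kind: subgraph` वाले datasource के रूप में घोषित किया जाता है और हैंडलर entity ट्रिगर पर चलते हैं। दिखाए गए नाम और deployment ID केवल उदाहरणस्वरूप (काल्पनिक) हैं; सटीक फ़ील्ड्स के लिए graph-node v0.37.0 के release notes देखें:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: subgraph # स्रोत सबग्राफ को datasource के रूप में घोषित करता है
+    name: SourceSubgraph # उदाहरणस्वरूप नाम
+    network: mainnet
+    source:
+      address: 'QmSourceSubgraphDeploymentID' # काल्पनिक deployment ID
+      startBlock: 0
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mapping.ts
+      entities:
+        - Block
+      handlers:
+        - handler: handleBlock # स्रोत entity सेव होने पर ट्रिगर होता है
+          entity: Block
+```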
+
+### संयोजन के लाभ
+
+Subgraph संयोजन एक शक्तिशाली विशेषता है जो स्केलिंग के लिए अनुमति देती है:
+
+- पुनः उपयोग करें, मिलाएं, और मौजूदा डेटा को संयोजित करें
+- विकास और क्वेरी को सुव्यवस्थित करें
+- एकाधिक डेटा स्रोतों का उपयोग करें (अधिकतम पांच स्रोत Subgraphs तक)
+- Subgraph की सिंकिंग स्पीड तेज करें
+- त्रुटियों को संभालें और पुनःसिंक को अनुकूलित करें
+
+## आर्किटेक्चर अवलोकन
+
+यह उदाहरण दो Subgraphs की स्थापना के साथ जुड़ा हुआ है:
+
+1. **सोर्स Subgraph**: घटनाओं के डेटा को entities के रूप में ट्रैक करता है।
+2. **आश्रित Subgraph**: स्रोत Subgraph को डेटा स्रोत के रूप में उपयोग करता है।
+
+आप इन्हें `source` और `dependent` डायरेक्टरी में पा सकते हैं।
+
+- **स्रोत Subgraph** एक बेसिक इवेंट-ट्रैकिंग Subgraph है जो संबंधित contract द्वारा एमिट किए गए इवेंट्स को रिकॉर्ड करता है।
+- **निर्भर Subgraph** स्रोत Subgraph को एक डेटा स्रोत के रूप में संदर्भित करता है, और स्रोत से entities का उपयोग ट्रिगर के रूप में करता है।
+
+जबकि **स्रोत Subgraph** एक मानक Subgraph है, आश्रित Subgraph, Subgraph संयोजन सुविधा का उपयोग करता है।
+
+## आवश्यक शर्तें
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources
from the same chain** +- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## शुरू करिये + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### विशिष्टताएँ + +- इस उदाहरण को सरल रखने के लिए, सभी स्रोत Subgraph केवल ब्लॉक हैंडलर का उपयोग करते हैं। हालांकि, वास्तविक वातावरण में, प्रत्येक स्रोत Subgraph विभिन्न स्मार्ट कॉन्ट्रैक्ट्स से डेटा का उपयोग करेगा। +- ये उदाहरण दिखाते हैं कि किसी अन्य Subgraph की schema को कैसे आयात किया जाए और इसकी कार्यक्षमता को बढ़ाया जाए। +- प्रत्येक स्रोत Subgraph को एक विशिष्ट entity के साथ अनुकूलित किया जाता है। +- सभी कमांड आवश्यक डिपेंडेंसीज़ को इंस्टॉल करती हैं, GraphQL स्कीमा के आधार पर कोड जेनरेट करती हैं, Subgraph को बिल्ड करती हैं, और इसे आपकी लोकल Graph Node इंस्टेंस पर डिप्लॉय करती हैं। + +### चरण 1. Block Time साधन Subgraph को डिप्लॉय करें + +यह पहला स्रोत Subgraph प्रत्येक ब्लॉक के लिए ब्लॉक समय की गणना करता है। + +- यह अन्य Subgraphs से schemas को इम्पोर्ट करता है और प्रत्येक `ब्लॉक` के माइन किए जाने के समय को दर्शाने वाले timestamp फ़ील्ड के साथ एक block entity जोड़ता है। +- यह समय-संबंधित ब्लॉकचेन घटनाओं (जैसे, ब्लॉक टाइमस्टैम्प) को सुनता है और इस डेटा को प्रोसेस करके Subgraph की entities को अपडेट करता है। + +इस Subgraph को लोकल रूप से डिप्लॉय करने के लिए, निम्नलिखित कमांड्स चलाएँ: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### चरण 2. 
Block Cost Source Subgraph को डिप्लॉय करें + +यह दूसरा स्रोत Subgraph प्रत्येक ब्लॉक की लागत को इंडेक्स करता है। + +#### मुख्य कार्य + +- यह अन्य Subgraphs से schemas आयात करता है और लागत-संबंधी फ़ील्ड के साथ एक `block` entity जोड़ता है। +- यह ब्लॉकचेन घटनाओं को सुनता है जो लागत (जैसे गैस शुल्क, लेनदेन लागत) से संबंधित होती हैं और इस डेटा को प्रोसेस करके Subgraph की entities को अपडेट करता है। + +इस Subgraph को लोकल रूप से डिप्लॉय करने के लिए, ऊपर दिए गए वही कमांड्स चलाएँ। + +### स्टेप 3. स्रोत Subgraph में ब्लॉक साइज़ परिभाषित करें + +यह तीसरा स्रोत Subgraph प्रत्येक ब्लॉक के आकार को इंडेक्स करता है। इस Subgraph को लोकली डिप्लॉय करने के लिए, ऊपर दिए गए वही कमांड्स चलाएँ। + +#### मुख्य कार्य + +- यह मौजूदा schemas को अन्य Subgraphs से आयात करता है और एक `block` entity जोड़ता है, जिसमें प्रत्येक block के आकार को दर्शाने वाला एक `size` फ़ील्ड होता है। +- यह ब्लॉक साइज़ (जैसे, स्टोरेज या वॉल्यूम) से संबंधित ब्लॉकचेन इवेंट्स को सुनता है और इस डेटा को प्रोसेस करके Subgraph की entities को उचित रूप से अपडेट करता है। + +### चरण 4. ब्लॉक स्टैट्स में मिलाएँ Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> नोट: +> +> - किसी स्रोत Subgraph में कोई भी परिवर्तन संभवतः एक नया deployment ID उत्पन्न करेगा। +> - Subgraph manifest में डेटा स्रोत पते में नवीनतम परिवर्तनों का लाभ उठाने के लिए डिप्लॉयमेंट ID को अपडेट करना सुनिश्चित करें। +> - सभी स्रोत Subgraphs को तब तक तैनात किया जाना चाहिए जब तक कि संयोजित Subgraph तैनात न हो जाए। + +#### मुख्य कार्य + +- यह एक समेकित डेटा मॉडल प्रदान करता है जो सभी प्रासंगिक ब्लॉक मेट्रिक्स को शामिल करता है। +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. 
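+एक बार संयोजित सबग्राफ सिंक हो जाने पर, ऊपर वर्णित समेकित दृश्य को एक ही GraphQL क्वेरी से प्राप्त किया जा सकता है। नीचे केवल एक स्केच है — entity और फ़ील्ड नाम (`blockStats`, `blockTime`, `blockCost`, `blockSize`) उदाहरणस्वरूप हैं और आपकी schema पर निर्भर करेंगे:
+
+```graphql
+{
+  blockStats(first: 5, orderBy: number, orderDirection: desc) {
+    number
+    blockTime # Block Time स्रोत सबग्राफ से
+    blockCost # Block Cost स्रोत सबग्राफ से
+    blockSize # Block Size स्रोत सबग्राफ से
+  }
+}
+```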
+ +## मुख्य निष्कर्ष + +- यह शक्तिशाली टूल आपके Subgraph डेवलपमेंट को स्केल करेगा और आपको कई Subgraph को एक साथ जोड़ने की अनुमति देगा। +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- यह विशेषता स्केलेबिलिटी को अनलॉक करती है, जिससे विकास और रखरखाव की दक्षता सरल हो जाती है। + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, और जानने के लिए देखें [Subgraph advanced features.](/developing/creating/advanced/) +- एग्रीगेशन के बारे में अधिक जानने के लिए, [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations) देखें। diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..1390bfe38bf0 --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: फोर्क्स का उपयोग करके त्वरित और आसान सबग्राफ डिबगिंग +--- + +जैसा कि कई प्रणालियों में बड़े पैमाने पर डेटा प्रोसेसिंग के दौरान होता है, The Graph के Indexers (Graph Nodes) को आपके सबग्राफ को लक्षित ब्लॉकचेन के साथ सिंक करने में काफी समय लग सकता है। डिबगिंग के उद्देश्य से त्वरित परिवर्तन करने और Indexing के लिए आवश्यक लंबे इंतजार के बीच का अंतर अत्यधिक प्रतिकूल होता है, और हम इस समस्या से भली-भांति परिचित हैं। इसी कारण हम **सबग्राफ फॉर्किंग** पेश कर रहे हैं, जिसे [LimeChain](https://limechain.tech/) द्वारा विकसित किया गया है, और इस लेख में मैं आपको दिखाऊंगा कि इस फीचर का उपयोग करके Subgraph डिबगिंग को काफी तेज़ कैसे किया जा सकता है! + +## ठीक है वो क्या है? 
+ +**सबग्राफ फॉर्किंग** वह प्रक्रिया है जिसमें आलसी तरीके से किसी दूसरे सबग्राफ के स्टोर (आमतौर पर एक रिमोट स्टोर) से entities को लाया जाता है। + +सबग्राफ फॉर्किंग आपको अपने असफल सबग्राफ को ब्लॉक X पर डिबग करने की अनुमति देता है बिना ब्लॉक X तक सिंक होने का इंतजार किए। + +## क्या?! कैसे? + +जब आप एक सबग्राफ को रिमोट ग्राफ-नोड पर indexing के लिए डिप्लॉय करते हैं और यह ब्लॉक_X_पर फेल हो जाता है, तो अच्छी खबर यह है कि ग्राफ नोड अभी भी अपनी स्टोर का उपयोग करके GraphQL क्वेरीज़ को सर्व करेगा, जो ब्लॉक_X_ तक सिंक है। यह बहुत बढ़िया है! इसका मतलब है कि हम इस "अप-टू-डेट" स्टोर का लाभ उठा सकते हैं ताकि ब्लॉक_X_को indexing करते समय उत्पन्न होने वाली बग्स को ठीक किया जा सके। + +हम एक विफल हो रहे सबग्राफ को एक दूरस्थ ग्राफ-नोड से fork करने जा रहे हैं, जो निश्चित रूप से ब्लॉक X तक सबग्राफ को इंडेक्स कर चुका है, ताकि डिबग किए जा रहे स्थानीय रूप से तैनात सबग्राफ को ब्लॉक_X_पर इंडेक्सिंग स्थिति का अद्यतन दृश्य प्रदान किया जा सके। + +## कृपया मुझे कुछ कोड दिखाओ! + +सुबग्राफ डिबगिंग पर ध्यान केंद्रित रखने के लिए, चलिए चीजों को सरल रखते हैं और [example-सबग्राफ](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) के साथ चलते हैं, जो Ethereum Gravity स्मार्ट contract को indexing कर रहा है। + +यहां Gravatars को indexing करने के लिए handler परिभाषित किए गए हैं, जिनमें कोई बग नहीं है: + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +अरे, कितनी 
दुर्भाग्यपूर्ण बात है, जब मैं अपना पूरी तरह से सही दिखने वाला सबग्राफ सबग्राफ Studio पर डिप्लॉय करता हूँ, तो यह _"Gravatar not found!"_ त्रुटि के साथ फेल हो जाता है।
+
+फिक्स का प्रयास करने का सामान्य तरीका है:
+
+1. मैपिंग सोर्स में बदलाव करें, जो आपको लगता है कि समस्या का समाधान करेगा (जबकि मुझे पता है कि यह नहीं होगा)।
+2. सबग्राफ को [सबग्राफ Studio](https://thegraph.com/studio/) (या किसी अन्य remote ग्राफ-नोड) पर फिर से डिप्लॉय करें।
+3. इसके सिंक-अप होने की प्रतीक्षा करें।
+4. यदि यह फिर से टूट जाता है तो 1 पर वापस जाएँ, अन्यथा: हुर्रे!
+
+यह वास्तव में एक सामान्य डिबग प्रक्रिया के समान है, लेकिन इसमें एक कदम है जो प्रक्रिया को बहुत धीमा कर देता है: _3. इसके सिंक होने का इंतजार करें._
+
+**सबग्राफ फॉर्किंग** का उपयोग करके, हम मूल रूप से इस चरण को समाप्त कर सकते हैं। यह इस प्रकार दिखता है:
+
+0. लोकल ग्राफ-नोड को **_appropriate fork-base_** सेट करके चालू करें।
+1. मैपिंग सोर्स में परिवर्तन करें, जिसके बारे में आपको लगता है कि इससे समस्या हल हो जाएगी।
+2. स्थानीय ग्राफ-नोड पर डिप्लॉय करें, **_असफल हो रहे सबग्राफ को फोर्क_** करते हुए और समस्या वाले ब्लॉक से प्रारंभ करते हुए।
+3. यदि यह फिर से टूट जाता है, तो 1 पर वापस जाएँ, अन्यथा: हुर्रे!
+
+अब, आपके 2 प्रश्न हो सकते हैं:
+
+1. फोर्क-बेस क्या???
+2. फोर्किंग कौन?!
+
+और मैं उत्तर देता हूं:
+
+1. `fork-base` "मूल" URL है, जिससे जब _subgraph id_ जोड़ी जाती है, तो परिणामी URL (`<fork-base>/<subgraph-id>`) उस सबग्राफ के स्टोर के लिए एक वैध GraphQL एंडपॉइंट बन जाता है।
+2. फोर्किंग आसान है, पसीना बहाने की जरूरत नहीं:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+इसके अलावा, सबग्राफ manifest में `dataSources.source.startBlock` फ़ील्ड को समस्या वाले ब्लॉक की संख्या पर सेट करना न भूलें, ताकि आप गैर-ज़रूरी ब्लॉकों को indexing करने से बच सकें और fork का लाभ उठा सकें!
+
+तो, यहाँ मैं क्या करता हूँ:
+
+1. 
मैंने एक लोकल ग्राफ-नोड स्पिन-अप किया [(यहाँ देखें कैसे करें)](https://github.com/graphprotocol/graph-node#running-a-local-graph-node) जिसमें `fork-base` ऑप्शन को सेट किया: `https://api.thegraph.com/subgraphs/id/`, क्योंकि मैं एक सबग्राफ को फोर्क करने जा रहा हूँ, जो कि पहले मैंने [सबग्राफ Studio](https://thegraph.com/studio/) पर डिप्लॉय किया था और उसमें बग्स थे।
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. सावधानी से निरीक्षण करने के बाद, मुझे पता चलता है कि मेरे दो हैंडलरों में `Gravatar` के `id` प्रतिनिधित्व में असंगति है। जबकि `handleNewGravatar` इसे हेक्स (`event.params.id.toHex()`) में बदलता है, `handleUpdatedGravatar` एक int32 (`event.params.id.toI32()`) का उपयोग करता है, जिससे `handleUpdatedGravatar` "Gravatar not found!" के साथ पैनिक हो जाता है। मैंने दोनों को `id` को हेक्स में बदलने के लिए संशोधित किया है।
+3. मैंने बदलाव करने के बाद अपने सबग्राफ को लोकल Graph Node पर डिप्लॉय किया, **_failing सबग्राफ को fork_** करके और `subgraph.yaml` में `dataSources.source.startBlock` को `6190343` पर सेट किया।
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. मैं लोकल Graph Node द्वारा उत्पन्न लॉग्स की जाँच करता हूँ और, हुर्रे!, सब कुछ काम करता दिख रहा है।
+5. मैं अपने अब बग-मुक्त सबग्राफ को एक दूरस्थ ग्राफ-नोड पर तैनात करता हूँ और खुशी-खुशी जीवन व्यतीत करता हूँ!
(no potatoes tho) diff --git a/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..6f6fcb7ace1e --- /dev/null +++ b/website/src/pages/hi/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: सुरक्षित सबग्राफ कोड जेनरेटर +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) एक कोड जनरेशन टूल है जो किसी प्रोजेक्ट की GraphQL स्कीमा से हेल्पर फंक्शन्स का एक सेट जेनरेट करता है। यह सुनिश्चित करता है कि आपके सबग्राफ में सभी इंटरैक्शन्स पूरी तरह सुरक्षित और संगत हों। + +## सबग्राफ अनक्रैशेबल के साथ एकीकृत क्यों करें? + +- **निरंतर अपटाइम**। गलत तरीके से प्रबंधित entities आपके Subgraph को क्रैश कर सकते हैं, जिससे उन प्रोजेक्ट्स में बाधा आ सकती है जो The Graph पर निर्भर हैं। सहायक फ़ंक्शंस सेट करें ताकि आपका Subgraph "अनक्रैशेबल" बना रहे और व्यापार निरंतरता सुनिश्चित हो। + +- **पूरी तरह सुरक्षित**। Subgraph विकास में आम समस्याएँ यह होती हैं कि अपरिभाषित entities को लोड करने में समस्या आती है, सभी entities के मूल्यों को सेट या इनिशियलाइज़ नहीं किया जाता, और entities को लोड और सेव करने में race conditions हो सकती हैं। सुनिश्चित करें कि entities के साथ सभी इंटरैक्शन पूरी तरह से परमाणु (atomic) हों। + +- **यूज़र कॉन्फ़िगरेबल** डिफ़ॉल्ट मान सेट करें और सुरक्षा जाँच के स्तर को अपनी परियोजना की आवश्यकताओं के अनुसार कॉन्फ़िगर करें। चेतावनी लॉग दर्ज किए जाते हैं, जो यह संकेत देते हैं कि कहाँ पर Subgraph लॉजिक का उल्लंघन हुआ है, जिससे डेटा की सटीकता सुनिश्चित करने के लिए समस्या को ठीक किया जा सके। + +**मुख्य विशेषताएँ** + +- Code generation टूल **सभी** Subgraph प्रकारों को सपोर्ट करता है और उपयोगकर्ताओं को मूल्यों पर उपयुक्त डिफ़ॉल्ट सेट करने के लिए कॉन्फ़िगर करने योग्य बनाता है। यह कोड जनरेशन इस कॉन्फ़िगरेशन का उपयोग उपयोगकर्ता की विशिष्टताओं के अनुसार हेल्पर फ़ंक्शंस उत्पन्न करने के लिए करेगा। + +- फ्रेमवर्क में इकाई वैरिएबल के समूहों के लिए कस्टम, लेकिन सुरक्षित, सेटर फ़ंक्शन बनाने का एक तरीका (कॉन्फिग फ़ाइल के 
माध्यम से) भी शामिल है। इस तरह उपयोगकर्ता के लिए एक पुरानी ग्राफ़ इकाई को लोड/उपयोग करना असंभव है और फ़ंक्शन द्वारा आवश्यक वैरिएबल को सहेजना या सेट करना भूलना भी असंभव है।
+
+- Warning logs को उन लॉग्स के रूप में रिकॉर्ड किया जाता है जो यह संकेत देते हैं कि Subgraph लॉजिक में कहाँ उल्लंघन हुआ है, ताकि समस्या को ठीक करने में मदद मिल सके और डेटा की सटीकता सुनिश्चित की जा सके।
+
+सबग्राफ अनक्रैशेबल को ग्राफ़ CLI codegen कमांड का उपयोग करके एक वैकल्पिक फ़्लैग के रूप में चलाया जा सकता है।
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+अधिक जानने और सुरक्षित Subgraphs विकसित करना शुरू करने के लिए [Subgraph Uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) देखें या यह [वीडियो ट्यूटोरियल](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) देखें।
diff --git a/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..47f32c5c5739
--- /dev/null
+++ b/website/src/pages/hi/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,103 @@
+---
+title: The Graph पर स्थानांतरण
+---
+
+किसी भी प्लेटफ़ॉर्म से जल्दी से अपने सबग्राफ को [The Graph के विकेंद्रीकृत नेटवर्क](https://thegraph.com/networks/) पर अपग्रेड करें।
+
+## The Graph पर स्विच करने के लाभ
+
+- आपके ऐप्स पहले से जिस सबग्राफ का उपयोग कर रहे हैं, उसी का उपयोग करें और बिना किसी डाउनटाइम के माइग्रेशन करें।
+- 100+ Indexers द्वारा समर्थित एक वैश्विक नेटवर्क से विश्वसनीयता बढ़ाएं।
+- सबग्राफ के लिए 24/7 बिजली की तेजी से सहायता प्राप्त करें, एक ऑन-कॉल इंजीनियरिंग टीम के साथ।
+
+## अपने Subgraph को The Graph में 3 आसान कदमों में अपग्रेड करें
+
+1. [अपने स्टूडियो पर्यावरण को सेट करें](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [अपने सबग्राफ को Studio में डिप्लॉय करें](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. 
[अपने Subgraph को The Graph के विकेंद्रीकृत नेटवर्क पर प्रकाशित करें](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. अपने स्टूडियो वातावरण को सेट करें + +### Subgraph Studio में सबग्राफ बनाएँ + +- [Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें। +- "Create a सबग्राफ" पर क्लिक करें। यह अनुशंसा की जाती है कि सबग्राफ का नाम टाइटल केस में रखा जाए: "सबग्राफ Name Chain Name"। + +> नोट: प्रकाशित करने के बाद, सबग्राफ का नाम संपादनीय होगा लेकिन प्रत्येक बार ऑनचेन क्रिया की आवश्यकता होगी, इसलिए इसे सही से नाम दें। + +### Graph CLI स्थापित करें + +आपको [Node.js](https://nodejs.org/) और अपनी पसंद का पैकेज मैनेजर (npm या pnpm) इंस्टॉल करना होगा ताकि आप Graph CLI का उपयोग कर सकें। [सबसे हालिया](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI संस्करण चेक करें। + +अपने लोकल मशीन पर, निम्नलिखित कमांड चलाएँ: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Studio में CLI का उपयोग करके सबग्राफ बनाने के लिए निम्नलिखित कमांड का उपयोग करें: + +```sh +graph init --product subgraph-studio +``` + +### अपने Subgraph को प्रमाणित करें + +The Graph CLI में, `auth` कमांड का उपयोग करें जो Subgraph Studio में देखा गया है: + +```sh +graph auth
``` + +## 2. अपने Subgraph को Studio पर डिप्लॉय करें + +यदि आपके पास आपका सोर्स कोड है, तो आप इसे आसानी से Studio पर डिप्लॉय कर सकते हैं। यदि आपके पास यह नहीं है, तो यहाँ आपके सबग्राफ को डिप्लॉय करने का एक त्वरित तरीका दिया गया है। + +The Graph CLI में, निम्नलिखित कमांड चलाएँ: + +```sh +graph deploy --ipfs-hash +``` + +> **नोट** हर सबग्राफ का एक IPFS हैश (Deployment ID) होता है, जो इस तरह दिखता है: "Qmasdfad...". डिप्लॉय करने के लिए बस इस IPFS हैश का उपयोग करें। आपको एक संस्करण दर्ज करने के लिए कहा जाएगा (जैसे, v0.0.1)। + +## 3.
अपने Subgraph को The Graph Network पर प्रकाशित करें + +![पब्लिश बटन](/img/publish-sub-transfer.png) + +### अपने Subgraph को क्वेरी करें + +> कम से कम 3 Indexers को अपने सबग्राफ की क्वेरी करने के लिए आकर्षित करने के लिए, यह अनुशंसा की जाती है कि आप कम से कम 3,000 GRT क्यूरेट करें। क्यूरेटिंग के बारे में अधिक जानने के लिए, [Curating](/resources/roles/curating/) पर The Graph देखें। + +आप किसी भी सबग्राफ से [querying](/subgraphs/querying/introduction/) करके GraphQL क्वेरी को सबग्राफ के क्वेरी URL एंडपॉइंट पर भेज सकते हैं, जो कि उसके Explorer पेज के शीर्ष पर सबग्राफ Studio में स्थित होता है। + +#### उदाहरण + +[CryptoPunks Ethereum सबग्राफ ](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: + +![Query URL](/img/cryptopunks-screenshot-transfer.png) + +यह सबग्राफ के लिए क्वेरी URL है: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +``` + +अब, आपको केवल अपना API Key भरने की आवश्यकता है ताकि आप इस endpoint पर GraphQL queries भेज सकें। + +### अपनी खुद की API Key प्राप्त करना + +आप Subgraph Studio में पृष्ठ के शीर्ष पर “API Keys” मेनू के तहत API Keys बना सकते हैं: + +![API keys](/img/Api-keys-screenshot.png) + +### सबग्राफ की स्थिति की निगरानी करें + +Once you upgrade, you can access and manage your सबग्राफ in [सबग्राफ Studio](https://thegraph.com/studio/) और सभी सबग्राफ को [The Graph Explorer](https://thegraph.com/networks/) में एक्सप्लोर कर सकते हैं। + +### Additional Resources + +- तेजी से एक नया सबग्राफ बनाने और प्रकाशित करने के लिए, [Quick Start](/subgraphs/quick-start/) देखें। +- अपने सबग्राफ को बेहतर प्रदर्शन के लिए अनुकूलित और कस्टमाइज़ करने के सभी तरीकों का पता लगाने के लिए, [यहाँ और पढ़ें](/developing/creating-a-subgraph/)। diff --git a/website/src/pages/hi/subgraphs/querying/best-practices.mdx b/website/src/pages/hi/subgraphs/querying/best-practices.mdx index 3dd4ad1007d4..b0be1bc4c135 100644 --- 
a/website/src/pages/hi/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/hi/subgraphs/querying/best-practices.mdx @@ -1,10 +1,10 @@ --- -title: सर्वोत्तम प्रथाओं को क्वेरी करना +title: Querying Best Practices --- The Graph ब्लॉकचेन से डेटा क्वेरी करने का एक विकेन्द्रीकृत तरीका प्रदान करता है। इसका डेटा एक GraphQL API के माध्यम से एक्सपोज़ किया जाता है, जिससे इसे GraphQL भाषा के साथ क्वेरी करना आसान हो जाता है। -GraphQL भाषा के आवश्यक नियमों और सर्वोत्तम प्रथाओं को सीखें ताकि आप अपने subgraph को अनुकूलित कर सकें। +GraphQL भाषा के आवश्यक नियम और Best Practices सीखें ताकि आप अपने Subgraph को optimize कर सकें। --- @@ -14,7 +14,7 @@ GraphQL भाषा के आवश्यक नियमों और सर REST API के विपरीत, एक रेखांकन API एक स्कीमा पर बनाया गया है जो परिभाषित करता है कि कौन से प्रश्न किए जा सकते हैं। -For example, a query to get a token using the `token` query will look as follows: +उदाहरण के लिए, `token` क्वेरी का उपयोग करके एक टोकन प्राप्त करने के लिए की गई क्वेरी इस प्रकार होगी: ```graphql query GetToken($id: ID!) { @@ -25,7 +25,7 @@ query GetToken($id: ID!) { } ``` -which will return the following predictable JSON response (_when passing the proper `$id` variable value_): +जो निम्नलिखित पूर्वानुमानित JSON प्रतिक्रिया लौटाएगा (जब उचित `$id` variable value_ पास किया जाएगा): ```json { @@ -36,9 +36,9 @@ which will return the following predictable JSON response (_when passing the pro } ``` -GraphQL queries use the GraphQL language, which is defined upon [a specification](https://spec.graphql.org/). 
+GraphQL क्वेरीज़ GraphQL भाषा का उपयोग करती हैं, जो कि [एक स्पेसिफिकेशन](https://spec.graphql.org/) पर परिभाषित है। -The above `GetToken` query is composed of multiple language parts (replaced below with `[...]` placeholders): +उपरोक्त `GetToken` क्वेरी कई भाषाओं के भागों से बनी है (नीचे `[...]` प्लेसहोल्डर के साथ प्रतिस्थापित): ```graphql query [operationName]([variableName]: [variableType]) { @@ -52,31 +52,32 @@ query [operationName]([variableName]: [variableType]) { ## GraphQL क्वेरी लिखने के नियम -- Each `queryName` must only be used once per operation. -- Each `field` must be used only once in a selection (we cannot query `id` twice under `token`) -- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/subgraphs/explorer/). +- प्रत्येक `queryName` को प्रत्येक ऑपरेशन में केवल एक बार ही उपयोग किया जाना चाहिए। +- प्रत्येक `field` का चयन में केवल एक बार ही उपयोग किया जा सकता है (हम `token` के अंतर्गत id को दो बार क्वेरी नहीं कर सकते)। +- कुछ field या क्वेरी (जैसे tokens) जटिल प्रकार के परिणाम लौटाते हैं, जिनके लिए उप-फ़ील्ड का चयन आवश्यक होता है। जब अपेक्षित हो तब चयन न देना (या जब अपेक्षित न हो - उदाहरण के लिए, id पर चयन देना) एक त्रुटि उत्पन्न करेगा। किसी फ़ील्ड के प्रकार को जानने के लिए, कृपया [Graph Explorer](/subgraphs/explorer/) देखें। - किसी तर्क को असाइन किया गया कोई भी चर उसके प्रकार से मेल खाना चाहिए। - चरों की दी गई सूची में, उनमें से प्रत्येक अद्वितीय होना चाहिए। - सभी परिभाषित चर का उपयोग किया जाना चाहिए। > ध्यान दें: इन नियमों का पालन न करने पर The Graph API से त्रुटि होगी। -For a complete list of rules with code examples, check out [GraphQL Validations guide](/resources/migration-guides/graphql-validations-migration-guide/). 
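To make these query rules concrete, here is a minimal sketch that builds the HTTP request payload for the `GetToken` query shown earlier. The variables travel alongside the static query string rather than being interpolated into it; the `'0x0000'` id is a placeholder, not a real token:

```javascript
// Build a GraphQL POST payload with variables sent separately.
// Keeping the query string static enables server-side caching
// and static analysis by tooling.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  }
}

const query = /* GraphQL */ `
  query GetToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

// '0x0000' is an illustrative placeholder id.
const request = buildGraphQLRequest(query, { id: '0x0000' })
console.log(JSON.parse(request.body).variables.id) // → 0x0000
```

This payload shape (`{ query, variables }`) is what any standard `fetch`-based client ultimately sends.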
+पूरी नियमों की सूची और कोड उदाहरणों के लिए GraphQL Validations guide देखें: +(https://thegraph.com/resources/migration-guides/graphql-validations-migration-guide/) ### एक ग्राफ़क्यूएल एपीआई के लिए एक प्रश्न भेजना GraphQL एक भाषा और प्रथाओं का सेट है जो HTTP के माध्यम से संचालित होता है। -It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). +इसका मतलब है कि आप एक GraphQL API को मानक `fetch` (स्थानीय रूप से या `@whatwg-node/fetch` या `isomorphic-fetch` के माध्यम से) का उपयोग करके क्वेरी कर सकते हैं। -However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: +हालांकि, जैसा कि ["Querying from an Application"](/subgraphs/querying/from-an-application/) में उल्लेख किया गया है, यह अनुशंसित है कि `graph-client` का उपयोग किया जाए, जो निम्नलिखित अद्वितीय विशेषताओं का समर्थन करता है: -- क्रॉस-चेन सबग्राफ हैंडलिंग: एक ही क्वेरी में कई सबग्राफ से पूछताछ -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना +- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - पूरी तरह से टाइप किया गया परिणाम -Here's how to query The Graph with `graph-client`: +The Graph के साथ `graph-client` का उपयोग करके क्वेरी करने का तरीका: ```tsx import { execute } from '../.graphclient' @@ -100,7 +101,7 @@ async function main() { main() ``` -More GraphQL client alternatives are covered in ["Querying from an 
Application"](/subgraphs/querying/from-an-application/). +More GraphQL क्लाइंट विकल्पों को ["Querying from an Application"](/subgraphs/querying/from-an-application/) में कवर किया गया है। --- @@ -122,12 +123,12 @@ query GetToken { ` ``` -While the above snippet produces a valid GraphQL query, **it has many drawbacks**: +जबकि उपरोक्त स्निपेट एक मान्य GraphQL क्वेरी उत्पन्न करता है, **इसमें कई कमियाँ हैं:** -- it makes it **harder to understand** the query as a whole -- developers are **responsible for safely sanitizing the string interpolation** -- not sending the values of the variables as part of the request parameters **prevent possible caching on server-side** -- it **prevents tools from statically analyzing the query** (ex: Linter, or type generations tools) +- यह संपूर्ण क्वेरी को समझना **और कठिन बना देता है।** +- डेवलपर्स स्ट्रिंग **इंटरपोलेशन को सुरक्षित रूप से सैनिटाइज़ करने के लिए जिम्मेदार होते हैं** +- रिक्वेस्ट पैरामीटर्स के रूप में वेरिएबल्स के मान न भेजने से **सर्वर-साइड पर संभावित कैशिंग को रोका जा सकता है** +- यह **टूल्स को क्वेरी का स्टैटिक रूप से विश्लेषण करने से रोकता है** (उदाहरण: Linter या टाइप जेनरेशन टूल्स) इसी कारण, यह अनुशंसा की जाती है कि हमेशा क्वेरीज़ को स्थिर स्ट्रिंग्स के रूप में लिखा जाए। @@ -151,18 +152,18 @@ const result = await execute(query, { }) ``` -Doing so brings **many advantages**: +ऐसा करने से **कई लाभ** होते हैं: -- **Easy to read and maintain** queries -- The GraphQL **server handles variables sanitization** -- **Variables can be cached** at server-level -- **Queries can be statically analyzed by tools** (more on this in the following sections) +- **आसानी से पढ़ने और बनाए रखने योग्य** क्वेरीज़ +- GraphQL **सर्वर वेरिएबल्स की स्वच्छता को संभालता है** +- **वेरिएबल्स को सर्वर-स्तर पर कैश** किया जा सकता है +- **क्वेरीज़ को उपकरणों द्वारा स्थिर रूप से विश्लेषण किया जा सकता है** (अधिक जानकारी निम्नलिखित अनुभागों में) - ### स्टेटिक क्वेरीज़ में फ़ील्ड्स को शर्तानुसार कैसे शामिल करें -You might want to include the `owner` field 
only on a particular condition. +आप `owner` फ़ील्ड को केवल एक विशेष शर्त पर शामिल करना चाह सकते हैं। -For this, you can leverage the `@include(if:...)` directive as follows: +आप इसके लिए `@include(if:...)` निर्देश का उपयोग कर सकते हैं जैसे कि निम्नलिखित: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -185,7 +186,7 @@ const result = await execute(query, { }) ``` -> Note: The opposite directive is `@skip(if: ...)`. +> नोट: विपरीत निर्देश `@skip(if: ...)` है। ### आप जो चाहते हैं वह मांगें @@ -193,10 +194,10 @@ GraphQL अपने “Ask for what you want” टैगलाइन के इस कारण, GraphQL में सभी उपलब्ध फ़ील्ड्स को बिना उन्हें व्यक्तिगत रूप से सूचीबद्ध किए प्राप्त करने का कोई तरीका नहीं है। -- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- GraphQL APIs query करते समय, हमेशा वो fields की query करने की सोचें जो वास्तव में use होंगे। - सुनिश्चित करें कि क्वेरीज़ केवल उतने ही एंटिटीज़ लाएँ जितनी आपको वास्तव में आवश्यकता है। डिफ़ॉल्ट रूप से, क्वेरीज़ एक संग्रह में 100 एंटिटीज़ लाएँगी, जो आमतौर पर उपयोग में लाई जाने वाली मात्रा से अधिक होती है, जैसे कि उपयोगकर्ता को प्रदर्शित करने के लिए। यह न केवल एक क्वेरी में शीर्ष-स्तरीय संग्रहों पर लागू होता है, बल्कि एंटिटीज़ के नेस्टेड संग्रहों पर भी अधिक लागू होता है। -For example, in the following query: +उदाहरण के लिए, निम्नलिखित क्वेरी में: ```graphql query listTokens { @@ -211,15 +212,16 @@ query listTokens { } ``` -The response could contain 100 transactions for each of the 100 tokens. +प्रतिक्रिया में प्रत्येक 100 टोकनों के लिए 100 लेन-देन(transaction) हो सकते हैं। -If the application only needs 10 transactions, the query should explicitly set `first: 10` on the transactions field. +यदि application को केवल 10 लेन-देन(transaction) की आवश्यकता है, तो क्वेरी को लेनदेन फ़ील्ड पर स्पष्ट रूप से first: 10 सेट करना चाहिए। ### एक ही क्वेरी का उपयोग करके कई रिकॉर्ड्स का अनुरोध करें -By default, subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +डिफ़ॉल्ट रूप से, Subgraphs में एक record के लिए singular entity होती है। कई records प्राप्त करने के लिए, plural entities और filter का उपयोग करें: +where: {id_in:[X,Y,Z]} या where: {volume_gt:100000} -Example of inefficient querying: +अप्रभावी क्वेरी करने का उदाहरण: ```graphql query SingleRecord { @@ -236,7 +238,7 @@ query SingleRecord { } ``` -Example of optimized querying: +इष्टतम क्वेरी करने का उदाहरण: ```graphql query ManyRecords { @@ -249,7 +251,7 @@ query ManyRecords { ### एकल अनुरोध में कई क्वेरियों को संयोजित करें। -Your application might require querying multiple types of data as follows: +आपका application निम्नलिखित प्रकार के डेटा को क्वेरी करने की आवश्यकता हो सकती है: - ```graphql import { execute } from "your-favorite-graphql-client" @@ -279,9 +281,9 @@ const [tokens, counters] = Promise.all( ) ``` -While this implementation is totally valid, it will require two round trips with the GraphQL API. +जबकि यह कार्यान्वयन पूरी तरह से मान्य है, यह GraphQL API के साथ दो राउंड ट्रिप की आवश्यकता होगी। -Fortunately, it is also valid to send multiple queries in the same GraphQL request as follows: +सौभाग्य से, एक ही GraphQL अनुरोध में कई क्वेरी भेजना भी मान्य है, जैसा कि नीचे दिया गया है: ```graphql import { execute } from "your-favorite-graphql-client" @@ -302,13 +304,13 @@ query GetTokensandCounters { const { result: { tokens, counters } } = execute(query) ``` -This approach will **improve the overall performance** by reducing the time spent on the network (saves you a round trip to the API) and will provide a **more concise implementation**. +यह तरीका कुल मिलाकर प्रदर्शन में सुधार करेगा क्योंकि यह नेटवर्क पर बिताया गया समय कम करेगा (API के लिए एक राउंड ट्रिप बचाता है) और एक अधिक संक्षिप्त कार्यान्वयन प्रदान करेगा। ### लीवरेज ग्राफक्यूएल फ़्रैगमेंट -A helpful feature to write GraphQL queries is GraphQL Fragment. 
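The plural-entities pattern above (`where: {id_in: [X,Y,Z]}`) combines cleanly with variables, so the ID list never needs to be interpolated into the query string. A small sketch — the `tokens` entity and the IDs are illustrative only:

```javascript
// One query for many records: pass the ID list as a variable
// instead of issuing one request per record.
const query = /* GraphQL */ `
  query ManyRecords($ids: [ID!]!) {
    tokens(where: { id_in: $ids }) {
      id
    }
  }
`

// Placeholder IDs for illustration.
const variables = { ids: ['0xa', '0xb', '0xc'] }
const payload = JSON.stringify({ query, variables })

console.log(JSON.parse(payload).variables.ids.length) // → 3
```

One request fetches all three records, where the naive approach would cost three round trips.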
+GraphQL क्वेरी लिखने में सहायक एक सुविधा है GraphQL Fragment। -Looking at the following query, you will notice that some fields are repeated across multiple Selection-Sets (`{ ... }`): +निम्नलिखित क्वेरी को देखने पर, आप देखेंगे कि कुछ फ़ील्ड्स कई चयन-सेट्स ({ ... }) में दोहराए गए हैं: ```graphql query { @@ -328,12 +330,12 @@ query { } ``` -Such repeated fields (`id`, `active`, `status`) bring many issues: +ऐसे दोहराए गए फ़ील्ड (id, active, status) कई समस्याएँ लाते हैं: - बड़ी क्वेरीज़ पढ़ने में कठिन होती हैं। -- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- जब ऐसे टूल्स का उपयोग किया जाता है जो क्वेरी के आधार पर TypeScript टाइप्स उत्पन्न करते हैं (इस पर अंतिम अनुभाग में और अधिक), newDelegate और oldDelegate दो अलग-अलग इनलाइन इंटरफेस के रूप में परिणत होंगे। -A refactored version of the query would be the following: +एक पुनर्गठित संस्करण का प्रश्न निम्नलिखित होगा: ```graphql query { @@ -357,15 +359,15 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. +GraphQL में fragment का उपयोग पढ़ने की सुविधा बढ़ाएगा (विशेष रूप से बड़े स्तर पर) और बेहतर TypeScript प्रकारों की पीढ़ी का परिणाम देगा। -When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). 
+जब टाइप्स जेनरेशन टूल का उपयोग किया जाता है, तो उपरोक्त क्वेरी एक सही 'DelegateItemFragment' टाइप उत्पन्न करेगी (अंतिम "Tools" अनुभाग देखें)। ### ग्राफकॉल फ्रैगमेंट क्या करें और क्या न करें ### Fragment base must be a type -A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: +एक फ़्रैगमेंट गैर-लागू प्रकार पर आधारित नहीं हो सकता, संक्षेप में, **ऐसे प्रकार पर जिसमें फ़ील्ड नहीं होते हैं।** ```graphql fragment MyFragment on BigInt { @@ -373,11 +375,11 @@ fragment MyFragment on BigInt { } ``` -`BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. +BigInt एक **स्केलर** (मूल "plain" type) है जिसे किसी फ़्रैगमेंट के आधार के रूप में उपयोग नहीं किया जा सकता। #### How to spread a Fragment -Fragments are defined on specific types and should be used accordingly in queries. +फ्रैगमेंट विशिष्ट प्रकारों पर परिभाषित किए जाते हैं और उन्हें क्वेरी में उपयुक्त रूप से उपयोग किया जाना चाहिए। उदाहरण: @@ -400,19 +402,19 @@ fragment VoteItem on Vote { } ``` -`newDelegate` and `oldDelegate` are of type `Transcoder`. +`newDelegate` और `oldDelegate` प्रकार के `Transcoder` हैं। -It is not possible to spread a fragment of type `Vote` here. +यहाँ `Vote` प्रकार के एक खंड को फैलाना संभव नहीं है। -#### Define Fragment as an atomic business unit of data +#### Fragment को data की एक atomic business unit के रूप में define करें। -GraphQL `Fragment`s must be defined based on their usage. +GraphQL `Fragments` को उनके उपयोग के आधार पर परिभाषित किया जाना चाहिए। -For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. +अधिकांश उपयोग मामलों के लिए, एक प्रकार पर एक फ़्रैगमेंट परिभाषित करना (दोहराए गए फ़ील्ड उपयोग या प्रकार निर्माण के मामले में) पर्याप्त होता है। -Here is a rule of thumb for using fragments: +यहाँ एक सामान्य नियम है फ्रैगमेंट्स का उपयोग करने के लिए: -- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- जब समान प्रकार के फ़ील्ड किसी क्वेरी में दोहराए जाते हैं, तो उन्हें` Fragment` में समूहित करें। - जब समान लेकिन भिन्न फ़ील्ड्स को दोहराया जाता है, तो कई फ़्रैगमेंट्स बनाएं, उदाहरण के लिए: ```graphql @@ -436,35 +438,35 @@ fragment VoteWithPoll on Vote { --- -## The Essential Tools +## मूलभूत उपकरण ### ग्राफक्यूएल वेब-आधारित खोजकर्ता -Iterating over queries by running them in your application can be cumbersome. For this reason, don't hesitate to use [Graph Explorer](https://thegraph.com/explorer) to test your queries before adding them to your application. Graph Explorer will provide you a preconfigured GraphQL playground to test your queries. +क्वेरीज़ को अपने application में चलाकर उनका पुनरावर्तन करना कठिन हो सकता है। इसी कारण, अपनी क्वेरीज़ को अपने application में जोड़ने से पहले उनका परीक्षण करने के लिए बिना किसी संकोच के [Graph Explorer](https://thegraph.com/explorer) का उपयोग करें। Graph Explorer आपको एक पूर्व-कॉन्फ़िगर किया हुआ GraphQL प्लेग्राउंड प्रदान करेगा, जहाँ आप अपनी क्वेरीज़ का परीक्षण कर सकते हैं। -If you are looking for a more flexible way to debug/test your queries, other similar web-based tools are available such as [Altair](https://altairgraphql.dev/) and [GraphiQL](https://graphiql-online.com/graphiql). +यदि आप अपनी क्वेरीज़ को डिबग/परखने के लिए एक अधिक लचीला तरीका ढूंढ रहे हैं, तो अन्य समान वेब-आधारित टूल उपलब्ध हैं जैसे [Altair](https://altairgraphql.dev/) और [GraphiQL](https://graphiql-online.com/graphiql) ### ग्राफक्यूएल लाइनिंग -In order to keep up with the mentioned above best practices and syntactic rules, it is highly recommended to use the following workflow and IDE tools. +उपरोक्त सर्वोत्तम प्रथाओं और वाक्य रचना नियमों का पालन करने के लिए, निम्नलिखित वर्कफ़्लो और IDE टूल्स का उपयोग करना अत्यधिक अनुशंसित है। **GraphQL ESLint** -[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) will help you stay on top of GraphQL best practices with zero effort. 
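As a toy illustration of what such linting catches, the duplicate-field rule stated earlier ("each field must be used only once in a selection") can be checked mechanically. This is only a sketch of the principle — real validation should use GraphQL ESLint, which parses the query itself:

```javascript
// Toy check for the "each field only once per selection" rule.
// A real linter parses the document; here we just scan a field list.
function findDuplicateFields(selection) {
  const seen = new Set()
  const duplicates = []
  for (const field of selection) {
    if (seen.has(field)) duplicates.push(field)
    seen.add(field)
  }
  return duplicates
}

// A selection like `token { id owner id }` is invalid: `id` appears twice.
console.log(findDuplicateFields(['id', 'owner', 'id'])) // → [ 'id' ]
```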
+[GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs/getting-started) आपकी बिना किसी अतिरिक्त प्रयास के GraphQL सर्वोत्तम प्रथाओं का पालन करने में मदद करेगा। -[Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: +["operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) कॉन्फ़िगरेशन सेटअप करने से आवश्यक नियम लागू होंगे जैसे:- -- `@graphql-eslint/fields-on-correct-type`: is a field used on a proper type? -- `@graphql-eslint/no-unused variables`: should a given variable stay unused? +- `@graphql-eslint/fields-on-correct-type`: क्या कोई फ़ील्ड सही प्रकार पर उपयोग की गई है? +- `@graphql-eslint/no-unused variables`: क्या दिया गया चर अनुपयोगी रहना चाहिए? - और अधिक! -This will allow you to **catch errors without even testing queries** on the playground or running them in production! +यह आपको बिना प्लेग्राउंड पर क्वेरी का परीक्षण किए या उन्हें प्रोडक्शन में चलाए बिना ही त्रुटियों को पकड़ने की अनुमति देगा! ### आईडीई प्लगइन्स -**VSCode and GraphQL** +**VSCode और GraphQL** -The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: +[GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) आपके विकास वर्कफ़्लो में एक बेहतरीन जोड़ है जिससे आपको यह प्राप्त होता है: - सिंटैक्स हाइलाइटिंग - ऑटो-कंप्लीट सुझाव @@ -472,15 +474,15 @@ The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemNa - निबंध - फ्रैगमेंट्स और इनपुट टाइप्स के लिए परिभाषा पर जाएं। -If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
+यदि आप graphql-eslint का उपयोग कर रहे हैं, तो [ESLint VSCode एक्सटेंशन](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) आपके कोड में त्रुटियों और चेतावनियों को इनलाइन सही तरीके से देखने के लिए आवश्यक है। -**WebStorm/Intellij and GraphQL** +**WebStorm/Intellij और GraphQL** -The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: +[JS GraphQL प्लगइन](https://plugins.jetbrains.com/plugin/8097-graphql/) आपके GraphQL के साथ काम करने के अनुभव को काफी बेहतर बनाएगा, जिससे आपको निम्नलिखित सुविधाएँ मिलेंगी: - सिंटैक्स हाइलाइटिंग - ऑटो-कंप्लीट सुझाव - स्कीमा के खिलाफ मान्यता - निबंध -For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. +इस विषय पर अधिक जानकारी के लिए, [WebStorm लेख](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) देखें, जिसमें इस प्लगइन की सभी प्रमुख विशेषताओं को प्रदर्शित किया गया है। diff --git a/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx b/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx index 0f530ebacba4..848c75c82f9d 100644 --- a/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx +++ b/website/src/pages/hi/subgraphs/querying/distributed-systems.mdx @@ -29,22 +29,22 @@ title: वितरित प्रणाली ## अद्यतन डेटा के लिए मतदान -The Graph provides the `block: { number_gte: $minBlock }` API, which ensures that the response is for a single block equal or higher to `$minBlock`. If the request is made to a `graph-node` instance and the min block is not yet synced, `graph-node` will return an error. If `graph-node` has synced min block, it will run the response for the latest block. 
If the request is made to an Edge & Node Gateway, the Gateway will filter out any Indexers that have not yet synced min block and make the request for the latest block the Indexer has synced. +The Graph block: `{ number_gte: $minBlock }` API प्रदान करता है, जो यह सुनिश्चित करता है कि प्रतिक्रिया एक ही ब्लॉक के लिए होगी जो `$minBlock` के बराबर या उससे अधिक होगा। यदि अनुरोध `ग्राफ-नोड` instances पर किया जाता है और न्यूनतम ब्लॉक अभी तक सिंक नहीं हुआ है, तो graph-node एक त्रुटि लौटाएगा। यदि `ग्राफ-नोड` ने न्यूनतम ब्लॉक को सिंक कर लिया है, तो यह नवीनतम ब्लॉक के लिए प्रतिक्रिया चलाएगा। यदि अनुरोध Edge & Node Gateway को किया जाता है, तो Gateway उन Indexers को फ़िल्टर कर देगा जिन्होंने अभी तक न्यूनतम ब्लॉक को सिंक नहीं किया है और उस नवीनतम ब्लॉक के लिए अनुरोध करेगा जिसे Indexer ने सिंक किया है। -We can use `number_gte` to ensure that time never travels backward when polling for data in a loop. Here is an example: +हम `number_gte` का उपयोग यह सुनिश्चित करने के लिए कर सकते हैं कि डेटा को लूप में पोल करते समय समय कभी पीछे न जाए। यहाँ एक उदाहरण है -```javascript -/// Updates the protocol.paused variable to the latest -/// known value in a loop by fetching it using The Graph. +```javascript +/// एक लूप में नवीनतम ज्ञात मान को लाने के लिए The Graph का उपयोग करके +/// protocol.paused वेरिएबल को अपडेट करता है। async function updateProtocolPaused() { - // It's ok to start with minBlock at 0. The query will be served - // using the latest block available. Setting minBlock to 0 is the - // same as leaving out that argument. + // minBlock को 0 से शुरू करना ठीक है। क्वेरी को + // नवीनतम उपलब्ध ब्लॉक का उपयोग करके परोसा जाएगा। minBlock को 0 सेट करना + // उसी के समान है जैसे इस आर्गुमेंट को छोड़ देना। let minBlock = 0 for (;;) { - // Schedule a promise that will be ready once - // the next Ethereum block will likely be available. + // एक प्रॉमिस शेड्यूल करें जो तभी तैयार होगी जब + // अगला Ethereum ब्लॉक उपलब्ध होने की संभावना होगी। const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) @@ -65,30 +65,31 @@ async function updateProtocolPaused() { const response = await graphql(query, variables) minBlock = response._meta.block.number - // TODO: Do something with the response data here instead of logging it. + // TODO: यहाँ डेटा के साथ कुछ करें, केवल इसे लॉग करने के बजाय। console.log(response.protocol.paused) - // Sleep to wait for the next block + // अगले ब्लॉक की प्रतीक्षा करने के लिए स्लीप करें await nextBlock } } ``` ## संबंधित वस्तुओं का एक सेट लाया जा रहा है एक अन्य उपयोग-मामला एक बड़े सेट को पुनः प्राप्त कर रहा है, या अधिक सामान्यतः, कई अनुरोधों में संबंधित वस्तुओं को पुनः प्राप्त कर रहा है। मतदान के मामले के विपरीत (जहां वांछित स्थिरता समय में आगे बढ़ने के लिए थी), वांछित स्थिरता समय में एक बिंदु के लिए है। -Here we will use the `block: { hash: $blockHash }` argument to pin all of our results to the same block. +यहां हम सभी परिणामों को एक ही ब्लॉक पर पिन करने के लिए `block: { hash: $blockHash }` आर्गुमेंट का उपयोग करेंगे। ```javascript -/// Gets a list of domain names from a single block using pagination +/// पृष्ठांकन का उपयोग करके एकल ब्लॉक से डोमेन नामों की सूची प्राप्त करता है async function getDomainNames() { - // Set a cap on the maximum number of items to pull. + // खींचे जाने वाले अधिकतम आइटम की एक सीमा निर्धारित करें। let pages = 5 const perPage = 1000 - // The first query will get the first page of results and also get the block - // hash so that the remainder of the queries are consistent with the first. + // पहली क्वेरी पहले पृष्ठ के परिणाम प्राप्त करेगी और ब्लॉक हैश भी प्राप्त करेगी + // ताकि शेष क्वेरी पहले के अनुरूप हों। const listDomainsQuery = ` query ListDomains($perPage: Int!)
{ domains(first: $perPage) { @@ -107,9 +108,9 @@ async function getDomainNames() { let blockHash = data._meta.block.hash let query - // Continue fetching additional pages until either we run into the limit of - // 5 pages total (specified above) or we know we have reached the last page - // because the page has fewer entities than a full page. + // अतिरिक्त पृष्ठों को तब तक प्राप्त करना जारी रखें जब तक कि हम या तो 5 पृष्ठों की सीमा तक न पहुँच जाएँ + // (ऊपर निर्दिष्ट) या हमें यह पता चल जाए कि हम अंतिम पृष्ठ तक पहुँच चुके हैं क्योंकि + // पृष्ठ में पूर्ण पृष्ठ की तुलना में कम इकाइयाँ हैं। while (data.domains.length == perPage && --pages) { let lastID = data.domains[data.domains.length - 1].id query = ` @@ -122,7 +123,7 @@ async function getDomainNames() { data = await graphql(query, { perPage, lastID, blockHash }) - // Accumulate domain names into the result + // परिणाम में डोमेन नामों को संचित करें for (domain of data.domains) { result.push(domain.name) } diff --git a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx index 77b510466231..4ebf7cf278db 100644 --- a/website/src/pages/hi/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/hi/subgraphs/querying/from-an-application.mdx @@ -1,53 +1,55 @@ --- title: एक एप्लिकेशन से क्वेरी करना +sidebarTitle: App से Query करना --- -Learn how to query The Graph from your application. +अपने application से The Graph को क्वेरी करना सीखें। -## Getting GraphQL Endpoints +## GraphQL एंडपॉइंट प्राप्त करना -During the development process, you will receive a GraphQL API endpoint at two different stages: one for testing in Subgraph Studio, and another for making queries to The Graph Network in production. 
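The key invariant behind the `number_gte` polling loop above is that the minimum block only ever moves forward, even if a lagging Indexer answers from an older block. That invariant can be isolated as a tiny sketch:

```javascript
// Keep the highest block number seen so far, so a polling loop never
// observes time moving backward (the `number_gte` invariant).
function nextMinBlock(previousMinBlock, responseBlockNumber) {
  return Math.max(previousMinBlock, responseBlockNumber)
}

let minBlock = 0
minBlock = nextMinBlock(minBlock, 1200) // e.g. from response._meta.block.number
minBlock = nextMinBlock(minBlock, 1100) // a stale response must not rewind us
console.log(minBlock) // → 1200
```

Passing the resulting `minBlock` as `$minBlock` on the next iteration guarantees each response is at least as fresh as the last.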
+विकास प्रक्रिया के दौरान, आपको दो अलग-अलग चरणों में एक GraphQL API endpoint प्राप्त होगा: एक परीक्षण के लिए सबग्राफ Studio में, और दूसरा उत्पादन में The Graph Network से क्वेरी करने के लिए। -### Subgraph Studio Endpoint +### सबग्राफ Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +अपने Subgraph को [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/) पर deploy करने के बाद, आपको एक endpoint मिलेगा जो इस प्रकार दिखेगा: ``` https://api.studio.thegraph.com/query/// ``` -> This endpoint is intended for testing purposes **only** and is rate-limited. +> यह एंडपॉइंट **केवल** परीक्षण उद्देश्यों के लिए है और इसकी अनुरोध सीमा निर्धारित है। -### The Graph Network Endpoint +### The Graph Network एंडपॉइंट -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +अपने Subgraph को नेटवर्क पर publish करने के बाद, आपको एक endpoint मिलेगा जो इस प्रकार दिखेगा: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> यह endpoint नेटवर्क पर सक्रिय उपयोग के लिए बनाया गया है। यह आपको विभिन्न **GraphQL client libraries** का उपयोग करके Subgraph से query करने और अपनी application को indexed data से भरने की अनुमति देता है। -## Using Popular GraphQL Clients +## लोकप्रिय GraphQL क्लाइंट्स का उपयोग ### Graph Client -The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: +The Graph अपना खुद का GraphQL क्लाइंट, graph-client प्रदान कर रहा है, जो अद्वितीय विशेषताओं का समर्थन करता है, जैसे: -- क्रॉस-चेन सबग्राफ हैंडलिंग: एक ही क्वेरी में कई सबग्राफ से पूछताछ -- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) -- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) +- Cross-chain Subgraph Handling: एक ही query में multiple Subgraphs से data प्राप्त करना +- [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) +- [स्वचालित Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - पूरी तरह से टाइप किया गया परिणाम -> Note: `graph-client` is integrated with other popular GraphQL clients such as Apollo and URQL, which are compatible with environments such as React, Angular, Node.js, and React Native. As a result, using `graph-client` will provide you with an enhanced experience for working with The Graph. 
+> नोट: `graph-client` अन्य लोकप्रिय GraphQL क्लाइंट जैसे Apollo और URQL के साथ एकीकृत है, जो React, Angular, Node.js और React Native जैसे परिवेशों के अनुकूल हैं। परिणामस्वरूप, `graph-client` का उपयोग करने से The Graph के साथ काम करने के लिए आपको एक उन्नत अनुभव मिलेगा। -### Fetch Data with Graph Client +### Graph Client के साथ डेटा प्राप्त करें -Let's look at how to fetch data from a subgraph with `graph-client`: +आइए देखें कि **`graph-client`** का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है: #### स्टेप 1 -Install The Graph Client CLI in your project: +अपने प्रोजेक्ट में The Graph Client CLI इंस्टॉल करें: ```sh yarn add -D @graphprotocol/client-cli @@ -57,7 +59,7 @@ npm install --save-dev @graphprotocol/client-cli #### चरण 2 -Define your query in a `.graphql` file (or inlined in your `.js` or `.ts` file): +अपनी क्वेरी को एक `.graphql` फ़ाइल में परिभाषित करें (या अपनी `.js` या `.ts` फ़ाइल में इनलाइन करें): ```graphql query ExampleQuery { @@ -86,7 +88,7 @@ query ExampleQuery { #### चरण 3 -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +एक कॉन्फ़िगरेशन फ़ाइल (जिसका नाम `.graphclientrc.yml` हो) बनाएं और इसे The Graph द्वारा प्रदान किए गए आपके GraphQL endpoints की ओर इंगित करें, उदाहरण के लिए: ```yaml # .graphclientrc.yml @@ -104,17 +106,17 @@ documents: - ./src/example-query.graphql ``` -#### Step 4 +#### चरण 4 -Run the following The Graph Client CLI command to generate typed and ready to use JavaScript code: +निम्नलिखित The Graph Client CLI कमांड चलाएँ ताकि टाइप किए गए और उपयोग के लिए तैयार JavaScript कोड उत्पन्न हो सके: ```sh graphclient build ``` -#### Step 5 +#### चरण 5 -Update your `.ts` file to use the generated typed GraphQL documents: +अपनी `.ts` फ़ाइल को उत्पन्न किए गए टाइप किए गए GraphQL दस्तावेज़ों का उपयोग करने के लिए अपडेट करें: ```tsx import React, { useEffect } from 'react' @@ -152,27 +154,27 @@ function App() { export default App ``` -> **Important
Note:** `graph-client` is perfectly integrated with other GraphQL clients such as Apollo client, URQL, or React Query; you can [find examples in the official repository](https://github.com/graphprotocol/graph-client/tree/main/examples). However, if you choose to go with another client, keep in mind that **you won't be able to use Cross-chain Subgraph Handling or Automatic Pagination, which are core features for querying The Graph**. +> **महत्वपूर्ण नोट**: `graph-client` अन्य GraphQL क्लाइंट जैसे Apollo client, URQL, या React Query के साथ पूरी तरह से एकीकृत है; आप [आधिकारिक रिपॉजिटरी में उदाहरण देख सकते हैं](https://github.com/graphprotocol/graph-client/tree/main/examples)। हालाँकि, **यदि आप किसी अन्य क्लाइंट का चयन करते हैं, तो ध्यान रखें कि आप Cross-chain Subgraph Handling या Automatic Pagination का उपयोग नहीं कर पाएंगे, जो The Graph को क्वेरी करने के लिए मुख्य विशेषताएँ हैं।** ### Apollo Client -[Apollo client](https://www.apollographql.com/docs/) is a common GraphQL client on front-end ecosystems. It's available for React, Angular, Vue, Ember, iOS, and Android.
+[Apollo client](https://www.apollographql.com/docs/) एक सामान्य GraphQL क्लाइंट है जो फ्रंट-एंड इकोसिस्टम में उपयोग किया जाता है। यह React, Angular, Vue, Ember, iOS और Android के लिए उपलब्ध है। -Although it's the heaviest client, it has many features to build advanced UI on top of GraphQL: +हालाँकि यह सबसे भारी क्लाइंट है, इसमें कई विशेषताएँ हैं जो GraphQL के ऊपर उन्नत UI बनाने के लिए उपलब्ध हैं: -- Advanced error handling +- उन्नत त्रुटि प्रबंधन - पृष्ठ पर अंक लगाना -- Data prefetching -- Optimistic UI -- Local state management +- डेटा प्रीफेचिंग +- आशावादी UI +- लोकल स्टेट प्रबंधन -### Fetch Data with Apollo Client +### Apollo Client के साथ डेटा प्राप्त करें -Let's look at how to fetch data from a subgraph with Apollo client: +आइए देखें कि **Apollo Client** का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है: #### स्टेप 1 -Install `@apollo/client` and `graphql`: +`@apollo/client` और `graphql` को इंस्टॉल करें: ```sh npm install @apollo/client graphql @@ -180,7 +182,7 @@ npm install @apollo/client graphql #### चरण 2 -Query the API with the following code: +API से निम्नलिखित कोड के साथ क्वेरी करें: ```javascript import { ApolloClient, InMemoryCache, gql } from '@apollo/client' @@ -215,7 +217,7 @@ client #### चरण 3 -To use variables, you can pass in a `variables` argument to the query: +आप वेरिएबल्स का उपयोग करने के लिए, क्वेरी में `variables` आर्गुमेंट पास कर सकते हैं: ```javascript const tokensQuery = ` @@ -246,22 +248,22 @@ client }) ``` -### URQL Overview +### URQL अवलोकन -[URQL](https://formidable.com/open-source/urql/) is available within Node.js, React/Preact, Vue, and Svelte environments, with some more advanced features: +[URQL](https://formidable.com/open-source/urql/) Node.js, React/Preact, Vue और Svelte वातावरण के भीतर उपलब्ध है, जिसमें कुछ अधिक उन्नत सुविधाएँ शामिल हैं: - - Flexible cache system - एक्स्टेंसिबल डिज़ाइन (इसके शीर्ष पर नई क्षमताओं को जोड़ना आसान) - लाइटवेट बंडल (अपोलो क्लाइंट की तुलना में ~ 5x हल्का) - फ़ाइल अपलोड और ऑफ़लाइन मोड के 
लिए समर्थन -### Fetch data with URQL +### URQL के साथ डेटा प्राप्त करें -Let's look at how to fetch data from a subgraph with URQL: +आइए देखें कि **URQL** का उपयोग करके Subgraph से डेटा कैसे प्राप्त किया जाता है: #### स्टेप 1 -Install `urql` and `graphql`: +`urql` और `graphql` को इंस्टॉल करें: ```sh npm install urql graphql @@ -269,7 +271,7 @@ npm install urql graphql #### चरण 2 -Query the API with the following code: +API से निम्नलिखित कोड के साथ क्वेरी करें: ```javascript import { createClient } from 'urql' diff --git a/website/src/pages/hi/subgraphs/querying/graph-client/README.md b/website/src/pages/hi/subgraphs/querying/graph-client/README.md index 416cadc13c6f..1844a10f1970 100644 --- a/website/src/pages/hi/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/hi/subgraphs/querying/graph-client/README.md @@ -14,25 +14,25 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
-| Status | Feature | Notes | +| स्थिति | Feature | Notes | | :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | 
[Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## शुरू करना You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -48,7 +48,7 @@ npm install --save-dev @graphprotocol/client-cli > The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +एक कॉन्फ़िगरेशन फ़ाइल (जिसका नाम `.graphclientrc.yml` हो) बनाएं और इसे आपके GraphQL endpointकी ओर इंगित करें, जो The Graph द्वारा प्रदान किए गए हैं, उदाहरण के लिए: ```yml # .graphclientrc.yml @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### उदाहरण You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/hi/subgraphs/querying/graph-client/live.md b/website/src/pages/hi/subgraphs/querying/graph-client/live.md index e6f726cb4352..624e17162567 100644 --- a/website/src/pages/hi/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/hi/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## शुरू करना Start by adding the following configuration to your `.graphclientrc.yml` file: @@ -12,7 +12,7 @@ plugins: defaultInterval: 1000 ``` -## Usage +## उपयोग Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: diff --git a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx index ecfc90819e64..fd8d40d4c505 100644 --- a/website/src/pages/hi/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/hi/subgraphs/querying/graphql-api.mdx @@ -6,19 +6,19 @@ The Graph में उपयोग किए जाने वाले GraphQL ## GraphQL क्या है? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+[GraphQL](https://graphql.org/learn/) एक API के लिए क्वेरी भाषा है और मौजूदा डेटा के साथ उन क्वेरियों को निष्पादित करने के लिए एक रनटाइम है। The Graph, GraphQL का उपयोग करके Subgraphs से क्वेरी करता है। -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +GraphQL की व्यापक भूमिका को समझने के लिए, [developing](/subgraphs/developing/introduction/) और [creating a Subgraph](/developing/creating-a-subgraph/) की समीक्षा करें। ## GraphQL के साथ क्वेरीज़ -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +आपकी Subgraph schema में `Entities` नामक प्रकारों को परिभाषित किया जाता है। प्रत्येक `Entity` प्रकार के लिए, शीर्ष-स्तरीय `Query` प्रकार पर `entity` और `entities` फ़ील्ड जेनरेट की जाएंगी। -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> ध्यान दें: The Graph का उपयोग करते समय `query` को `graphql` क्वेरी के शीर्ष पर शामिल करने की आवश्यकता नहीं है। ### उदाहरण -Query for a single `Token` entity defined in your schema: +अपने स्कीमा में परिभाषित एकल `Token` entity के लिए क्वेरी करें: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> नोट: जब किसी एकल entity के लिए क्वेरी की जा रही हो, तो `id` फ़ील्ड आवश्यक है, और इसे एक स्ट्रिंग के रूप में लिखा जाना चाहिए। -Query all `Token` entities: +सभी `Token` entities को क्वेरी करें: ```graphql { @@ -42,12 +42,12 @@ Query all `Token` entities: } ``` -### Sorting +### छँटाई (Sorting) जब आप एक संग्रह के लिए क्वेरी कर रहे हों, तो आप: -- Use the `orderBy` parameter to sort by a specific attribute.
-- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- `orderBy` पैरामीटर का उपयोग किसी विशिष्ट गुण द्वारा सॉर्ट करने के लिए करें। +- `orderDirection` का उपयोग सॉर्ट दिशा निर्दिष्ट करने के लिए करें, आरोही के लिए `asc` या अवरोही के लिए `desc`। #### उदाहरण @@ -62,7 +62,7 @@ Query all `Token` entities: #### नेस्टेड इकाई छँटाई के लिए उदाहरण -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) के अनुसार, entities को nested entities के आधार पर क्रमबद्ध किया जा सकता है। निम्नलिखित उदाहरण में टोकन उनके मालिक के नाम के अनुसार क्रमबद्ध किए गए हैं: @@ -77,18 +77,18 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> वर्तमान में, आप `@entity` और `@derivedFrom` फ़ील्ड्स पर एक-स्तरीय गहरे `String` या `ID` प्रकारों द्वारा क्रमबद्ध कर सकते हैं। अफसोस, [इंटरफेस द्वारा एक-स्तरीय गहरे entities पर क्रमबद्ध करना](https://github.com/graphprotocol/graph-node/pull/4058), ऐसे फ़ील्ड्स द्वारा क्रमबद्ध करना जो एरेज़ और नेस्टेड entities हैं, अभी तक समर्थित नहीं है। ### पृष्ठ पर अंक लगाना जब एक संग्रह के लिए क्वेरी की जाती है, तो यह सबसे अच्छा होता है: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
-- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. +- संग्रह की शुरुआत से पेजिनेट करने के लिए `first` पैरामीटर का उपयोग करें। + - डिफ़ॉल्ट सॉर्ट आदेश `ID` के अनुसार आरोही अल्फ़ान्यूमेरिक क्रम में होता है, **न** कि निर्माण समय के अनुसार। +- `skip` पैरामीटर का उपयोग entities को स्किप करने और पेजिनेट करने के लिए करें। उदाहरण के लिए, `first:100` पहले 100 entities दिखाता है और `first:100, skip:100` अगले 100 entities दिखाता है। +- `skip` मानों का उपयोग queries में करने से बचें क्योंकि ये सामान्यतः खराब प्रदर्शन करते हैं। एक बड़ी संख्या में आइटम प्राप्त करने के लिए, पिछले उदाहरण में दिखाए गए अनुसार किसी गुण के आधार पर entities के माध्यम से पेज करना सबसे अच्छा होता है। -#### Example using `first` +#### उदाहरण जो `first` का उपयोग करता है पहले 10 टोकन पूछें: @@ -101,11 +101,11 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection.
+संग्रह के मध्य में स्थित entities के समूहों के लिए queries करने के लिए, `skip` पैरामीटर को `first` पैरामीटर के साथ उपयोग किया जा सकता है, ताकि संग्रह की शुरुआत से निर्धारित संख्या में entities को छोड़ दिया जा सके। -#### Example using `first` and `skip` +#### `first` और `skip` का उपयोग करते हुए उदाहरण -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +कलेक्शन की शुरुआत से 10 स्थानों के बाद 10 `Token` entities को queries करें: ```graphql { @@ -116,7 +116,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### `first` और `id_ge` का उपयोग करते हुए उदाहरण यदि एक क्लाइंट को बड़ी संख्या में एंटिटीज़ पुनर्प्राप्त करने की आवश्यकता है, तो एट्रिब्यूट पर आधारित क्वेरी बनाना और उस एट्रिब्यूट द्वारा फ़िल्टर करना अधिक प्रभावशाली है। उदाहरण के लिए, एक क्लाइंट इस क्वेरी का उपयोग करके बड़ी संख्या में टोकन पुनर्प्राप्त कर सकता है: @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +पहली बार, यह queries को `lastID = ""` के साथ भेजेगा, और subsequent requests के लिए यह `lastID` को पिछले अनुरोध की आखिरी entity के `id` attribute पर सेट करेगा। यह तरीका बढ़ते हुए `skip` मानों का उपयोग करने की तुलना में काफी बेहतर प्रदर्शन करेगा। ### छनन -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter.
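The `lastID` cursor pattern described above can be sketched in client code. This is a hypothetical sketch, not part of any SDK: `executeQuery` stands in for whatever GraphQL client or fetch wrapper you use, and the page size is arbitrary.

```javascript
// Cursor-based pagination over a Subgraph collection using id_gt,
// following the lastID approach described above.
// `executeQuery` is any async function (query, variables) => data,
// e.g. a thin wrapper around fetch or a GraphQL client (assumption).
async function fetchAllTokens(executeQuery, pageSize = 1000) {
  const query = `
    query manyTokens($lastID: String, $first: Int) {
      tokens(first: $first, where: { id_gt: $lastID }) {
        id
        owner
      }
    }`
  const all = []
  let lastID = '' // start before the lexicographically smallest id
  for (;;) {
    const data = await executeQuery(query, { lastID, first: pageSize })
    const page = data.tokens
    all.push(...page)
    if (page.length < pageSize) break // short page => no more entities
    lastID = page[page.length - 1].id // advance the cursor
  }
  return all
}
```

Because each request filters on `id_gt` instead of an ever-growing `skip`, every page costs roughly the same on the indexer side.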
+- आप अपनी क्वेरी में विभिन्न गुणों को फ़िल्टर करने के लिए `where` पैरामीटर का उपयोग कर सकते हैं। +- आप `where` पैरामीटर के भीतर कई मानों पर फ़िल्टर कर सकते हैं। -#### Example using `where` +#### `where` का उपयोग करते हुए उदाहरण -Query challenges with `failed` outcome: +`failed` परिणाम वाली challenges के लिए क्वेरी करें: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +आप मूल्य तुलना के लिए `_gt`, `_lte` जैसे प्रत्ययों का उपयोग कर सकते हैं: #### श्रेणी फ़िल्टरिंग के लिए उदाहरण @@ -168,9 +168,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### ब्लॉक फ़िल्टरिंग के लिए उदाहरण -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. +आप `_change_block(number_gte: Int)` के साथ उन entities को भी फ़िल्टर कर सकते हैं जिन्हें किसी निर्दिष्ट ब्लॉक में या उसके बाद अपडेट किया गया था। -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +यह उपयोगी हो सकता है यदि आप केवल उन entities को लाना चाहते हैं जो बदल गई हैं, उदाहरण के लिए, पिछली बार जब आपने पोल किया था तब से। या वैकल्पिक रूप से, यह जांचने या डिबग करने के लिए उपयोगी हो सकता है कि आपकी Subgraph में entities कैसे बदल रही हैं (यदि इसे एक ब्लॉक फ़िल्टर के साथ जोड़ा जाए, तो आप केवल उन्हीं entities को अलग कर सकते हैं जो एक विशिष्ट ब्लॉक में बदली हैं)। ```graphql { @@ -184,7 +184,7 @@ This can be useful if you are looking to fetch only entities which have changed, #### नेस्टेड इकाई फ़िल्टरिंग के लिए उदाहरण -Filtering on the basis of nested entities is possible in the fields with the `_` suffix.
+नेस्टेड इकाइयों के आधार पर फ़िल्टरिंग उन फ़ील्ड्स में संभव है जिनके अंत में `_` प्रत्यय होता है। यह उपयोगी हो सकता है यदि आप केवल उन संस्थाओं को लाना चाहते हैं जिनके चाइल्ड-स्तरीय निकाय प्रदान की गई शर्तों को पूरा करते हैं। @@ -202,11 +202,11 @@ Filtering on the basis of nested entities is possible in the fields with the `_` #### लॉजिकल ऑपरेटर्स -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) से, आप एक ही `where` आर्गुमेंट में कई पैरामीटर्स को समूहित कर सकते हैं और `and` या `or` ऑपरेटर्स का उपयोग करके एक से अधिक मानदंडों के आधार पर परिणामों को फ़िल्टर कर सकते हैं। -##### `AND` Operator +##### `AND` ऑपरेटर -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome` `succeeded` है और जिनका `number` `100` या उससे अधिक है। ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **सिंटैक्टिक शुगर**: आप `and` ऑपरेटर को हटाकर और कॉमा से अलग की गई उप-अभिव्यक्ति पास करके उपरोक्त क्वेरी को सरल बना सकते हैं। > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### `OR` ऑपरेटर -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+निम्नलिखित उदाहरण उन चुनौतियों को फ़िल्टर करता है जिनका `outcome` `succeeded` है या जिनका `number` `100` या उससे अधिक है। ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **नोट**: queries बनाते समय, `or` ऑपरेटर के उपयोग से होने वाले प्रदर्शन प्रभावों पर विचार करना महत्वपूर्ण है। हालांकि `or` खोज परिणामों को व्यापक बनाने के लिए एक उपयोगी उपकरण हो सकता है, लेकिन इसके कुछ महत्वपूर्ण लागतें भी होती हैं। `or` के साथ मुख्य समस्या यह है कि यह queries को धीमा कर सकता है। इसका कारण यह है कि `or` के उपयोग से डेटाबेस को कई इंडेक्स स्कैन करने पड़ते हैं, जो एक समय-सापेक्ष प्रक्रिया हो सकती है। इन समस्याओं से बचने के लिए, यह अनुशंसा की जाती है कि डेवलपर्स or के बजाय and ऑपरेटर का उपयोग करें जब भी संभव हो। यह अधिक सटीक फ़िल्टरिंग की अनुमति देता है और तेज़, अधिक सटीक queries प्रदान कर सकता है। #### सभी फ़िल्टर @@ -279,19 +279,19 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. 
+> कुछ प्रत्यय केवल विशिष्ट प्रकारों के लिए समर्थित होते हैं। उदाहरण के लिए, `Boolean` केवल `_not`, `_in`, और `_not_in` का समर्थन करता है, लेकिन `_` केवल ऑब्जेक्ट और इंटरफेस प्रकारों के लिए उपलब्ध है। -In addition, the following global filters are available as part of `where` argument: +इसके अलावा, `where` आर्ग्यूमेंट के हिस्से के रूप में निम्नलिखित वैश्विक फ़िल्टर उपलब्ध हैं: ```graphql _change_block(number_gte: Int) ``` -### Time-travel queries +### समय-यात्रा क्वेरी -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +आप न केवल नवीनतम ब्लॉक के लिए, जो डिफ़ॉल्ट होता है, बल्कि अतीत के किसी भी मनमाने ब्लॉक के लिए भी अपनी entities की स्थिति को queries कर सकते हैं। जिस ब्लॉक पर queries होनी चाहिए, उसे या तो उसके ब्लॉक नंबर या उसके ब्लॉक हैश द्वारा निर्दिष्ट किया जा सकता है, इसके लिए queries के शीर्ष स्तर के फ़ील्ड्स में `block` आर्ग्यूमेंट शामिल किया जाता है। -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+ऐसे queries का परिणाम समय के साथ नहीं बदलेगा, यानी किसी निश्चित पिछले ब्लॉक पर queries करने से हमेशा वही परिणाम मिलेगा, चाहे इसे कभी भी निष्पादित किया जाए। इसका एकमात्र अपवाद यह है कि यदि आप किसी ऐसे ब्लॉक पर queries करते हैं जो chain के हेड के बहुत करीब है, तो परिणाम बदल सकता है यदि वह ब्लॉक मुख्य chain पर **नहीं** होता है और chain का पुनर्गठन हो जाता है। एक बार जब किसी ब्लॉक को अंतिम (final) माना जा सकता है, तो queries का परिणाम नहीं बदलेगा।
+यह queries `Challenge` entities और उनसे संबंधित `Application` entities को वापस करेगी, जैसा कि वे दिए गए हैश वाले ब्लॉक को प्रोसेस करने के तुरंत बाद मौजूद थीं। ### पूर्ण पाठ खोज प्रश्न -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields एक अभिव्यक्तिपूर्ण टेक्स्ट खोज API प्रदान करते हैं जिसे Subgraph schema में जोड़ा जा सकता है और अनुकूलित किया जा सकता है। Fulltext search को अपने Subgraph में जोड़ने के लिए [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) देखें। -Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. +फ़ुलटेक्स्ट सर्च क्वेरीज़ में एक आवश्यक फ़ील्ड होता है, `text`, जिसमें सर्च शब्द प्रदान किए जाते हैं। इस `text` सर्च फ़ील्ड में उपयोग करने के लिए कई विशेष फ़ुलटेक्स्ट ऑपरेटर उपलब्ध हैं। पूर्ण पाठ खोज ऑपरेटर: | प्रतीक | ऑपरेटर | Description | | --- | --- | --- | | `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों को फ़िल्टर में संयोजित करने के लिए | | | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | | `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | | `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | +| प्रतीक | ऑपरेटर | Description | +| ------ | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों
को फ़िल्टर में संयोजित करने के लिए | +| | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | +| `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | +| `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | #### उदाहरण -Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields. +`or` ऑपरेटर का उपयोग करके, यह क्वेरी उन ब्लॉग एंटिटीज़ को फ़िल्टर करेगी जिनके पूर्ण-पाठ (fulltext) फ़ील्ड में "anarchism" या "crumpet" में से किसी एक के विभिन्न रूप शामिल हैं। ```graphql { @@ -357,7 +357,7 @@ Using the `or` operator, this query will filter to blog entities with variations } ``` -The `follow by` operator specifies a words a specific distance apart in the fulltext documents. The following query will return all blogs with variations of "decentralize" followed by "philosophy" +`follow by` ऑपरेटर पूर्ण-पाठ दस्तावेज़ों में विशिष्ट दूरी पर स्थित शब्दों को निर्दिष्ट करता है। निम्नलिखित क्वेरी उन सभी ब्लॉगों को लौटाएगी जिनमें "decentralize" के विभिन्न रूपों के बाद "philosophy" आता है।
+अधिक जटिल फ़िल्टर बनाने के लिए फुलटेक्स्ट ऑपरेटरों को मिलाएं। एक prefix खोज ऑपरेटर को follow by के साथ मिलाकर, यह उदाहरण क्वेरी उन सभी ब्लॉग entities से मेल खाएगी जिनमें "lou" से शुरू होने वाले शब्दों के बाद "music" आता है। ```graphql { @@ -391,11 +391,11 @@ Graph Node अपने द्वारा प्राप्त GraphQL क् आपके डेटा स्रोतों का स्कीमा, अर्थात् उपलब्ध प्रश्न करने के लिए संस्थाओं की प्रकार, मान और उनके बीच के संबंध, GraphQL Interface Definition Language (IDL)(https://facebook.github.io/graphql/draft/#sec-Type-System) के माध्यम से परिभाषित किए गए हैं। -GraphQL स्कीमा आम तौर पर queries, subscriptions और mutations के लिए रूट प्रकार परिभाषित करते हैं। The Graph केवल queries का समर्थन करता है। आपके सबग्राफ के लिए रूट Query प्रकार स्वचालित रूप से उस GraphQL स्कीमा से उत्पन्न होता है जो आपके सबग्राफ manifest(/developing/creating-a-subgraph/#components-of-a-subgraph) में शामिल होता है। +GraphQL स्कीमाएँ आमतौर पर queries, subscriptions और mutations के लिए रूट टाइप्स को परिभाषित करती हैं। The Graph केवल queries को सपोर्ट करता है। आपके Subgraph के लिए रूट Query टाइप अपने आप उत्पन्न हो जाता है, जो कि आपके [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph) में शामिल GraphQL स्कीमा से आता है। > ध्यान दें: हमारा एपीआई म्यूटेशन को उजागर नहीं करता है क्योंकि डेवलपर्स से उम्मीद की जाती है कि वे अपने एप्लिकेशन से अंतर्निहित ब्लॉकचेन के खिलाफ सीधे लेन-देन(transaction) जारी करेंगे। -### Entities +### इकाइयां आपके स्कीमा में जिन भी GraphQL प्रकारों में @entity निर्देश होते हैं, उन्हें संस्थाएँ (entities) माना जाएगा और उनमें एक ID फ़ील्ड होना चाहिए। @@ -403,7 +403,7 @@ GraphQL स्कीमा आम तौर पर queries, subscriptions और ### सबग्राफ मेटाडेटा -सभी सबग्राफमें एक स्वचालित रूप से जनरेट किया गया _Meta_ ऑब्जेक्ट होता है, जो Subgraph मेटाडेटा तक पहुँच प्रदान करता है। इसे इस प्रकार क्वेरी किया जा सकता है: +सभी Subgraph में एक स्वचालित रूप से उत्पन्न `_Meta_` ऑब्जेक्ट होता है, जो Subgraph मेटाडाटा तक पहुंच प्रदान करता है। इसे निम्नलिखित तरीके से क्वेरी किया जा सकता है: ```graphQL { @@
-419,14 +419,14 @@ GraphQL स्कीमा आम तौर पर queries, subscriptions और } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +यदि कोई ब्लॉक प्रदान किया जाता है, तो मेटाडेटा उस ब्लॉक के अनुसार होगा, यदि नहीं, तो नवीनतम इंडेक्स किया गया ब्लॉक उपयोग किया जाएगा। यदि प्रदान किया जाता है, तो ब्लॉक को Subgraph के प्रारंभिक ब्लॉक के बाद और सबसे हाल ही में इंडेक्स किए गए ब्लॉक के बराबर या उससे कम होना चाहिए। deployment एक विशिष्ट ID है, जो subgraph.yaml फ़ाइल के IPFS CID के अनुरूप है। -block नवीनतम ब्लॉक के बारे में जानकारी प्रदान करता है (किसी भी ब्लॉक सीमाओं को ध्यान में रखते हुए जो कि \_meta में पास की जाती हैं): +block नवीनतम ब्लॉक के बारे में जानकारी प्रदान करता है (किसी भी ब्लॉक सीमाओं को ध्यान में रखते हुए जो कि _meta में पास की जाती हैं): - हैश: ब्लॉक का हैश - नंबर: ब्लॉक नंबर -- टाइमस्टैम्प: ब्लॉक का टाइमस्टैम्प, यदि उपलब्ध हो (यह वर्तमान में केवल ईवीएम नेटवर्क को इंडेक्स करने वाले सबग्राफ के लिए उपलब्ध है) +- टाइमस्टैम्प: यदि उपलब्ध हो, तो ब्लॉक का टाइमस्टैम्प (यह वर्तमान में केवल EVM नेटवर्क को इंडेक्स करने वाले Subgraphs के लिए उपलब्ध है) -hasIndexingErrors एक बूलियन है जो यह पहचानता है कि क्या सबग्राफ ने किसी पिछले ब्लॉक पर इंडेक्सिंग त्रुटियों का सामना किया था। +`hasIndexingErrors` एक boolean है जो यह पहचानता है कि Subgraph को किसी पिछले block पर Indexing errors का सामना करना पड़ा था। diff --git a/website/src/pages/hi/subgraphs/querying/introduction.mdx b/website/src/pages/hi/subgraphs/querying/introduction.mdx index 2b9f3f02ff49..f18dd5c441ad 100644 --- a/website/src/pages/hi/subgraphs/querying/introduction.mdx +++ b/website/src/pages/hi/subgraphs/querying/introduction.mdx @@ -3,30 +3,31 @@ title: ग्राफ़ को क्वेरी करना sidebarTitle: Introduction --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). 
+तुरंत क्वेरी करना शुरू करने के लिए, [The Graph Explorer](https://thegraph.com/explorer) पर जाएं। -## अवलोकन +## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +जब कोई Subgraph **The Graph Network** पर publish किया जाता है, तो आप **Graph Explorer** में उसके Subgraph details page पर जा सकते हैं और **"Query"** टैब का उपयोग करके प्रत्येक Subgraph के लिए deployed **GraphQL API** को explore कर सकते हैं। ## विशिष्टताएँ -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +The Graph Network पर प्रकाशित प्रत्येक Subgraph का **Graph Explorer** में एक unique query URL होता है, जिससे आप सीधे queries कर सकते हैं। इसे खोजने के लिए, **Subgraph details page** पर जाएं और शीर्ष दाएँ कोने में **"Query"** बटन पर क्लिक करें। -![Query Subgraph Button](/img/query-button-screenshot.png) +![Query सबग्राफ बटन](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![Query सबग्राफ URL](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +आप देखेंगे कि इस क्वेरी URL के लिए एक अद्वितीय API कुंजी का उपयोग करना आवश्यक है। आप अपनी API कुंजियों को [सबग्राफ Studio](https://thegraph.com/studio) में "API Keys" अनुभाग के अंतर्गत बना और प्रबंधित कर सकते हैं। सबग्राफ Studio का उपयोग करने के तरीके के बारे में अधिक जानें [यहाँ](/deploying/subgraph-studio/)। -Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month.
Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). +सबग्राफ Studio उपयोगकर्ता एक निःशुल्क योजना से शुरू करते हैं, जो उन्हें प्रति माह 100,000 क्वेरी करने की अनुमति देती है। अतिरिक्त क्वेरी Growth Plan पर उपलब्ध हैं, जो अतिरिक्त क्वेरी के लिए उपयोग-आधारित मूल्य निर्धारण प्रदान करता है, जिसे क्रेडिट कार्ड या Arbitrum पर GRT के माध्यम से भुगतान किया जा सकता है। आप बिलिंग के बारे में अधिक जान सकते हैं [यहाँ](/subgraphs/billing/)। -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Subgraph की entities को query करने के पूर्ण संदर्भ के लिए [Query API](/subgraphs/querying/graphql-api/) देखें। > -> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. +> Note: यदि आपको Graph Explorer URL पर GET request के साथ 405 errors मिलती हैं, तो कृपया इसके बजाय POST request पर switch करें। ### Additional Resources -- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/). -- To query from an application, click [here](/subgraphs/querying/from-an-application/). -- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
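The POST-only behavior noted above can be exercised with a few lines of Python's standard library. This is a hedged sketch: the API key and Subgraph ID below are placeholders, and the gateway URL follows the `gateway-arbitrum` example endpoint shape shown elsewhere on these pages.

```python
import json
from urllib import request

# Placeholders — substitute your own API key and Subgraph ID from Subgraph Studio.
API_KEY = "YOUR_API_KEY"
SUBGRAPH_ID = "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW"
URL = f"https://gateway-arbitrum.network.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# GraphQL queries go in a JSON POST body; a plain GET against the gateway can return 405.
query = "{ _meta { block { number } hasIndexingErrors } }"
body = json.dumps({"query": query}).encode("utf-8")

# Attaching a body makes urllib issue a POST automatically.
req = request.Request(URL, data=body, headers={"Content-Type": "application/json"})

def run(r: request.Request) -> dict:
    # Sends the request and decodes the JSON response (requires network access and a valid key).
    with request.urlopen(r) as resp:
        return json.loads(resp.read())
```

With a valid key, `run(req)` returns a JSON object whose `data._meta` field reports the latest indexed block and whether indexing errors occurred, as described in the `_meta` section above.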
+- [GraphQL क्वेरी करने के सर्वोत्तम अभ्यास](/subgraphs/querying/best-practices/)। +- application से क्वेरी करने के लिए, [यहाँ](/subgraphs/querying/from-an-application/) क्लिक करें। +- [querying examples](https://github.com/graphprotocol/query-examples/tree/main) देखें। diff --git a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx index 4f36f067d89d..257bce21d38c 100644 --- a/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/hi/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: अपनी एपीआई कुंजियों का प्रबंधन +title: API Keys को प्रबंधित करना --- -## अवलोकन +## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +Subgraphs को query करने के लिए API keys आवश्यक होते हैं। ये यह सुनिश्चित करते हैं कि application services के बीच कनेक्शन वैध और अधिकृत हैं, साथ ही एंड यूज़र और डिवाइस की पहचान को प्रमाणित करते हैं। -### Create and Manage API Keys +### API Keys बनाएं और प्रबंधित करें -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +[Subgraph Studio](https://thegraph.com/studio/) पर जाएं और **API Keys** टैब पर क्लिक करें ताकि आप अपने विशेष Subgraphs के लिए API keys बना और प्रबंधित कर सकें। -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
+"API keys" तालिका मौजूदा API keys को सूचीबद्ध करती है और आपको उन्हें प्रबंधित या हटाने की अनुमति देती है। प्रत्येक कुंजी के लिए, आप इसकी स्थिति, वर्तमान अवधि के लिए लागत, वर्तमान अवधि के लिए खर्च सीमा और कुल क्वेरी संख्या देख सकते हैं। -You can click the "three dots" menu to the right of a given API key to: +आप किसी API key के दाईं ओर स्थित "तीन बिंदु" मेनू पर क्लिक करके निम्न कार्य कर सकते हैं: - Rename API key - Regenerate API key - Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. This limit is per billing period (calendar month). +- Manage spending limit: यह USD में दी गई API key के लिए एक optional monthly spending limit है। यह limit per billing period (calendar month) के लिए है। -### API Key Details +### API Key विवरण -You can click on an individual API key to view the Details page: +Details page देखने के लिए आप individual API key पर click कर सकते हैं: -1. Under the **Overview** section, you can: +1. **अवलोकन** अनुभाग के अंतर्गत, आप: - अपना कुंजी नाम संपादित करें - एपीआई कुंजियों को पुन: उत्पन्न करें - आंकड़ों के साथ एपीआई कुंजी का वर्तमान उपयोग देखें: - प्रश्नों की संख्या - जीआरटी की राशि खर्च की गई -2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: +2.
नीचे **Security** अनुभाग में, आप अपनी पसंद के अनुसार सुरक्षा सेटिंग्स को सक्रिय कर सकते हैं। विशेष रूप से, आप: - अपनी API कुंजी का उपयोग करने के लिए प्राधिकृत डोमेन नाम देखें और प्रबंधित करें - - सबग्राफ असाइन करें जिन्हें आपकी एपीआई कुंजी से पूछा जा सकता है + - अपने API key के साथ जिन Subgraphs को query किया जा सकता है, उन्हें असाइन करें। diff --git a/website/src/pages/hi/subgraphs/querying/python.mdx b/website/src/pages/hi/subgraphs/querying/python.mdx index 22e9b71da321..687a1a693024 100644 --- a/website/src/pages/hi/subgraphs/querying/python.mdx +++ b/website/src/pages/hi/subgraphs/querying/python.mdx @@ -3,9 +3,9 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds एक सहज Python लाइब्रेरी है जो Subgraph को क्वेरी करने के लिए बनाई गई है, जिसे [Playgrounds](https://playgrounds.network/) द्वारा विकसित किया गया है। यह आपको सीधे Python डेटा वातावरण से Subgraph डेटा को कनेक्ट करने की अनुमति देता है, जिससे आप [pandas](https://pandas.pydata.org/) जैसी लाइब्रेरी का उपयोग करके डेटा विश्लेषण कर सकते हैं! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds GraphQL queries के निर्माण के लिए एक सरल Pythonic API प्रदान करता है, pagination जैसे थकाऊ workflows को स्वचालित करता है, और नियंत्रित schema परिवर्तनों के माध्यम से उन्नत users को सशक्त बनाता है। ## शुरू करना @@ -17,24 +17,25 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query.
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +एक बार इंस्टॉल करने के बाद, आप नीचे दी गई क्वेरी के साथ subgrounds का परीक्षण कर सकते हैं। नीचे दिया गया उदाहरण Aave v2 प्रोटोकॉल के लिए एक Subgraph प्राप्त करता है और TVL (Total Value Locked) के आधार पर शीर्ष 5 बाजारों को क्रमबद्ध करता है, उनके नाम और उनका TVL (USD में) चुनता है और डेटा को एक pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) के रूप में लौटाता है। ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Subgraph लोड करें aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Construct the query +# क्वेरी बनाएँ latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Return query to a dataframe + +# क्वेरी को DataFrame में बदलें sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, @@ -45,10 +46,10 @@ sg.query_df([ Subgrounds is built and maintained by the [Playgrounds](https://playgrounds.network/) team and can be accessed on the [Playgrounds docs](https://docs.playgrounds.network/subgrounds). -Since subgrounds has a large feature set to explore, here are some helpful starting places: +चूंकि subgrounds में एक्सप्लोर करने के लिए सुविधाओं का एक बड़ा समूह मौजूद है, इसलिए यहां कुछ उपयोगी शुरुआती स्थान दिए गए हैं: - [Getting Started with Querying](https://docs.playgrounds.network/subgrounds/getting_started/basics/) - - A good first step for how to build queries with subgrounds.
+ - Subgrounds के साथ queries कैसे बनाएं, इसके लिए एक अच्छा पहला कदम। - [Building Synthetic Fields](https://docs.playgrounds.network/subgrounds/getting_started/synthetic_fields/) - A gentle introduction to defining synthetic fields that transform data defined from the schema. - [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/) diff --git a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/hi/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. 
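The trade-off between the two identifiers can be sketched as two endpoint builders. This is illustrative only: the gateway base URL mirrors the example endpoints on this page, while the exact path segments (`subgraphs/id` vs. `deployments/id`) are assumptions to verify against your gateway before use.

```python
# Sketch of the two endpoint shapes discussed above. The API key and IDs
# passed in are placeholders supplied by the caller.
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def subgraph_endpoint(api_key: str, subgraph_id: str) -> str:
    # Resolves to the latest published version; may briefly serve an
    # older version while a newly published one is still syncing.
    return f"{GATEWAY}/{api_key}/subgraphs/id/{subgraph_id}"

def deployment_endpoint(api_key: str, deployment_id: str) -> str:
    # Pins queries to one immutable version (the IPFS hash of the compiled
    # manifest), so the schema cannot change underneath the client.
    return f"{GATEWAY}/{api_key}/deployments/id/{deployment_id}"
```

Pinning with the Deployment ID gives full control over the queried version at the cost of updating the client each release; the Subgraph ID trades that control for always following the latest version.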
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/hi/subgraphs/quick-start.mdx b/website/src/pages/hi/subgraphs/quick-start.mdx index 719252575cc2..cbf3550a3170 100644 --- a/website/src/pages/hi/subgraphs/quick-start.mdx +++ b/website/src/pages/hi/subgraphs/quick-start.mdx @@ -1,25 +1,25 @@ --- -title: जल्दी शुरू +title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +The Graph पर आसानी से एक [सबग्राफ](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) को बनाना, प्रकाशित करना और क्वेरी करना सीखें। -## Prerequisites +## पूर्वावश्यकताएँ - एक क्रिप्टो वॉलेट -- A smart contract address on a [supported network](/supported-networks/) -- [Node.js](https://nodejs.org/) installed -- A package manager of your choice (`npm`, `yarn` or `pnpm`) +- एक स्मार्ट contract पता एक [supported network](/supported-networks/) पर। +- [Node.js](https://nodejs.org/) इंस्टॉल किया गया +- आपकी पसंद का एक पैकेज मैनेजर (`npm`, `yarn` या `pnpm`) -## How to Build a Subgraph +## सबग्राफ कैसे बनाएं -### 1. Create a subgraph in Subgraph Studio +### 1. 
सबग्राफ Studio में एक सबग्राफ बनाएँ - [Subgraph Studio](https://thegraph.com/studio/) पर जाएँ और अपने वॉलेट को कनेक्ट करें। -Subgraph Studio आपको सबग्राफ़ बनाने, प्रबंधित करने, तैनात करने और प्रकाशित करने की सुविधा देता है, साथ ही API कुंजी बनाने और प्रबंधित करने की भी अनुमति देता है। +सबग्राफ Studio आपको Subgraphs बनाने, प्रबंधित करने, तैनात करने और प्रकाशित करने की सुविधा देता है, साथ ही API कुंजी बनाने और प्रबंधित करने की सुविधा भी प्रदान करता है। -"एक सबग्राफ बनाएं" पर क्लिक करें। सबग्राफ का नाम टाइटल केस में रखनाrecommended है: "सबग्राफ नाम चेन नाम"। +"Create a Subgraph" पर क्लिक करें। यह अनुशंसा की जाती है कि सबग्राफ का नाम टाइटल केस में रखा जाए: "Subgraph Name Chain Name"। ### 2. ग्राफ़ सीएलआई स्थापित करें @@ -37,56 +37,56 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. अपना Subgraph इनिशियलाइज़ करें +### 3. अपने सबग्राफ को प्रारंभ करें -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> आप अपने विशिष्ट Subgraph के लिए कमांड [Subgraph Studio](https://thegraph.com/studio/) के Subgraph पेज पर पा सकते हैं। -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +`graph init` कमांड स्वचालित रूप से आपके contract की घटनाओं के आधार पर एक सबग्राफ का खाका तैयार करेगा। -निम्नलिखित आदेश एक मौजूदा अनुबंध से आपके Subgraph को प्रारंभ करता है: +निम्नलिखित कमांड एक मौजूदा contract से आपका सबग्राफ प्रारंभ करता है: ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
+यदि आपका contract उस ब्लॉकस्कैनर पर वेरीफाई किया गया है जहाँ यह डिप्लॉय किया गया है (जैसे [Etherscan](https://etherscan.io/)), तो ABI अपने आप CLI में क्रिएट हो जाएगा। -जब आप अपने subgraph को प्रारंभ करते हैं, CLI आपसे निम्नलिखित जानकारी मांगेगा: +जब आप अपने सबग्राफ को प्रारंभ करते हैं, तो CLI आपसे निम्नलिखित जानकारी मांगेगा: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. 
+- **प्रोटोकॉल**: वह प्रोटोकॉल चुनें जिससे आपका सबग्राफ डेटा को indexing करेगा। +- **सबग्राफ slug**: अपने सबग्राफ के लिए एक नाम बनाएं। आपका सबग्राफ slug आपके सबग्राफ के लिए एक पहचानकर्ता है। +- **निर्देशिका**: अपना सबग्राफ बनाने के लिए एक निर्देशिका चुनें। +- **Ethereum नेटवर्क** (वैकल्पिक): आपको यह निर्दिष्ट करने की आवश्यकता हो सकती है कि आपका Subgraph किस EVM-संगत नेटवर्क से डेटा को इंडेक्स करेगा। +- **contract एड्रेस**: उस स्मार्ट contract एड्रेस को खोजें जिससे आप डेटा क्वेरी करना चाहते हैं। +- **ABI**: यदि ABI स्वतः नहीं भरा जाता है, तो आपको इसे JSON फ़ाइल के रूप में मैन्युअल रूप से इनपुट करना होगा। +- **Start Block**: आपको स्टार्ट ब्लॉक इनपुट करना चाहिए ताकि ब्लॉकचेन डेटा की सबग्राफ indexing को ऑप्टिमाइज़ किया जा सके। स्टार्ट ब्लॉक को खोजने के लिए उस ब्लॉक को ढूंढें जहां आपका contract डिप्लॉय किया गया था। +- **contract का नाम**: अपने contract का नाम दर्ज करें। +- **contract इवेंट्स को entities के रूप में इंडेक्स करें**: इसे true पर सेट करने की सलाह दी जाती है, क्योंकि यह हर उत्सर्जित इवेंट के लिए स्वचालित रूप से आपके सबग्राफ में मैपिंग जोड़ देगा। +- **एक और contract जोड़ें** (वैकल्पिक): आप एक और contract जोड़ सकते हैं। -अपने सबग्राफ को इनिशियलाइज़ करते समय क्या अपेक्षा की जाए, इसके उदाहरण के लिए निम्न स्क्रीनशॉट देखें: +इसका एक उदाहरण देखने के लिए निम्नलिखित स्क्रीनशॉट देखें कि जब आप अपना सबग्राफ इनिशियलाइज़ करते हैं तो क्या अपेक्षा करें: -![Subgraph command](/img/CLI-Example.png) +![सबग्राफ कमांड](/img/CLI-Example.png) -### 4.
Edit your subgraph +### 4. अपना सबग्राफ संपादित करें -पिछले चरण में `init` कमांड एक स्कैफोल्ड Subgraph बनाता है जिसे आप अपने Subgraph को बनाने के लिए प्रारंभिक बिंदु के रूप में उपयोग कर सकते हैं। +`init` कमांड पिछले चरण में एक प्रारंभिक सबग्राफ बनाता है जिसे आप अपने सबग्राफ को बनाने के लिए एक शुरुआती बिंदु के रूप में उपयोग कर सकते हैं। -जब आप Subgraph में बदलाव करते हैं, तो आप मुख्य रूप से तीन फाइलों के साथ काम करेंगे: +सबग्राफ में परिवर्तन करते समय, आप मुख्य रूप से तीन फ़ाइलों के साथ काम करेंगे: -- Manifest (subgraph.yaml) - मेनिफेस्ट परिभाषित करता है कि आपका Subgraph किस डेटा सोर्स को अनुक्रमित करेगा -- Schema (schema.graphql) - ग्राफक्यूएल स्कीमा परिभाषित करता है कि आप Subgraph से कौन सा डेटा प्राप्त करना चाहते हैं +- मैनिफेस्ट (`subgraph.yaml`) - यह निर्धारित करता है कि आपका सबग्राफ किन डेटा स्रोतों को इंडेक्स करेगा। +- Schema (`schema.graphql`) - यह परिभाषित करता है कि आप सबग्राफ से कौन सा डेटा प्राप्त करना चाहते हैं। - असेंबलीस्क्रिप्ट मैपिंग (mapping.ts) - यह वह कोड है जो स्कीमा में परिभाषित इकाई के लिए आपके डेटा सोर्स से डेटा का अनुवाद करता है। -अपने उपग्राफ को लिखने के लिए विस्तृत विवरण के लिए, [सबग्राफ बनाना](/developing/creating-a-subgraph/) देखें। +आपके सबग्राफ को लिखने के विस्तृत विवरण के लिए, [Creating a Subgraph](/developing/creating-a-subgraph/) देखें। -### 5. अपने Subgraph का परीक्षण करें +### 5. अपना Subgraph डिप्लॉय करें -> Remember, deploying is not the same as publishing. +> तैनाती करना प्रकाशन के समान नहीं है। -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+जब आप किसी सबग्राफ को तैनात (deploy) करते हैं, तो आप इसे [सबग्राफ Studio](https://thegraph.com/studio/) पर अपलोड करते हैं, जहाँ आप इसका परीक्षण, स्टेजिंग और समीक्षा कर सकते हैं। तैनात किए गए सबग्राफ का Indexing [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/) द्वारा किया जाता है, जो Edge & Node द्वारा संचालित एक एकल Indexer है, न कि The Graph Network में मौजूद कई विकेंद्रीकृत Indexers द्वारा। एक तैनात (deployed) सबग्राफ का उपयोग निःशुल्क है, यह दर-सीमित (rate-limited) होता है, सार्वजनिक रूप से दृश्य (visible) नहीं होता, और इसे मुख्य रूप से विकास (development), स्टेजिंग और परीक्षण (testing) उद्देश्यों के लिए डिज़ाइन किया गया है। -एक बार आपका सबग्राफ लिखे जाने के बाद, निम्नलिखित कमांड चलाएँ: +एक बार जब आपका सबग्राफ लिखा जा चुका हो, तो निम्नलिखित कमांड चलाएँ: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -अपने सबग्राफ को प्रमाणित और तैनात करें। तैनाती key सबग्राफ स्टूडियो में सबग्राफ पेज पर पाई जा सकती है। +अपने सबग्राफ को प्रमाणित करें और तैनात करें। तैनाती कुंजी सबग्राफ Studio में सबग्राफ के पृष्ठ पर पाई जा सकती है। ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -107,39 +107,39 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +CLI आपसे एक संस्करण लेबल पूछेगा। [semantic versioning](https://semver.org/) का उपयोग करने की दृढ़ अनुशंसा की जाती है, जैसे `0.0.1`। -### 6. अपने Subgraph का परीक्षण करें +### 6.
अपने सबग्राफ की समीक्षा करें -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +यदि आप अपना सबग्राफ प्रकाशित करने से पहले उसका परीक्षण करना चाहते हैं, तो आप [सबग्राफ Studio](https://thegraph.com/studio/) का उपयोग करके निम्नलिखित कर सकते हैं: - एक नमूना क्वेरी चलाएँ। -- अपने Subgraph का विश्लेषण करने के लिए डैशबोर्ड में जानकारी देखें। -- लॉग आपको बताएंगे कि क्या आपके Subgraph में कोई त्रुटि है। एक ऑपरेशनल Subgraph के लॉग इस तरह दिखेंगे: +- अपने डैशबोर्ड में अपने सबग्राफ का विश्लेषण करें ताकि जानकारी की जांच की जा सके। +- डैशबोर्ड पर लॉग्स की जाँच करें ताकि यह देखा जा सके कि आपके सबग्राफ में कोई त्रुटि है या नहीं। एक सक्रिय सबग्राफ के लॉग इस प्रकार दिखेंगे: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. अपने Subgraph को ग्राफ़ के The Graph Network पर प्रकाशित करें +### 7. अपने सबग्राफ को The Graph नेटवर्क पर प्रकाशित करें -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +जब आपका सबग्राफ प्रोडक्शन वातावरण के लिए तैयार हो जाता है, तो आप इसे विकेंद्रीकृत नेटवर्क पर प्रकाशित कर सकते हैं। प्रकाशित करना एक ऑनचेन क्रिया है जो निम्नलिखित करता है: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. 
+- यह आपके सबग्राफ को विकेंद्रीकृत [Indexers](/indexing/overview/) द्वारा The Graph Network पर अनुक्रमित किए जाने के लिए उपलब्ध कराता है। +- यह आपकी दर सीमा को हटा देता है और आपके सबग्राफ को [Graph Explorer](https://thegraph.com/explorer/) में सार्वजनिक रूप से खोजने योग्य और क्वेरी करने योग्य बनाता है। +- यह आपके सबग्राफ को [Curators](/resources/roles/curating/) के लिए उपलब्ध कराता है ताकि वे इसे क्यूरेट कर सकें। -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> आप और अन्य लोग आपके सबग्राफ पर जितना अधिक GRT क्यूरेट करते हैं, उतने ही अधिक Indexers को आपके सबग्राफ को इंडेक्स करने के लिए प्रोत्साहन मिलेगा, जिससे सेवा की गुणवत्ता में सुधार होगा, विलंबता (latency) कम होगी, और आपके सबग्राफ के लिए नेटवर्क रिडंडेंसी बढ़ेगी। #### Subgraph Studio से प्रकाशित -अपने subgraph को प्रकाशित करने के लिए, डैशबोर्ड में Publish बटन पर क्लिक करें। +अपने सबग्राफ को प्रकाशित करने के लिए, डैशबोर्ड में Publish बटन पर क्लिक करें। -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![सबग्राफ Studio पर एक Subgraph प्रकाशित करें](/img/publish-sub-transfer.png) -उस नेटवर्क का चयन करें जिस पर आप अपना Subgraph प्रकाशित करना चाहते हैं। +उस नेटवर्क का चयन करें जिस पर आप अपना सबग्राफ प्रकाशित करना चाहते हैं। #### Publishing from the CLI -Version 0.73.0 के अनुसार, आप अपने subgraph को graph-cli के साथ भी publish कर सकते हैं। +संस्करण 0.73.0 से, आप अपने सबग्राफ को Graph CLI के साथ भी प्रकाशित कर सकते हैं। `graph-cli` खोलें। @@ -157,32 +157,32 @@ graph publish ``` ```` -3. एक विंडो खुलेगी, जो आपको अपनी वॉलेट कनेक्ट करने, मेटाडेटा जोड़ने, और अपने अंतिम Subgraph को आपकी पसंद के नेटवर्क पर डिप्लॉय करने की अनुमति देगी। +3.
एक विंडो खुलेगी, जिससे आप अपना वॉलेट कनेक्ट कर सकते हैं, मेटाडेटा जोड़ सकते हैं और अपने फ़ाइनलाइज़ किए गए सबग्राफ को अपनी पसंद के नेटवर्क पर डिप्लॉय कर सकते हैं। ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +अपने परिनियोजन को अनुकूलित करने के लिए, [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) देखें। -#### Adding signal to your subgraph +#### सिग्नल को अपने Subgraph में जोड़ना -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. Indexers को अपने सबग्राफ से क्वेरी करने के लिए आकर्षित करने हेतु, आपको इसमें GRT क्यूरेशन सिग्नल जोड़ना चाहिए। - - यह कार्रवाई सेवा की गुणवत्ता में सुधार करती है, विलंबता को कम करती है, और आपके Subgraph के लिए नेटवर्क की पुनरावृत्ति और उपलब्धता को बढ़ाती है। + - यह कार्रवाई सेवा की गुणवत्ता में सुधार करती है, विलंबता को कम करती है, और आपके सबग्राफ के लिए नेटवर्क की पुनरावृत्ति और उपलब्धता को बढ़ाती है। 2. यदि इंडेक्सिंग पुरस्कारों के लिए योग्य हैं, तो Indexers संकेतित राशि के आधार पर GRT पुरस्कार प्राप्त करते हैं। - - कम से कम 3,000 GRT का चयन करना अनुशंसित है ताकि 3 Indexer को आकर्षित किया जा सके। Subgraph फ़ीचर उपयोग और समर्थित नेटवर्क के आधार पर पुरस्कार पात्रता की जांच करें। + - यह अनुशंसा की जाती है कि कम से कम 3,000 GRT को क्यूरेट किया जाए ताकि 3 Indexers को आकर्षित किया जा सके। सबग्राफ फीचर उपयोग और समर्थित नेटवर्क के आधार पर पुरस्कार पात्रता की जांच करें। -To learn more about curation, read [Curating](/resources/roles/curating/). +Curation के बारे में और जानने के लिए, [Curating](/resources/roles/curating/) पढ़ें।
-गैस लागत को बचाने के लिए, आप इसे प्रकाशित करते समय अपने Subgraph को उसी लेनदेन में क्यूरेट कर सकते हैं, इस विकल्प का चयन करके: +गैस लागत बचाने के लिए, आप अपने सबग्राफ को उसी लेनदेन में प्रकाशित कर सकते हैं जिसमें आप इसे क्यूरेट कर रहे हैं, बस इस विकल्प का चयन करें: -![Subgraph publish](/img/studio-publish-modal.png) +![सबग्राफ प्रकाशित](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. अपने सबग्राफ से क्वेरी करें -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +अब आप The Graph Network पर अपने सबग्राफ के साथ प्रति माह 100,000 निःशुल्क क्वेरी कर सकते हैं! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +आप अपने सबग्राफ को उसके Query URL पर GraphQL क्वेरी भेजकर क्वेरी कर सकते हैं, जिसे आप Query बटन पर क्लिक करके प्राप्त कर सकते हैं। -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +आपके सबग्राफ से डेटा क्वेरी करने के बारे में अधिक जानकारी के लिए, [Querying The Graph](/subgraphs/querying/introduction/) पढ़ें। diff --git a/website/src/pages/hi/substreams/_meta-titles.json b/website/src/pages/hi/substreams/_meta-titles.json index 6262ad528c3a..83856f5ffbb5 100644 --- a/website/src/pages/hi/substreams/_meta-titles.json +++ b/website/src/pages/hi/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "विकसित करना" } diff --git a/website/src/pages/hi/substreams/developing/dev-container.mdx b/website/src/pages/hi/substreams/developing/dev-container.mdx index bd4acf16eec7..1e265f9ad332 100644 --- a/website/src/pages/hi/substreams/developing/dev-container.mdx +++ b/website/src/pages/hi/substreams/developing/dev-container.mdx @@ -9,9 +9,9 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project.
You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. -## Prerequisites +## पूर्व आवश्यकताएँ - Ensure Docker and VS Code are up-to-date. @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). 
## Deployment Options diff --git a/website/src/pages/hi/substreams/developing/sinks.mdx b/website/src/pages/hi/substreams/developing/sinks.mdx index 18a9c557bef2..d618182e2447 100644 --- a/website/src/pages/hi/substreams/developing/sinks.mdx +++ b/website/src/pages/hi/substreams/developing/sinks.mdx @@ -1,21 +1,21 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. -## अवलोकन +## Overview Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks > Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. active support is provided), but other sinks are community-driven and support can't be guaranteed. - [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. +- [Subgraph](./sps/introduction.mdx): Configure an API to meet your data needs and host it on The Graph Network. - [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. - [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. - [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. 
@@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| नाम | समर्थन | Maintainer | Source Code | +| ---------- | ------ | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| 
Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| नाम | समर्थन | Maintainer | Source Code | +| ---------- | ------ | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/hi/substreams/developing/solana/account-changes.mdx b/website/src/pages/hi/substreams/developing/solana/account-changes.mdx index 0fb6b35739bd..4282ec4c49c5 100644 --- a/website/src/pages/hi/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/hi/substreams/developing/solana/account-changes.mdx @@ -11,13 +11,13 @@ This guide walks you through the process of 
setting up your environment, configu
> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.

-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`.
## शुरू करना -### Prerequisites +### आवश्यक शर्तें Before you begin, ensure that you have the following: diff --git a/website/src/pages/hi/substreams/developing/solana/transactions.mdx b/website/src/pages/hi/substreams/developing/solana/transactions.mdx index c4e038438bba..c298b89b60fe 100644 --- a/website/src/pages/hi/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/hi/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### सबग्राफ 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/hi/substreams/introduction.mdx b/website/src/pages/hi/substreams/introduction.mdx index 627898326c47..0bd1ea21c9f6 100644 --- a/website/src/pages/hi/substreams/introduction.mdx +++ b/website/src/pages/hi/substreams/introduction.mdx @@ -7,13 +7,13 @@ sidebarTitle: Introduction To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). 
-## अवलोकन +## Overview Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/hi/substreams/publishing.mdx b/website/src/pages/hi/substreams/publishing.mdx index 5905f69b0f07..41eed47b59d1 100644 --- a/website/src/pages/hi/substreams/publishing.mdx +++ b/website/src/pages/hi/substreams/publishing.mdx @@ -1,19 +1,19 @@ --- title: Publishing a Substreams Package -sidebarTitle: Publishing +sidebarTitle: प्रकाशित करना --- Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). -## अवलोकन +## Overview ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package -### Prerequisites +### आवश्यक शर्तें - You must have the Substreams CLI installed. - You must have a Substreams package (`.spkg`) that you want to publish. 
@@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. ![success](/img/5_success.png) diff --git a/website/src/pages/hi/substreams/quick-start.mdx b/website/src/pages/hi/substreams/quick-start.mdx index 2a54c6032f1a..c4a0d5be8e23 100644 --- a/website/src/pages/hi/substreams/quick-start.mdx +++ b/website/src/pages/hi/substreams/quick-start.mdx @@ -1,11 +1,11 @@ --- -title: Substreams Quick Start -sidebarTitle: जल्दी शुरू +title: सबस्ट्रीम्स क्विक स्टार्ट +sidebarTitle: Quick Start --- Discover how to utilize ready-to-use substream packages or develop your own. -## अवलोकन +## Overview Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. 
diff --git a/website/src/pages/hi/supported-networks.json b/website/src/pages/hi/supported-networks.json index ccbcdaebb037..4c7477968eac 100644 --- a/website/src/pages/hi/supported-networks.json +++ b/website/src/pages/hi/supported-networks.json @@ -1,5 +1,5 @@ { - "name": "Name", + "name": "नाम", "id": "ID", "subgraphs": "सबग्राफ", "substreams": "सबस्ट्रीम", diff --git a/website/src/pages/hi/supported-networks.mdx b/website/src/pages/hi/supported-networks.mdx index dcf8e9852101..9ddc02928dbf 100644 --- a/website/src/pages/hi/supported-networks.mdx +++ b/website/src/pages/hi/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: समर्थित नेटवर्क hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - सबग्राफ स्टूडियो निर्भर करता है अंतर्निहित प्रौद्योगिकियों की स्थिरता और विश्वसनीयता पर, जैसे JSON-RPC, फायरहोस और सबस्ट्रीम्स एंडपॉइंट्स। - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. 
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/hi/token-api/_meta-titles.json b/website/src/pages/hi/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/hi/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/hi/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. 
The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
diff --git a/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV prices by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/hi/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain.
Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/hi/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/hi/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/hi/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/hi/token-api/faq.mdx b/website/src/pages/hi/token-api/faq.mdx new file mode 100644 index 000000000000..5d8d28b2e970 --- /dev/null +++ b/website/src/pages/hi/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## आम + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
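As a hedged sketch of the header construction the answer above describes — the token value is a placeholder; generate a real Access Token (JWT) from your API key on The Graph Market:

```javascript
// Placeholder — use the Access Token generated from your API key,
// not the API key itself.
const JWT_TOKEN = '<your-jwt-token>';

// The "Bearer " prefix (with the space) is required; omitting it is a
// common cause of 401/403 responses.
function authHeaders(jwt) {
  return {
    Authorization: `Bearer ${jwt}`,
    Accept: 'application/json', // recommended, though JSON is the default
  };
}

// Usage (requires network access; the wallet address is illustrative):
// const res = await fetch(
//   'https://token-api.thegraph.com/balances/evm/<wallet-address>',
//   { headers: authHeaders(JWT_TOKEN) }
// );
```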
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/hi/token-api/mcp/claude.mdx b/website/src/pages/hi/token-api/mcp/claude.mdx
new file mode 100644
index 000000000000..7174103725e8
--- /dev/null
+++ b/website/src/pages/hi/token-api/mcp/claude.mdx
@@ -0,0 +1,58 @@
+---
+title: Using Claude Desktop to Access the Token API via MCP
+sidebarTitle: Claude Desktop
+---
+
+## आवश्यक शर्तें
+
+- [Claude Desktop](https://claude.ai/download) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png)
+
+## विन्यास
+
+Create or edit your `claude_desktop_config.json` file.
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key otherwise look in your navigator if `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/hi/token-api/mcp/cline.mdx b/website/src/pages/hi/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..39d4715e1186 --- /dev/null +++ b/website/src/pages/hi/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## आवश्यक शर्तें + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## विन्यास + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key otherwise look in your navigator if `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/hi/token-api/mcp/cursor.mdx b/website/src/pages/hi/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..d8e9a09816fa --- /dev/null +++ b/website/src/pages/hi/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## आवश्यक शर्तें + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). 
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## विन्यास + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
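Putting the ENOENT fix together: if `which bunx` prints `/home/user/bin/bunx` (the illustrative path used above; your own output will differ), the `mcp.json` entry with the full command path substituted would look like this:

```json
{
  "mcpServers": {
    "mcp-pinax": {
      "command": "/home/user/bin/bunx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
      "env": {
        "ACCESS_TOKEN": ""
      }
    }
  }
}
```

Using an absolute path also pins the Node version that runs `@pinax/mcp`, which helps when multiple Node installations are present.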
diff --git a/website/src/pages/hi/token-api/monitoring/get-health.mdx b/website/src/pages/hi/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/hi/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/hi/token-api/monitoring/get-networks.mdx b/website/src/pages/hi/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/hi/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/hi/token-api/monitoring/get-version.mdx b/website/src/pages/hi/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/hi/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/hi/token-api/quick-start.mdx b/website/src/pages/hi/token-api/quick-start.mdx new file mode 100644 index 000000000000..a381a3c8565c --- /dev/null +++ b/website/src/pages/hi/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Quick Start +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## आवश्यक शर्तें + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/it/about.mdx b/website/src/pages/it/about.mdx index 3060784eac83..62f0bf4d3c61 100644 --- a/website/src/pages/it/about.mdx +++ b/website/src/pages/it/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
![Un grafico che spiega come The Graph utilizza Graph Node per servire le query ai consumatori di dati](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Il flusso segue questi passi: 1. Una dapp aggiunge dati a Ethereum attraverso una transazione su uno smart contract. 2. Lo smart contract emette uno o più eventi durante l'elaborazione della transazione. -3. Graph Node scansiona continuamente Ethereum alla ricerca di nuovi blocchi e dei dati del vostro subgraph che possono contenere. -4. Graph Node trova gli eventi Ethereum per il vostro subgraph in questi blocchi ed esegue i gestori di mappatura che avete fornito. La mappatura è un modulo WASM che crea o aggiorna le entità di dati che Graph Node memorizza in risposta agli eventi Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data they may contain for your Subgraph. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. La dapp effettua query del Graph Node per ottenere dati indicizzati dalla blockchain, utilizzando il [ GraphQL endpoint del nodo](https://graphql.org/learn/). Il Graph Node a sua volta traduce le query GraphQL in query per il suo archivio dati sottostante, al fine di recuperare questi dati, sfruttando le capacità di indicizzazione dell'archivio. La dapp visualizza questi dati in una ricca interfaccia utente per gli utenti finali, che li utilizzano per emettere nuove transazioni su Ethereum. Il ciclo si ripete. ## I prossimi passi -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx index 4b6ef7df03fc..5c4dc7fa3aa3 100644 --- a/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/it/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Sicurezza ereditata da Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 
La comunità di The Graph ha deciso di procedere con Arbitrum l'anno scorso dopo l'esito della discussione [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ Per sfruttare l'utilizzo di The Graph su L2, utilizza il selettore a discesa per ![Selettore a discesa per cambiare a Arbitrum](/img/arbitrum-screenshot-toggle.png) -## In quanto sviluppatore di subgraph, consumatore di dati, Indexer, Curator o Delegator, cosa devo fare ora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Tutto è stato testato accuratamente e un piano di contingenza è in atto per garantire una transizione sicura e senza intoppi. I dettagli possono essere trovati [qui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx index bc5a9ac711c5..0dd870395760 100644 --- a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Gli Strumenti di Trasferimento L2 utilizzano il meccanismo nativo di Arbitrum per inviare messaggi da L1 a L2. Questo meccanismo è chiamato "retryable ticket" e viene utilizzato da tutti i bridge di token nativi, incluso il bridge GRT di Arbitrum. Puoi leggere ulteriori dettagli sui retryable tickets nella [documentazione di Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Quando trasferisci i tuoi asset (subgraph, stake, delegation o curation) su L2, un messaggio viene inviato tramite il bridge GRT di Arbitrum, che crea un "retryable ticket" su L2. Lo strumento di trasferimento include un valore in ETH nella transazione, che viene utilizzato per 1) pagare la creazione del ticket e 2) coprire il costo del gas per eseguire il ticket su L2. Tuttavia, poiché i prezzi del gas potrebbero variare nel tempo fino a quando il ticket non è pronto per l'esecuzione su L2, è possibile che questo tentativo di auto-esecuzione fallisca. Quando ciò accade, il bridge Arbitrum manterrà il "retryable ticket" attivo per un massimo di 7 giorni, e chiunque può riprovare a "riscattare" il ticket (il che richiede un wallet con un po' di ETH trasferiti su Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge, which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, which is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2.
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Questo è ciò che chiamiamo il passaggio "Conferma" in tutti gli strumenti di trasferimento: in molti casi verrà eseguito automaticamente, poiché l'auto-esecuzione ha spesso successo, ma è importante che tu verifichi che sia andato a buon fine. Se non è andato a buon fine e nessuna riprova ha successo entro 7 giorni, il bridge Arbitrum scarterà il "retryable ticket" e i tuoi asset (subgraph, stake, delegation o curation) andranno persi e non potranno essere recuperati. I core devs di The Graph hanno un sistema di monitoraggio per rilevare queste situazioni e cercare di riscattare i ticket prima che sia troppo tardi, ma alla fine è tua responsabilità assicurarti che il trasferimento venga completato in tempo. Se hai difficoltà a confermare la tua transazione, ti preghiamo di contattarci utilizzando [questo modulo](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) e i core devs saranno pronti ad aiutarti. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Traserimento del Subgraph -### Come faccio a trasferire un mio subgraph? +### How do I transfer my Subgraph? -Per fare un trasferimento del tuo subgraph, dovrai completare i seguenti passaggi: +To transfer your Subgraph, you will need to complete the following steps: 1. Inizializza il trasferimento su Ethereum mainnet 2. Aspetta 20 minuti per la conferma -3. Conferma il trasferimento del subgraph su Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Termina la pubblicazione del subgraph su Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Aggiorna l'URL della Query (raccomandato) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days; otherwise, your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Da dove devo inizializzare il mio trasferimento?
-Puoi inizializzare il tuo trasferimento da [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) o dalla pagina di dettaglio di qualsiasi subgraph. Clicca sul bottone "Trasferisci Subgraph" sulla pagina di dettaglio del subgraph e inizia il trasferimento. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Quanto devo aspettare per il completamento del trasferimento del mio subgraph +### How long do I need to wait until my Subgraph is transferred Il tempo di trasferimento richiede circa 20 minuti. Il bridge Arbitrum sta lavorando in background per completare automaticamente il trasferimento. In alcuni casi, i costi del gas potrebbero aumentare e dovrai confermare nuovamente la transazione. -### I miei subgraph saranno ancora rintracciabili dopo averli trasferiti su L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Il tuo subgraph sarà rintracciabile solo sulla rete su cui è stata pubblicata. Ad esempio, se il tuo subgraph è su Arbitrum One, potrai trovarlo solo su Explorer su Arbitrum One e non sarai in grado di trovarlo su Ethereum. Assicurati di avere selezionato Arbitrum One nel tasto in alto nella pagina per essere sicuro di essere sulla rete corretta. Dopo il transfer, il subgraph su L1 apparirà come deprecato. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to confirm you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### Il mio subgraph deve essere pubblicato per poterlo trasferire? +### Does my Subgraph need to be published to transfer it? -Per usufruire dello strumento di trasferimento del subgraph, il tuo subgraph deve già essere pubblicato sulla mainnet di Ethereum e deve possedere alcuni segnali di curation di proprietà del wallet che possiede il subgraph. Se il tuo subgraph non è stato pubblicato, è consigliabile pubblicarlo direttamente su Arbitrum One: le commissioni di gas associate saranno considerevolmente più basse. Se desideri trasferire un subgraph pubblicato ma l'account proprietario non inserito nessun segnale di curation su di esso, puoi segnalare una piccola quantità (ad esempio 1 GRT) da quell'account; assicurati di selezionare il segnale "auto-migrante". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Cosa succede alla versione del mio subgraph sulla mainnet di Ethereum dopo il trasferimento su Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Dopo aver trasferito il tuo subgraph su Arbitrum, la versione sulla mainnet di Ethereum sarà deprecata. Ti consigliamo di aggiornare l'URL della query entro 48 ore. Tuttavia, è previsto un periodo di tolleranza che mantiene funzionante l'URL sulla mainnet in modo che il supporto per eventuali dApp di terze parti possa essere aggiornato. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. 
We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Dopo il trasferimento, devo anche pubblicare di nuovo su Arbitrum? @@ -80,21 +80,21 @@ Dopo la finestra di trasferimento di 20 minuti, dovrai confermare il trasferimen ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is publishing and versioning the same on L2 as Ethereum Ethereum mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Will my subgraph's curation move with my subgraph? +### Will my Subgraph's curation move with my Subgraph? -If you've chosen auto-migrating signal, 100% of your own curation will move with your subgraph to Arbitrum One. All of the subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. 
All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Can I move my subgraph back to Ethereum mainnet after I transfer? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Once transferred, your Ethereum mainnet version of this subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Why do I need bridged ETH to complete my transfer? @@ -206,19 +206,19 @@ To transfer your curation, you will need to complete the following steps: \*If necessary - i.e. you are using a contract address. -### How will I know if the subgraph I curated has moved to L2? +### How will I know if the Subgraph I curated has moved to L2? -When viewing the subgraph details page, a banner will notify you that this subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the subgraph details page of any subgraph that has moved. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. 
You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### What if I do not wish to move my curation to L2? -When a subgraph is deprecated you have the option to withdraw your signal. Similarly, if a subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### How do I know my curation successfully transferred? Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. -### Can I transfer my curation on more than one subgraph at a time? +### Can I transfer my curation on more than one Subgraph at a time? There is no bulk transfer option at this time. @@ -266,7 +266,7 @@ It will take approximately 20 minutes for the L2 transfer tool to complete trans ### Do I have to index on Arbitrum before I transfer my stake? -You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to subgraphs on L2, index them, and present POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Can Delegators move their delegation before I move my indexing stake? 
diff --git a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/it/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. 
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. 
The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. 
Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved.
-## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. 
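The 7-day retry window described in steps 1–3 is worth making concrete. The following is a hypothetical sketch (not protocol code) of checking whether a pending Arbitrum retryable ticket can still be executed; the function and variable names are illustrative only.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: retryable tickets created by the transfer must be
# executed within 7 days of the L1 transaction, or the Subgraph and its
# signaled GRT are lost.
RETRY_WINDOW = timedelta(days=7)

def ticket_still_redeemable(l1_submitted_at: datetime, now: datetime) -> bool:
    # True while the ticket is inside the 7-day execution window
    return now - l1_submitted_at < RETRY_WINDOW

submitted = datetime(2024, 1, 1, 12, 0)
print(ticket_still_redeemable(submitted, datetime(2024, 1, 5, 12, 0)))  # True
print(ticket_still_redeemable(submitted, datetime(2024, 1, 9, 12, 0)))  # False
```

In practice the transfer tool surfaces this for you ("Confirm Transfer"); the sketch only illustrates why acting within 7 days matters.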
@@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio.
As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this, as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2.
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. 
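The query-URL change in Step 5 above is mechanical: only the gateway host and the Subgraph ID change, while the GraphQL request itself stays the same. A minimal sketch of the documented URL shape follows; the API key and Subgraph ID are hypothetical placeholders, and actually sending the request (e.g. with an HTTP client) is left out.

```python
# Sketch of the documented L2 gateway query URL:
# https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]
L2_GATEWAY = "https://arbitrum-gateway.thegraph.com"

def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    # Fill the [api-key] and [l2-subgraph-id] slots from the guide above
    return f"{L2_GATEWAY}/api/{api_key}/subgraphs/id/{l2_subgraph_id}"

# A GraphQL request to this endpoint is a plain JSON POST body; `_meta` is a
# standard field for checking how far the Subgraph has synced.
payload = {"query": "{ _meta { block { number } } }"}

print(l2_query_url("YOUR_API_KEY", "L2_SUBGRAPH_ID"))
# https://arbitrum-gateway.thegraph.com/api/YOUR_API_KEY/subgraphs/id/L2_SUBGRAPH_ID
```

Remember that the L2 Subgraph ID differs from the L1 one, so both slots must be updated when you switch your queries.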
diff --git a/website/src/pages/it/archived/sunrise.mdx b/website/src/pages/it/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/it/archived/sunrise.mdx +++ b/website/src/pages/it/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? 
+### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. 
As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
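The FAQ answers above describe the upgrade Indexer's support policy as two conditions: it keeps serving a Subgraph only while fewer than three other Indexers serve it consistently and the Subgraph has been queried within the last 30 days. A hypothetical sketch of that predicate (not the actual indexer code; the function and parameter names are illustrative):

```python
# Hypothetical sketch of the upgrade Indexer's stated fallback policy.
def upgrade_indexer_supports(other_reliable_indexers: int,
                             days_since_last_query: int) -> bool:
    if other_reliable_indexers >= 3:
        return False  # at least 3 other Indexers serve it; fallback not needed
    if days_since_last_query > 30:
        return False  # no queries in the last 30 days; demand has dried up
    return True

print(upgrade_indexer_supports(1, 5))   # True: still needed as a fallback
print(upgrade_indexer_supports(3, 5))   # False: enough other Indexers
print(upgrade_indexer_supports(0, 45))  # False: not queried recently
```

This matches the FAQ's framing of the upgrade Indexer as an "as needed" fallback rather than a competitor for query volume.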
diff --git a/website/src/pages/it/global.json b/website/src/pages/it/global.json index f0bd80d9715b..c69d5fd49d85 100644 --- a/website/src/pages/it/global.json +++ b/website/src/pages/it/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descrizione", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Descrizione", + "liveResponse": "Live Response", + "example": "Esempio" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/it/index.json b/website/src/pages/it/index.json index 787097b1fbc4..f243894b47b5 100644 --- a/website/src/pages/it/index.json +++ b/website/src/pages/it/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Supported Networks", + "details": "Network Details", + "services": "Services", + "type": "Tipo", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Documentazione", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Billing", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/it/indexing/chain-integration-overview.mdx b/website/src/pages/it/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/it/indexing/chain-integration-overview.mdx +++ b/website/src/pages/it/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/it/indexing/new-chain-integration.mdx b/website/src/pages/it/indexing/new-chain-integration.mdx index e45c4b411010..c401fa57b348 100644 --- a/website/src/pages/it/indexing/new-chain-integration.mdx +++ b/website/src/pages/it/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
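The EVM JSON-RPC checklist in the hunk above lends itself to a quick sanity check. Below is a rough sketch, outside the diff, of how the batch form of `eth_getTransactionReceipt` that Graph Node relies on can be assembled; the helper name and sample hashes are illustrative, not part of `graph-node`.

```python
import json

# Methods the integration checklist above says an EVM RPC node must expose.
REQUIRED_METHODS = [
    "eth_getLogs",
    "eth_getBlockByHash",
    "net_version",
    "eth_getTransactionReceipt",  # must be usable inside a batch request
    "trace_filter",               # limited tracing, optionally required
]

def build_receipt_batch(tx_hashes):
    """Build a JSON-RPC 2.0 *batch* payload: one eth_getTransactionReceipt
    call per transaction hash, as a single array of request objects."""
    return [
        {
            "jsonrpc": "2.0",
            "id": i,
            "method": "eth_getTransactionReceipt",
            "params": [tx_hash],
        }
        for i, tx_hash in enumerate(tx_hashes)
    ]

# Illustrative hashes; a real batch would be POSTed to the RPC endpoint.
payload = build_receipt_batch(["0xaa", "0xbb"])
print(json.dumps(payload)[:80])
```

Batching is the point of that checklist item: receipts for a whole block arrive in one round trip instead of one HTTP request per transaction.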
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/it/indexing/overview.mdx b/website/src/pages/it/indexing/overview.mdx index 7a4a5525e2d0..4fd39637b806 100644 --- a/website/src/pages/it/indexing/overview.mdx +++ b/website/src/pages/it/indexing/overview.mdx @@ -7,7 +7,7 @@ Gli Indexer sono operatori di nodi di The Graph Network che fanno staking di Gra Il GRT che viene fatto staking nel protocollo è soggetto a un periodo di scongelamento e può essere ridotto se gli Indexer sono malintenzionati e servono dati errati alle applicazioni o se indicizzano in modo errato. Gli Indexer guadagnano anche ricompense per le stake delegate dai Delegator, per contribuire alla rete. -Gli Indexer selezionano i subgraph da indicizzare in base al segnale di curation del subgraph, dove i Curator fanno staking di GRT per indicare quali subgraph sono di alta qualità e dovrebbero essere prioritari. I consumatori (ad esempio, le applicazioni) possono anche impostare i parametri per cui gli Indexer elaborano le query per i loro subgraph e stabilire le preferenze per le tariffe di query.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. 
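To make step 1 of the pending-rewards check concrete: a client POSTs the `indexerAllocations` query to the mainnet network Subgraph and reads allocation IDs out of the JSON response. A minimal sketch follows; the variable-based query form, the helper name, and the stubbed response are illustrative assumptions, since the full query body is abbreviated in the diff.

```python
import json

# Abbreviated, assumed form of the indexerAllocations query from the docs.
ALLOCATIONS_QUERY = """
query indexerAllocations($indexer: String!) {
  indexer(id: $indexer) {
    allocations { id }
  }
}
"""

def active_allocation_ids(response_body):
    """Extract active allocation IDs from a GraphQL JSON response body."""
    data = json.loads(response_body)
    return [a["id"] for a in data["data"]["indexer"]["allocations"]]

# Stub response in the shape a GraphQL endpoint returns.
sample = '{"data": {"indexer": {"allocations": [{"id": "0x01"}, {"id": "0x02"}]}}}'
print(active_allocation_ids(sample))  # ['0x01', '0x02']
```

Each ID returned this way can then be fed to the RewardsManager's read-only `getRewards` call mentioned above to read pending rewards per allocation.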
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. 
-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
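The threshold behavior just described can be sketched in a few lines. This is an illustration of the rule semantics only, not the indexer-agent source: the comparison logic is simplified and the max-style fields (`maxSignal`, `maxAllocationPercentage`) are left out for brevity.

```python
# Sketch of decisionBasis handling: "always"/"never" short-circuit, and
# "rules" selects a deployment when any non-null minimum threshold from
# the rule is exceeded by the deployment's network values.
def should_index(rule, network_values):
    basis = rule.get("decisionBasis")
    if basis == "always":
        return True
    if basis == "never":
        return False
    # basis == "rules": compare non-null thresholds against network data
    for field in ("minStake", "minSignal", "minAverageQueryFees"):
        threshold = rule.get(field)
        if threshold is not None and network_values.get(field, 0) > threshold:
            return True
    return False

# The example above: a global rule with a minStake of 5 (GRT).
global_rule = {"decisionBasis": "rules", "minStake": 5}
print(should_index(global_rule, {"minStake": 12}))  # deployment with 12 GRT allocated
```

A deployment with 12 GRT of allocated stake clears the 5 GRT threshold and is chosen; one with 3 GRT is skipped unless another non-null threshold matches.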
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that perform day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters: `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/it/indexing/supported-network-requirements.mdx b/website/src/pages/it/indexing/supported-network-requirements.mdx index 7eed955d1013..58979cc9f911 100644 --- a/website/src/pages/it/indexing/supported-network-requirements.mdx +++ b/website/src/pages/it/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| La rete | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| La rete | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/it/indexing/tap.mdx b/website/src/pages/it/indexing/tap.mdx index 8604a92b41e7..384ed571abd5 100644 --- a/website/src/pages/it/indexing/tap.mdx +++ b/website/src/pages/it/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Panoramica -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/it/indexing/tooling/graph-node.mdx b/website/src/pages/it/indexing/tooling/graph-node.mdx index b77c651c0bd2..3fef49ce3bf5 100644 --- a/website/src/pages/it/indexing/tooling/graph-node.mdx +++ b/website/src/pages/it/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node è il componente che indica i subgraph e rende i dati risultanti disponibili per l'interrogazione tramite API GraphQL. È quindi centrale per lo stack degli indexer, ed inoltre il corretto funzionamento di Graph Node è cruciale per il buon funzionamento di un indexer di successo. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### Database PostgreSQL -È l'archivio principale del Graph Node, in cui vengono memorizzati i dati dei subgraph, i metadati sui subgraph e i dati di rete che non dipendono dal subgraph, come la cache dei blocchi e la cache eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Clienti della rete Per indicizzare una rete, Graph Node deve accedere a un cliente di rete tramite un'API JSON-RPC compatibile con EVM. Questo RPC può connettersi a un singolo cliente o può essere una configurazione più complessa che bilancia il carico su più clienti. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
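As a rough illustration of the extra RPC functionality involved, the sketch below assembles (but does not send) the JSON-RPC payloads behind the two capabilities just mentioned: an `eth_call` with an EIP-1898 block parameter (a block referenced by hash, which needs an archive node), and a `trace_filter` request from the OpenEthereum/Erigon trace module. The addresses, block hash, and block range are placeholders, not real values.

```python
import json

# Sketch: the shape of the JSON-RPC requests Graph Node's indexing features rely on.
# Nothing here is sent over the network; values are placeholders for illustration.

def rpc_payload(method: str, params: list, request_id: int = 1) -> str:
    # Serialize a JSON-RPC 2.0 request body.
    return json.dumps({"jsonrpc": "2.0", "method": method, "params": params, "id": request_id})

# EIP-1898: the block parameter may be an object naming a block hash instead of a
# number, which requires a node able to serve state at arbitrary historical blocks.
eth_call = rpc_payload("eth_call", [
    {"to": "0x0000000000000000000000000000000000000001", "data": "0x"},
    {"blockHash": "0x" + "ab" * 32, "requireCanonical": True},
])

# trace_filter: needed for callHandlers and call-filtered blockHandlers.
trace_filter = rpc_payload("trace_filter", [{"fromBlock": "0x1", "toBlock": "0x2"}])

print(eth_call)
```

Providers that reject the object-form block parameter, or that lack the trace module, will fail Subgraphs using those features even if they serve ordinary queries fine.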
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### Nodi IPFS -I metadati di distribuzione del subgraph sono memorizzati sulla rete IPFS. The Graph Node accede principalmente al nodo IPFS durante la distribuzione del subgraph per recuperare il manifest del subgraph e tutti i file collegati. Gli indexer di rete non devono ospitare un proprio nodo IPFS. Un nodo IPFS per la rete è ospitato su https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Server di metriche Prometheus @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit Quando è in funzione, Graph Node espone le seguenti porte: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## Configurazione avanzata del Graph Node -Nella sua forma più semplice, Graph Node può essere utilizzato con una singola istanza di Graph Node, un singolo database PostgreSQL, un nodo IPFS e i client di rete richiesti dai subgraph da indicizzare. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,39 +114,39 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Graph Node multipli -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Si noti che più Graph Node possono essere configurati per utilizzare lo stesso database, che può essere scalato orizzontalmente tramite sharding. #### Regole di distribuzione -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Esempio di configurazione della regola di distribuzione: ```toml [deployment] [[deployment.rule]] -match = { name = "(vip|importante)/.*" } +match = { name = "(vip|important)/.*" } shard = "vip" indexers = [ "index_node_vip_0", "index_node_vip_1" ] [[deployment.rule]] match = { network = "kovan" } -# Nessun shard, quindi usiamo lo shard predefinito chiamato "primario". 
-indicizzatori = [ "index_node_kovan_0" ] +# No shard, so we use the default shard called 'primary' +indexers = [ "index_node_kovan_0" ] [[deployment.rule]] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# Non c'è nessun "match", quindi qualsiasi sottografo corrisponde -shard = [ "sharda", "shardb" ] -indicizzatori = [ +# There's no 'match', so any Subgraph matches +shards = [ "sharda", "shardb" ] +indexers = [ "index_node_community_0", "index_node_community_1", "index_node_community_2", "index_node_community_3", "index_node_community_4", - "indice_nodo_comunità_5" + "index_node_community_5" ] ``` @@ -167,11 +167,11 @@ Ogni nodo il cui --node-id corrisponde all'espressione regolare sarà impostato Per la maggior parte dei casi d'uso, un singolo database Postgres è sufficiente per supportare un'istanza del graph-node. Quando un'istanza del graph-node supera un singolo database Postgres, è possibile suddividere l'archiviazione dei dati del graph-node su più database Postgres. Tutti i database insieme formano lo store dell'istanza del graph-node. Ogni singolo database è chiamato shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. 
Lo sharding diventa utile quando il database esistente non riesce a reggere il carico che Graph Node gli impone e quando non è più possibile aumentare le dimensioni del database. -> In genere è meglio creare un singolo database il più grande possibile, prima di iniziare con gli shard. Un'eccezione è rappresentata dai casi in cui il traffico di query è suddiviso in modo molto disomogeneo tra i subgraph; in queste situazioni può essere di grande aiuto tenere i subgraph ad alto volume in uno shard e tutto il resto in un altro, perché questa configurazione rende più probabile che i dati per i subgraph ad alto volume rimangano nella cache interna del database e non vengano sostituiti da dati non necessari per i subgraph a basso volume. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. Per quanto riguarda la configurazione delle connessioni, iniziare con max_connections in postgresql.conf impostato a 400 (o forse anche a 200) e osservare le metriche di Prometheus store_connection_wait_time_ms e store_connection_checkout_count. Tempi di attesa notevoli (qualsiasi cosa superiore a 5 ms) indicano che le connessioni disponibili sono troppo poche; tempi di attesa elevati possono anche essere causati da un database molto occupato (come un elevato carico della CPU). Tuttavia, se il database sembra altrimenti stabile, tempi di attesa elevati indicano la necessità di aumentare il numero di connessioni.
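The connection-pool guidance above (start `max_connections` around 400, watch the Prometheus metric `store_connection_wait_time_ms`, and treat sustained waits over roughly 5 ms as a sign the pool is too small unless the database itself is overloaded) can be expressed as a tiny check. This is an illustrative heuristic only, assuming you have already scraped the metric value:

```python
# Sketch of the pool-sizing heuristic described above. Illustrative only:
# high waits on an otherwise-stable database suggest too few connections,
# while high waits on a CPU-bound database point at the database itself.

WAIT_TIME_THRESHOLD_MS = 5.0

def pool_needs_more_connections(avg_wait_ms: float, db_cpu_busy: bool) -> bool:
    if avg_wait_ms <= WAIT_TIME_THRESHOLD_MS:
        return False
    # Don't grow the pool when the waits are a symptom of database overload.
    return not db_cpu_busy

print(pool_needs_more_connections(8.2, db_cpu_busy=False))  # True: grow the pool
print(pool_needs_more_connections(8.2, db_cpu_busy=True))   # False: investigate DB load first
```

Remember that the configured connection count is an upper bound per `graph-node` instance; idle connections are not held open.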
Nella configurazione, il numero di connessioni che ogni istanza del graph-node può utilizzare è un limite massimo e Graph Node non manterrà aperte le connessioni se non ne ha bisogno. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporto di più reti -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Reti multiple - Fornitori multipli per rete (questo può consentire di suddividere il carico tra i fornitori e di configurare nodi completi e nodi di archivio, con Graph Node che preferisce i fornitori più economici se un determinato carico di lavoro lo consente). @@ -225,11 +225,11 @@ Gli utenti che gestiscono una configurazione di indicizzazione scalare con una c ### Gestione del Graph Node -Dato un Graph Node (o più Graph Nodes!) in funzione, la sfida consiste nel gestire i subgraph distribuiti tra i nodi. Graph Node offre una serie di strumenti che aiutano a gestire i subgraph. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. 
Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Lavorare con i subgraph +### Working with Subgraphs #### Stato dell'indicizzazione API -Disponibile sulla porta 8030/graphql per impostazione predefinita, l'API dello stato di indicizzazione espone una serie di metodi per verificare lo stato di indicizzazione di diversi subgraph, controllare le prove di indicizzazione, ispezionare le caratteristiche dei subgraph e altro ancora. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ Il processo di indicizzazione si articola in tre parti distinte: - Elaborare gli eventi in ordine con i gestori appropriati (questo può comportare la chiamata alla chain per lo stato e il recupero dei dati dall'archivio) - Scrivere i dati risultanti nell'archivio -Questi stadi sono collegati tra loro (cioè possono essere eseguiti in parallelo), ma dipendono l'uno dall'altro. Se i subgraph sono lenti da indicizzare, la causa dipende dal subgraph specifico. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another.
Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Cause comuni di lentezza dell'indicizzazione: @@ -276,24 +276,24 @@ Cause comuni di lentezza dell'indicizzazione: - Il fornitore stesso è in ritardo rispetto alla testa della chain - Lentezza nell'acquisizione di nuove ricevute dal fornitore alla testa della chain -Le metriche di indicizzazione dei subgraph possono aiutare a diagnosticare la causa principale della lentezza dell'indicizzazione. In alcuni casi, il problema risiede nel subgraph stesso, ma in altri, il miglioramento dei provider di rete, la riduzione della contesa del database e altri miglioramenti della configurazione possono migliorare notevolmente le prestazioni dell'indicizzazione. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### I subgraph falliti +#### Failed Subgraphs -Durante l'indicizzazione, i subgraph possono fallire se incontrano dati inaspettati, se qualche componente non funziona come previsto o se c'è un bug nei gestori di eventi o nella configurazione. Esistono due tipi generali di errore: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Guasti deterministici: si tratta di guasti che non possono essere risolti con tentativi di risposta - Fallimenti non deterministici: potrebbero essere dovuti a problemi con il provider o a qualche errore imprevisto di Graph Node. Quando si verifica un errore non deterministico, Graph Node riprova i gestori che non hanno funzionato, riducendo il tempo a disposizione.
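When a Subgraph does fail, the indexing status API described earlier (port 8030/graphql by default) reports its health and any fatal error. A sketch of the query payload, assuming the `indexingStatuses` fields published in the index-node schema; the deployment hash is a placeholder:

```python
import json

def indexing_status_payload(deployment_ids):
    """Build the JSON body for a POST to the indexing status endpoint
    (by default http://localhost:8030/graphql). Field names are assumed
    to follow the index-node schema linked above."""
    query = """
    query ($ids: [String!]!) {
      indexingStatuses(subgraphs: $ids) {
        subgraph
        health
        synced
        fatalError { message deterministic }
      }
    }"""
    return json.dumps({"query": query, "variables": {"ids": deployment_ids}})

# Placeholder deployment ID -- substitute a real Qm... hash.
body = indexing_status_payload(["QmSomeDeploymentHash"])
```

The `fatalError.deterministic` flag distinguishes the two failure classes described above: deterministic failures are final, while non-deterministic ones may be retried.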
-In alcuni casi, un errore può essere risolto dall'indexer (ad esempio, se l'errore è dovuto alla mancanza del tipo di provider giusto, l'aggiunta del provider richiesto consentirà di continuare l'indicizzazione). In altri casi, invece, è necessario modificare il codice del subgraph. +In some cases, a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Cache dei blocchi e delle chiamate -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Se si sospetta un'incongruenza nella cache a blocchi, come ad esempio un evento di ricezione tx mancante: @@ -304,7 +304,7 @@ Se si sospetta un'incongruenza nella cache a blocchi, come ad esempio un evento #### Problemi ed errori di query -Una volta che un subgraph è stato indicizzato, gli indexer possono aspettarsi di servire le query attraverso l'endpoint di query dedicato al subgraph. Se l'indexer spera di servire un volume significativo di query, è consigliabile un nodo di query dedicato; in caso di volumi di query molto elevati, gli indexer potrebbero voler configurare shard di replica in modo che le query non abbiano un impatto sul processo di indicizzazione. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Tuttavia, anche con un nodo di query dedicato e le repliche, alcune query possono richiedere molto tempo per essere eseguite e, in alcuni casi, aumentare l'utilizzo della memoria e avere un impatto negativo sul tempo di query per gli altri utenti. 
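The replica shards mentioned above are configured per shard in `config.toml`. A sketch assuming a single read replica — hostnames, credentials, and weights are placeholders:

```toml
# Sketch: queries are spread across the primary and its replicas in
# proportion to their weights, keeping read load away from the
# connection pool used for indexing writes.
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db:5432/graph-node"
weight = 1

[store.primary.replicas.repl1]
connection = "postgresql://graph:password@replica-db:5432/graph-node"
weight = 4
```

With these weights, roughly four out of five queries would be routed to the replica; tune them against observed query and indexing latency.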
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analisi delle query -Le query problematiche emergono spesso in due modi. In alcuni casi, sono gli stessi utenti a segnalare la lentezza di una determinata query. In questo caso, la sfida consiste nel diagnosticare la ragione della lentezza, sia che si tratti di un problema generale, sia che si tratti di un problema specifico di quel subgraph o di quella query. E poi, naturalmente, risolverlo, se possibile. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In altri casi, il fattore scatenante potrebbe essere l'elevato utilizzo della memoria su un nodo di query, nel qual caso la sfida consiste nell'identificare la query che causa il problema. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Rimozione dei subgraph +#### Removing Subgraphs > Si tratta di una nuova funzionalità, che sarà disponibile in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
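For illustration, a hedged sketch of the invocation — the config path and the `sgdNNN` namespace are placeholders, and the exact flags should be checked with `graphman drop --help` for your version:

```bash
# Run inside the graph-node container, which ships with graphman.
# /etc/graph-node/config.toml and sgd1234 are placeholders.
graphman --config /etc/graph-node/config.toml drop sgd1234
```

As noted above, the same command accepts a Subgraph name or an IPFS `Qm..` hash in place of the database namespace.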
diff --git a/website/src/pages/it/indexing/tooling/graphcast.mdx b/website/src/pages/it/indexing/tooling/graphcast.mdx index 6d0cd00b7784..366d38044fd6 100644 --- a/website/src/pages/it/indexing/tooling/graphcast.mdx +++ b/website/src/pages/it/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Attualmente, il costo per trasmettere informazioni ad altri partecipanti alla re L'SDK (Software Development Kit) di Graphcast consente agli sviluppatori di creare radio, che sono applicazioni alimentate da gossip che gli indexer possono eseguire per servire un determinato scopo. Intendiamo inoltre creare alcune radio (o fornire supporto ad altri sviluppatori/team che desiderano creare radio) per i seguenti casi d'uso: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conduzione di aste e coordinamento per la sincronizzazione warp di subgraph, substream e dati Firehose da altri indexer. -- Autodichiarazione sulle analisi delle query attive, compresi i volumi delle richieste di subgraph, i volumi delle commissioni, ecc. -- Autodichiarazione sull'analisi dell'indicizzazione, compresi i tempi di indicizzazione dei subgraph, i costi del gas per i gestori, gli errori di indicizzazione riscontrati, ecc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Autodichiarazione delle informazioni sullo stack, tra cui la versione del graph-node, la versione di Postgres, la versione del client Ethereum, ecc. 
### Scopri di più diff --git a/website/src/pages/it/resources/benefits.mdx b/website/src/pages/it/resources/benefits.mdx index 01393da864a1..48c8f909359d 100644 --- a/website/src/pages/it/resources/benefits.mdx +++ b/website/src/pages/it/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Confronto costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $350 al mese | $0 | -| Costi di query | $0+ | $0 per month | -| Tempo di progettazione | $400 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità di infra | 100,000 (Free Plan) | -| Costo per query | $0 | $0 | -| Infrastructure | Centralizzato | Decentralizzato | -| Ridondanza geografica | $750+ per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 99.9%+ | -| Costo totale mensile | $750+ | $0 | +| Confronto costi | Self Hosted | The Graph Network | +| :--------------------------------: | :-------------------------------------: | :---------------------------------------------------------------------------: | +| Costo mensile del server\* | $350 al mese | $0 | +| Costi di query | $0+ | $0 per month | +| Tempo di progettazione | $400 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità di infra | 100,000 (Free Plan) | +| Costo per query | $0 | $0 | +| Infrastructure | Centralizzato | Decentralizzato | +| Ridondanza geografica | $750+ per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Confronto costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $350 al mese | $0 | -| Costi di query | $500 al mese | $120 per month | -| Tempo di 
progettazione | $800 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità di infra | ~3,000,000 | -| Costo per query | $0 | $0.00004 | -| Infrastructure | Centralizzato | Decentralizzato | -| Costi di ingegneria | $200 all'ora | Incluso | -| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 99.9%+ | -| Costo totale mensile | $1,650+ | $120 | +| Confronto costi | Self Hosted | The Graph Network | +| :--------------------------------: | :----------------------------------------: | :---------------------------------------------------------------------------: | +| Costo mensile del server\* | $350 al mese | $0 | +| Costi di query | $500 al mese | $120 per month | +| Tempo di progettazione | $800 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità di infra | ~3,000,000 | +| Costo per query | $0 | $0.00004 | +| Infrastructure | Centralizzato | Decentralizzato | +| Costi di ingegneria | $200 all'ora | Incluso | +| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Confronto costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $1100 al mese, per nodo | $0 | -| Costi di query | $4000 | $1,200 per month | -| Numero di nodi necessari | 10 | Non applicabile | -| Tempo di progettazione | $6.000 o più al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità di infra | ~30,000,000 | -| Costo per query | $0 | $0.00004 | -| Infrastructure | Centralizzato | Decentralizzato | -| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 
99.9%+ | -| Costo totale mensile | $11,000+ | $1,200 | +| Confronto costi | Self Hosted | The Graph Network | +| :--------------------------------: | :-----------------------------------------: | :---------------------------------------------------------------------------: | +| Costo mensile del server\* | $1100 al mese, per nodo | $0 | +| Costi di query | $4000 | $1,200 per month | +| Numero di nodi necessari | 10 | Non applicabile | +| Tempo di progettazione | $6.000 o più al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità di infra | ~30,000,000 | +| Costo per query | $0 | $0.00004 | +| Infrastructure | Centralizzato | Decentralizzato | +| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $11,000+ | $1,200 | \*inclusi i costi per il backup: $50-$100 al mese @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. 
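As a sanity check on the tables above, the network-side query bills follow directly from the quoted average of $0.00004 per query:

```python
PRICE_PER_QUERY = 0.00004  # quoted average at time of publication

def monthly_query_cost(queries_per_month: int) -> int:
    # Network query fees scale linearly with volume; rounded to whole dollars.
    return round(queries_per_month * PRICE_PER_QUERY)

medium = monthly_query_cost(3_000_000)   # medium-volume tier
high = monthly_query_cost(30_000_000)    # high-volume tier
print(medium, high)  # 120 1200 -- matches the $120 and $1,200 rows above
```

The self-hosted columns have no such linear formula, since they bundle server, engineering, and redundancy costs.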
-La curation del segnale su un subgraph è opzionale, una tantum, a costo zero (ad esempio, $1.000 in segnale possono essere curati su un subgraph e successivamente ritirati, con un potenziale di guadagno nel processo). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/it/resources/glossary.mdx b/website/src/pages/it/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/it/resources/glossary.mdx +++ b/website/src/pages/it/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. 
The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. 
When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. 
This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. 
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. 
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. 
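Two of the numeric protocol parameters defined in this glossary — the 16x delegation capacity and the 2.5% slashing split — are easy to sanity-check in a few lines:

```python
def delegation_capacity(self_stake_grt: int) -> int:
    # Indexers can accept delegation up to 16x their self-stake.
    return 16 * self_stake_grt

def slash(self_stake_grt: float) -> dict:
    # 2.5% of self-stake is slashed; half rewards the Fisherman,
    # half is burned (removed from circulation).
    slashed = self_stake_grt * 0.025
    return {"fisherman_bounty": slashed / 2, "burned": slashed / 2}

print(delegation_capacity(1_000_000))  # 16000000 -- the 16M GRT example above
print(slash(100_000))                  # at the 100,000 GRT minimum self-stake
```

At the minimum self-stake, a successful dispute costs the Indexer 2,500 GRT, split evenly between the Fisherman's bounty and the burn.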
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx index fd2e5c45f39d..b7b38cd0593d 100644 --- a/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/it/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guida alla migrazione di AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Ciò consentirà agli sviluppatori di subgraph di utilizzare le nuove caratteristiche del linguaggio AS e della libreria standard. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Caratteristiche @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Come aggiornare? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,14 +52,14 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` 2. Update the `graph-cli` you're using to the `latest` version by running: ```bash -# se è installato globalmente +# se è installato globalmente npm install --global @graphprotocol/graph-cli@latest # o nel proprio subgraph, se è una dipendenza di dev @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -Se non si è sicuri di quale scegliere, si consiglia di utilizzare sempre la versione sicura. Se il valore non esiste, si potrebbe fare una dichiarazione if anticipata con un ritorno nel gestore del subgraph. 
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Shadowing della variabile @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Confronti nulli -Eseguendo l'aggiornamento sul subgraph, a volte si possono ottenere errori come questi: +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // non dà errori in fase di compilazione come dovrebbe ``` -Abbiamo aperto un problema sul compilatore AssemblyScript per questo, ma per il momento se fate questo tipo di operazioni nelle vostre mappature di subgraph, dovreste modificarle in modo da fare un controllo di null prima di esse. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand.
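The safe-versus-unsafe distinction discussed in the migration guide above can be sketched in plain TypeScript. Note this only mirrors the AssemblyScript pattern: `Entity` and `load` here are hypothetical stand-ins for a generated entity class and its loader, not real `graph-ts` APIs.

```typescript
// Hypothetical entity type standing in for a generated Subgraph entity.
class Entity {
  constructor(public id: string) {}
  aMethod(): string {
    return `entity ${this.id}`
  }
}

// Stand-in for Entity.load(id): may return null if the entity doesn't exist.
function load(id: string): Entity | null {
  return id === "known" ? new Entity(id) : null
}

// Safe version: early return in the handler when the value is null.
function handleEventSafe(id: string): string | null {
  let maybeValue = load(id)
  if (maybeValue == null) {
    return null // bail out early instead of failing at runtime
  }
  return maybeValue.aMethod()
}

// Unsafe version: the non-null assertion (!) is erased at compile time,
// so a null value fails at runtime when aMethod() is called on it.
function handleEventUnsafe(id: string): string {
  return load(id)!.aMethod()
}
```

The safe version trades one extra `if` for a handler that degrades gracefully when an entity is missing, which is why the guide recommends it as the default.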
```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Verrà compilato ma si interromperà in fase di esecuzione, perché il valore non è stato inizializzato, quindi assicuratevi che il vostro subgraph abbia inizializzato i suoi valori, in questo modo: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx index 067bf445e437..cfc30766450e 100644 --- a/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/it/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Guida alla migrazione delle validazione GraphQL +title: GraphQL Validations Migration Guide --- Presto `graph-node` supporterà il 100% di copertura delle specifiche [Specifiche delle validation GraphQL] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ Per essere conformi a tali validation, seguire la guida alla migrazione. È possibile utilizzare lo strumento di migrazione CLI per trovare eventuali problemi nelle operazioni GraphQL e risolverli. In alternativa, è possibile aggiornare l'endpoint del client GraphQL per utilizzare l'endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testare le query con questo endpoint vi aiuterà a trovare i problemi nelle vostre query. -> Non è necessario migrare tutti i subgraph; se si utilizza [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) o [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), questi garantiscono già la validità delle query.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Strumento CLI di migrazione diff --git a/website/src/pages/it/resources/roles/curating.mdx b/website/src/pages/it/resources/roles/curating.mdx index 330a80715730..a449b5b9fcc0 100644 --- a/website/src/pages/it/resources/roles/curating.mdx +++ b/website/src/pages/it/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index.
When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Come segnalare -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Un curator può scegliere di segnalare su una versione specifica del subgraph, oppure può scegliere di far migrare automaticamente il proprio segnale alla versione di produzione più recente di quel subgraph. Entrambe le strategie sono valide e hanno i loro pro e contro. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. La migrazione automatica del segnale alla più recente versione di produzione può essere utile per garantire l'accumulo di tariffe di query. Ogni volta che si effettua una curation, si paga una tassa di curation del 1%. Si pagherà anche una tassa di curation del 0,5% per ogni migrazione. Gli sviluppatori di subgraph sono scoraggiati dal pubblicare frequentemente nuove versioni: devono pagare una tassa di curation del 0,5% su tutte le quote di curation auto-migrate. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
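The tax figures in the curation section above (a 1% standard tax on initial curation, plus a 0.5% tax each time curation shares auto-migrate to a new version) can be sketched numerically. This is illustrative arithmetic only, not protocol code; the constant and function names are invented for the example.

```typescript
// Illustrative arithmetic for the curation taxes described above.
const INITIAL_CURATION_TAX = 0.01 // 1% tax when GRT is first signaled
const AUTO_MIGRATE_TAX = 0.005    // 0.5% tax on each auto-migration

// GRT effectively signaled after the initial 1% tax is burned.
function grtAfterInitialSignal(amount: number): number {
  return amount * (1 - INITIAL_CURATION_TAX)
}

// Remaining signal after the initial tax plus n auto-migrations.
function grtAfterAutoMigrations(amount: number, migrations: number): number {
  let remaining = grtAfterInitialSignal(amount)
  for (let i = 0; i < migrations; i++) {
    remaining *= 1 - AUTO_MIGRATE_TAX
  }
  return remaining
}
```

For example, signaling 1,000 GRT leaves 990 GRT of effective signal after the initial tax, and each subsequent auto-migration shaves off a further 0.5%, which is why frequent version publishing is discouraged.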
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Rischi 1. Il mercato delle query è intrinsecamente giovane per The Graph e c'è il rischio che la vostra %APY possa essere inferiore a quella prevista a causa delle dinamiche di mercato nascenti. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Un subgraph può fallire a causa di un bug. Un subgraph fallito non matura commissioni della query. Di conseguenza, si dovrà attendere che lo sviluppatore risolva il bug e distribuisca una nuova versione. - - Se siete iscritti alla versione più recente di un subgraph, le vostre quote di partecipazione migreranno automaticamente a quella nuova versione. Questo comporta una tassa di curation di 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## FAQ sulla curation ### 1. Quale % delle tariffe di query guadagnano i curator? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Come si fa a decidere quali subgraph sono di alta qualità da segnalare? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. 
A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Qual è il costo dell'aggiornamento di un subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 
0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Con quale frequenza posso aggiornare il mio subgraph? +### 4. How often can I update my Subgraph? -Si suggerisce di non aggiornare i subgraph troppo frequentemente. Si veda la domanda precedente per maggiori dettagli. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Posso vendere le mie quote di curation? diff --git a/website/src/pages/it/resources/roles/delegating/undelegating.mdx b/website/src/pages/it/resources/roles/delegating/undelegating.mdx index c3e31e653941..6a361c508450 100644 --- a/website/src/pages/it/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/it/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5.
Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/it/resources/subgraph-studio-faq.mdx b/website/src/pages/it/resources/subgraph-studio-faq.mdx index 66453e221c08..3aaffa3bd2b9 100644 --- a/website/src/pages/it/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/it/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: FAQ di Subgraph Studio ## 1. Che cos'è Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Come si crea una chiave API? @@ -18,14 +18,14 @@ Yes! 
You can create multiple API Keys to use in different projects. Check out th Dopo aver creato una chiave API, nella sezione Sicurezza è possibile definire i domini che possono eseguire query di una specifica chiave API. -## 5. Posso trasferire il mio subgraph a un altro proprietario? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Si noti che non sarà più possibile vedere o modificare il subgraph nel Studio una volta trasferito. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Come posso trovare gli URL di query per i subgraph se non sono lo sviluppatore del subgraph che voglio usare? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
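Swapping an API key into a query URL, as described in the Subgraph Studio FAQ above, is a simple string substitution. The gateway URL shape and the `[api-key]` placeholder text below are assumptions for illustration only; copy the actual URL from the Query pane in Graph Explorer.

```typescript
// Hypothetical query URL template; the real one comes from Graph Explorer's
// "Query" pane. "[api-key]" stands in for the placeholder shown there.
const template =
  "https://gateway.thegraph.com/api/[api-key]/subgraphs/id/SUBGRAPH_ID"

// Replace the placeholder with your own Subgraph Studio API key.
function fillApiKey(queryUrl: string, apiKey: string): string {
  return queryUrl.replace("[api-key]", apiKey)
}

const url = fillApiKey(template, "my-api-key")
```

The resulting URL can then be used as the GraphQL endpoint in any client; queries sent through it are billed against the API key's balance.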
-Si ricorda che è possibile creare una chiave API ed eseguire query del qualsiasi subgraph pubblicato sulla rete, anche se si costruisce un subgraph da soli. Queste query tramite la nuova chiave API sono a pagamento, come tutte le altre sulla rete. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any others on the network. diff --git a/website/src/pages/it/resources/tokenomics.mdx b/website/src/pages/it/resources/tokenomics.mdx index c342b803f911..c869fcb1a9da 100644 --- a/website/src/pages/it/resources/tokenomics.mdx +++ b/website/src/pages/it/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Panoramica -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curator - Trovare i migliori subgraph per gli Indexer +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4.
Indexer - Struttura portante dei dati della blockchain @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. 
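The delegation example in the tokenomics section above (15,000 GRT delegated to an Indexer offering 10% yields ~1,500 GRT in rewards annually) is simple arithmetic. A sketch, with an invented helper name:

```typescript
// Estimate annual Delegator rewards from the amount delegated and the
// Indexer's effective reward rate to Delegators (as a percentage).
function annualDelegationRewards(delegatedGrt: number, ratePercent: number): number {
  return delegatedGrt * (ratePercent / 100)
}

// The doc's example: 15,000 GRT at 10% is roughly 1,500 GRT per year.
const rewards = annualDelegationRewards(15_000, 10)
```

Actual rewards vary with the Indexer's performance and the cut they set, so treat this as an upper-bound estimate rather than a guarantee.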
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creare un subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
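As a back-of-the-envelope sketch of the 1% curation tax described above (a hypothetical helper, not a Graph SDK API — the tax rate is set by protocol governance):

```typescript
// Hypothetical illustration of the 1% curation tax described above.
// The tax is burned, permanently reducing GRT supply.
const CURATION_TAX_RATE = 0.01;

export function curate(grtSignalled: number): { burned: number; effectiveSignal: number } {
  const burned = grtSignalled * CURATION_TAX_RATE;
  return { burned, effectiveSignal: grtSignalled - burned };
}

// A developer curating their own Subgraph with the suggested 3,000 GRT:
const { burned, effectiveSignal } = curate(3_000);
console.log(burned, effectiveSignal); // 30 GRT burned, 2970 GRT of effective signal
```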
-### Eseguire query di un subgraph esistente +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. 
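The issuance and burn figures above imply roughly 2% net annual supply growth. The following is a hypothetical back-of-the-envelope model; both rates are governance-set targets, not constants exposed by any SDK:

```typescript
// Hypothetical model of the supply dynamics described above.
// ~3% of supply is issued to Indexers annually; ~1% is burned via
// delegation tax, curation tax, and query-fee burns.
const ISSUANCE_RATE = 0.03;
const BURN_RATE = 0.01;

export function supplyAfterOneYear(totalSupply: number): number {
  const issued = totalSupply * ISSUANCE_RATE;
  const burned = totalSupply * BURN_RATE;
  return totalSupply + issued - burned;
}

const initialSupply = 10_000_000_000; // 10 billion GRT
console.log(supplyAfterOneYear(initialSupply)); // ~10.2 billion (≈2% net growth)
```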
![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/it/sps/introduction.mdx b/website/src/pages/it/sps/introduction.mdx index 62359b0a7ab0..0e5be69aa0c3 100644 --- a/website/src/pages/it/sps/introduction.mdx +++ b/website/src/pages/it/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduzione --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Panoramica -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). 
In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/it/sps/sps-faq.mdx b/website/src/pages/it/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/it/sps/sps-faq.mdx +++ b/website/src/pages/it/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? 
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.

-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.

-## How are Substreams-powered subgraphs different from subgraphs?
+## How are Substreams-powered Subgraphs different from Subgraphs?

Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.

-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. 
Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource that references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.

-## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs?

-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.

## What are the benefits of Substreams? 
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. 
They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.

See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.

@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst

When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used.

-As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.

## How can you build and deploy a Substreams-powered Subgraph?

After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).

-## Where can I find examples of Substreams and Substreams-powered subgraphs?
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?

-You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. 
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/it/sps/triggers.mdx b/website/src/pages/it/sps/triggers.mdx index 072d7ba9d194..711dcaa6423a 100644 --- a/website/src/pages/it/sps/triggers.mdx +++ b/website/src/pages/it/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Panoramica -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. 
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/it/sps/tutorial.mdx b/website/src/pages/it/sps/tutorial.mdx index fb9c4e1c7b5c..06a271e30ff1 100644 --- a/website/src/pages/it/sps/tutorial.mdx +++ b/website/src/pages/it/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
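The trigger-handler flow walked through above — decode raw Substreams bytes, loop over the decoded items, and create one entity per item — can be sketched roughly as follows. This is a simplified, hypothetical illustration: a real Subgraph handler decodes Protobuf with generated code and saves `graph-ts` entities, whereas JSON stands in for Protobuf here.

```typescript
// Simplified, hypothetical sketch of the trigger-handler flow described
// above. A real handler uses a generated Protobuf decoder and graph-ts
// entity classes; JSON encoding stands in for Protobuf in this sketch.
interface Transaction {
  hash: string;
}

interface Transactions {
  transactions: Transaction[];
}

function decodeTransactions(bytes: Uint8Array): Transactions {
  // Stand-in for Protobuf.decode<Transactions>(bytes, Transactions.decode)
  return JSON.parse(Buffer.from(bytes).toString("utf8")) as Transactions;
}

export function handleTransactions(bytes: Uint8Array): string[] {
  const decoded = decodeTransactions(bytes);
  // One new entity id per transaction, mirroring step 3 of the walkthrough
  return decoded.transactions.map((tx) => tx.hash);
}
```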
## Iniziare @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! 
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/it/subgraphs/_meta-titles.json b/website/src/pages/it/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/it/subgraphs/_meta-titles.json +++ b/website/src/pages/it/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/it/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
## Why Avoiding `eth_calls` Is a Best Practice

-Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed.

### What Does an eth_call Look Like?

-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:

```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```

-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.

## How to Eliminate `eth_calls`

@@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```

-With this update, the subgraph can directly index the required data without external calls:
+With this update, the Subgraph can directly index the required data without external calls:

```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c

The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call.

-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.

## Conclusion

-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
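The in-memory caching behaviour of declared `eth_calls` mentioned above can be approximated with a simple memoization sketch. This is hypothetical illustration only — graph-node's actual cache is internal to the node, not application code:

```typescript
// Hypothetical memoization sketch of how declared eth_call results are
// served from an in-memory cache instead of repeated RPC round-trips.
const callCache = new Map<string, string>();
export let rpcCallCount = 0;

function slowEthCall(key: string): string {
  rpcCallCount++; // stands in for a slow external RPC request
  return `result-of-${key}`;
}

export function cachedEthCall(key: string): string {
  const hit = callCache.get(key);
  if (hit !== undefined) return hit; // cache hit: no RPC made
  const value = slowEthCall(key);
  callCache.set(key, value);
  return value;
}

cachedEthCall("pool.getPoolInfo");
cachedEthCall("pool.getPoolInfo");
console.log(rpcCallCount); // 1 — the second call was served from the cache
```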
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/it/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
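Conceptually, a `@derivedFrom` field is a reverse lookup computed from the other side of the relationship. The sketch below illustrates that idea; it is hypothetical, since the real lookup happens inside graph-node's store rather than in application code:

```typescript
// Hypothetical sketch of the reverse lookup behind @derivedFrom: each
// Comment stores its post id, and a Post's `comments` field is derived
// by filtering on that field instead of storing an ever-growing array.
interface Comment {
  id: string;
  post: string; // id of the Post this comment belongs to
}

export function derivedComments(allComments: Comment[], postId: string): string[] {
  return allComments.filter((c) => c.post === postId).map((c) => c.id);
}

const comments: Comment[] = [
  { id: "c1", post: "p1" },
  { id: "c2", post: "p1" },
  { id: "c3", post: "p2" },
];
console.log(derivedComments(comments, "p1")); // ["c1", "c2"]
```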
diff --git a/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx index 62edf8926555..ab6bd38a1247 100644 --- a/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/it/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/it/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
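As an illustrative sketch of the `concatI32()` approach, an AssemblyScript mapping handler could build a `Bytes` ID like this (the `Transfer` event, the entity name, and the generated import paths are assumptions, not part of any specific contract):

```typescript
import { Transfer as TransferEvent } from '../generated/Token/Token'
import { Transfer } from '../generated/schema'

export function handleTransfer(event: TransferEvent): void {
  // Bytes ID: transaction hash concatenated with the log index,
  // avoiding the slower string-concatenation pattern.
  const id = event.transaction.hash.concatI32(event.logIndex.toI32())
  const transfer = new Transfer(id)
  transfer.from = event.params.from
  transfer.to = event.params.to
  transfer.value = event.params.value
  transfer.save()
}
```

Pairing this with `id: Bytes!` and `@entity(immutable: true)` in the schema captures both optimizations described above.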
diff --git a/website/src/pages/it/subgraphs/best-practices/pruning.mdx b/website/src/pages/it/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/it/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/it/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/best-practices/timeseries.mdx b/website/src/pages/it/subgraphs/best-practices/timeseries.mdx index 112e062e6187..1586f8edb6ff 100644 --- a/website/src/pages/it/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/it/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/it/subgraphs/billing.mdx b/website/src/pages/it/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/it/subgraphs/billing.mdx +++ b/website/src/pages/it/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/it/subgraphs/cookbook/arweave.mdx b/website/src/pages/it/subgraphs/cookbook/arweave.mdx index 2372025621d1..e59abffa383f 100644 --- a/website/src/pages/it/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/it/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. 
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token <ACCESS_TOKEN> @@ -160,25 +160,25 @@ graph deploy --access-token <ACCESS_TOKEN> ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/it/subgraphs/cookbook/enums.mdx b/website/src/pages/it/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/it/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/it/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
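As a minimal sketch of that pattern (the `OwnerType` enum and `Token` entity here are hypothetical), the enum is declared in the schema and later assigned by its string representation:

```gql
enum OwnerType {
  OriginalOwner
  SecondOwner
  ThirdOwner
}

type Token @entity {
  id: Bytes!
  ownerType: OwnerType!
}
```

In a mapping, the field is then set with the plain string value, e.g. `token.ownerType = "SecondOwner"`; values outside the declared set are rejected, which is what gives enums their type safety.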
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/it/subgraphs/cookbook/grafting.mdx b/website/src/pages/it/subgraphs/cookbook/grafting.mdx index 57d5169830a7..d9abe0e70d2a 100644 --- a/website/src/pages/it/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/it/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will cover a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version.
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
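The "up to and including" boundary described above can be sketched as a tiny decision function — illustrative pseudologic in TypeScript, not actual Graph Node code:

```typescript
// Sketch only: models which deployment's indexing produced the data for a
// given block under grafting. Uses the tutorial's graft block (5956000).
function dataSource(blockNumber: number, graftBlock: number): string {
  // Data up to and including the graft block is copied from the base
  // Subgraph; blocks after it are indexed by the grafted Subgraph.
  return blockNumber <= graftBlock ? "base" : "grafted";
}
```

So with `block: 5956000`, block 5956000 itself still comes from the base Subgraph's copied data, and indexing of the new version begins at 5956001.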
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/it/subgraphs/cookbook/near.mdx b/website/src/pages/it/subgraphs/cookbook/near.mdx index 809574aa81cd..baa5bcc79157 100644 --- a/website/src/pages/it/subgraphs/cookbook/near.mdx +++ b/website/src/pages/it/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. 
-There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities.
This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSON. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. 
We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise, please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/it/subgraphs/cookbook/polymarket.mdx b/website/src/pages/it/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/it/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/it/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph.
+The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free, which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
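As a sketch of the API-key querying flow above: building the POST request for a Subgraph query. The gateway URL pattern (API key embedded in the path) and the placeholder IDs are assumptions to verify against your own Subgraph Studio settings:

```typescript
// Sketch only: URL shape and IDs are placeholders, not authoritative.
function buildGraphQLRequest(apiKey: string, subgraphId: string, query: string) {
  return {
    // A common Graph gateway pattern puts the API key in the path.
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

// Example usage with placeholder values; `_meta` is a standard Subgraph field.
const req = buildGraphQLRequest("YOUR_API_KEY", "SUBGRAPH_ID", "{ _meta { block { number } } }");
```

The resulting `url` and `options` can be passed to `fetch` or adapted for `axios` as shown in the snippet above.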
diff --git a/website/src/pages/it/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/it/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fba106e6eaf6..b247912c90e6 100644 --- a/website/src/pages/it/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/it/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
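A minimal sketch of the server-side idea just described — the module path, env var name, and auth header shape are all assumptions for illustration, not the cookbook's actual code:

```typescript
// Hypothetical server-only helper (e.g. app/lib/subgraph-auth.ts in a
// Next.js App Router project). Because it is only imported by server
// components, the API key read from process.env never reaches the browser.
function getAuthHeaders(): Record<string, string> {
  const apiKey = process.env.GRAPH_API_KEY; // assumed env var name
  if (!apiKey) {
    throw new Error("GRAPH_API_KEY is not set");
  }
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`, // assumed auth scheme; verify for your gateway
  };
}
```

A server component would call this when issuing its fetch, so the client bundle only ever sees the query results, never the key.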
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/it/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/it/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..07d18977cda6 --- /dev/null +++ b/website/src/pages/it/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Getting Started + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability and simplifies both development and maintenance.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
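As a sketch of how the composed Subgraph wires everything together, its manifest declares each deployed source Subgraph as a `kind: subgraph` data source. The deployment IDs, network, and start blocks below are placeholders — substitute the IDs printed when you deploy each source Subgraph:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet # placeholder network
    source:
      address: 'QmBlockTimeDeploymentId' # placeholder: deployment ID of the block time Subgraph
      startBlock: 0
  - kind: subgraph
    name: BlockCost
    network: mainnet
    source:
      address: 'QmBlockCostDeploymentId' # placeholder
      startBlock: 0
  - kind: subgraph
    name: BlockSize
    network: mainnet
    source:
      address: 'QmBlockSizeDeploymentId' # placeholder
      startBlock: 0
```

Remember that redeploying a source Subgraph changes its deployment ID, so these `address` values must be updated accordingly.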
diff --git a/website/src/pages/it/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/it/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..4b50534fbe9d --- /dev/null +++ b/website/src/pages/it/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Introduzione
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories. 
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Iniziare + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/it/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/it/subgraphs/cookbook/subgraph-debug-forking.mdx index 6610f19da66d..91aa7484d2ec 100644 --- a/website/src/pages/it/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/it/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@
title: Quick and Easy Subgraph Debugging Using Forks
---

-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## Ok, what is it? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## What?! How? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.

## Please, show me some code!

-To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.

Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:

@@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ```

-Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Oops, how unfortunate, when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.

The usual way to attempt a fix is:

1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Wait for it to sync-up.
4. If it breaks again go back to 1, otherwise: Hooray! 
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/it/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/it/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/it/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/it/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.

-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. 
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/it/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/it/subgraphs/cookbook/transfer-to-the-graph.mdx index 4c435d24f56c..ca66ccfd91f8 100644 --- a/website/src/pages/it/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/it/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network

@@ -70,17 +70,17 @@ graph deploy --ipfs-hash

### Query Your Subgraph

-> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.

-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.

#### Esempio

-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

![Query URL](/img/cryptopunks-screenshot-transfer.png)

-The query URL for this subgraph is:
+The query URL for this Subgraph is:

```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the

### Monitor Subgraph Status

-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
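You can also check a Subgraph's health over GraphQL itself: Graph Node exposes a built-in `_meta` field on every Subgraph, so a query like the one below, sent to the Subgraph's query URL, reports the latest indexed block and whether indexing errors occurred:

```graphql
{
  _meta {
    hasIndexingErrors
    block {
      number
    }
  }
}
```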
### Additional Resources

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/it/subgraphs/developing/creating/advanced.mdx b/website/src/pages/it/subgraphs/developing/creating/advanced.mdx index 94c7d1f0d42d..741d77c979d9 100644 --- a/website/src/pages/it/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## Panoramica

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Errori non fatali -Gli errori di indicizzazione su subgraph già sincronizzati causano, per impostazione predefinita, il fallimento del subgraph e l'interruzione della sincronizzazione. In alternativa, i subgraph possono essere configurati per continuare la sincronizzazione in presenza di errori, ignorando le modifiche apportate dal gestore che ha provocato l'errore. In questo modo gli autori dei subgraph hanno il tempo di correggere i loro subgraph mentre le query continuano a essere servite rispetto al blocco più recente, anche se i risultati potrebbero essere incoerenti a causa del bug che ha causato l'errore. Si noti che alcuni errori sono sempre fatali. Per essere non fatale, l'errore deve essere noto come deterministico. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Per abilitare gli errori non fatali è necessario impostare il seguente flag di caratteristica nel manifesto del subgraph: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -I data source file sono una nuova funzionalità del subgraph per accedere ai dati fuori chain durante l'indicizzazione in modo robusto ed estendibile. I data source file supportano il recupero di file da IPFS e da Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Questo pone anche le basi per l'indicizzazione deterministica dei dati fuori chain e per la potenziale introduzione di dati arbitrari provenienti da HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ Questo creerà una nuova data source file, che interrogherà l'endpoint IPFS o A This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulazioni, state usando i data source file! -#### Distribuire i subgraph +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. 
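As a sketch of that build-and-deploy step (the node/IPFS endpoints and the Subgraph name below are placeholders for your own setup, not values from this guide), the Graph CLI invocation looks like:

```sh
# Generate types and compile the mappings, including any file data source templates
graph codegen && graph build

# Deploy to a self-hosted Graph Node (placeholder endpoints and Subgraph name)
graph deploy --node http://localhost:8020 --ipfs http://localhost:5001 example/file-data-sources
```

The `--node` and `--ipfs` flags point the CLI at the Graph Node admin endpoint and the IPFS node where the Subgraph definition is stored.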
#### Limitazioni -I gestori e le entità di data source file sono isolati dalle altre entità del subgraph, assicurando che siano deterministici quando vengono eseguiti e garantendo che non ci sia contaminazione di data source basate sulla chain. Per essere precisi: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Le entità create di Data Source file sono immutabili e non possono essere aggiornate - I gestori di Data Source file non possono accedere alle entità di altre data source file - Le entità associate al Data Source file non sono accessibili ai gestori alla chain -> Sebbene questo vincolo non dovrebbe essere problematico per la maggior parte dei casi d'uso, potrebbe introdurre complessità per alcuni. Contattate via Discord se avete problemi a modellare i vostri dati basati su file in un subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Inoltre, non è possibile creare data source da una data source file, sia essa una data source onchain o un'altra data source file. Questa restrizione potrebbe essere eliminata in futuro. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. 
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. 
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. 
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params`: ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Poiché l'innesto copia piuttosto che indicizzare i dati di base, è molto più veloce portare il subgraph al blocco desiderato rispetto all'indicizzazione da zero, anche se la copia iniziale dei dati può richiedere diverse ore per subgraph molto grandi. Mentre il subgraph innestato viene inizializzato, il Graph Node registra le informazioni sui tipi di entità già copiati. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx index 23271ae9c85c..8154b3d9555c 100644 --- a/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Generazione del codice -Per rendere semplice e sicuro il lavoro con gli smart contract, gli eventi e le entità, la Graph CLI può generare tipi AssemblyScript dallo schema GraphQL del subgraph e dagli ABI dei contratti inclusi nelle data source. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Questo viene fatto con @@ -80,7 +80,7 @@ Questo viene fatto con graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx index 1d6fa48848b3..fb87d521d968 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ La libreria `@graphprotocol/graph-ts` fornisce le seguenti API: ### Versioni -La `apiVersion` nel manifest del subgraph specifica la versione dell'API di mappatura che viene eseguita da the Graph Node per un dato subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Versione | Note di rilascio | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum
Aggiunto il campo `receipt` all'oggetto Ethereum Event | -| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction
Aggiunto `baseFeePerGas` all'oggetto Ethereum Block | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction | +| Versione | Note di rilascio | +| :------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum
Aggiunto il campo `receipt` all'oggetto Ethereum Event | +| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction
Aggiunto `baseFeePerGas` all'oggetto Ethereum Block | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction | ### Tipi integrati @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' L'API `store` consente di caricare, salvare e rimuovere entità da e verso il Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creazione di entità @@ -282,8 +282,8 @@ A partire da `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 e `@graphpr The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ L'API di Ethereum fornisce l'accesso agli smart contract, alle variabili di stat #### Supporto per i tipi di Ethereum -Come per le entità, `graph codegen` genera classi per tutti gli smart contract e gli eventi utilizzati in un subgraph. Per questo, gli ABI dei contratti devono far parte dell'origine dati nel manifest del subgraph. In genere, i file ABI sono memorizzati in una cartella `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Con le classi generate, le conversioni tra i tipi di Ethereum e i [tipi incorporati](#built-in-types) avvengono dietro le quinte, in modo che gli autori dei subgraph non debbano preoccuparsene. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -L'esempio seguente lo illustra. Dato uno schema di subgraph come +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Accesso allo stato dello smart contract -Il codice generato da `graph codegen` include anche classi per gli smart contract utilizzati nel subgraph. 
Queste possono essere utilizzate per accedere alle variabili di stato pubbliche e per chiamare le funzioni del contratto nel blocco corrente. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Un modello comune è quello di accedere al contratto da cui proviene un evento. Questo si ottiene con il seguente codice: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { Finché il `ERC20Contract` su Ethereum ha una funzione pubblica di sola lettura chiamata `symbol`, questa può essere chiamata con `.symbol()`. Per le variabili di stato pubbliche viene creato automaticamente un metodo con lo stesso nome. -Qualsiasi altro contratto che faccia parte del subgraph può essere importato dal codice generato e può essere legato a un indirizzo valido. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Gestione delle chiamate annullate @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. L'API `log` include le seguenti funzioni: @@ -590,7 +590,7 @@ L'API `log` include le seguenti funzioni: - `log.info(fmt: string, args: Array): void` - registra un messaggio informativo. - `log.warning(fmt: string, args: Array): void` - registra un avviso. 
- `log.error(fmt: string, args: Array): void` - registra un messaggio di errore. -- `log.critical(fmt: string, args: Array): void` - registra un messaggio critico _and_ termina il subgraph. +- `log.critical(fmt: string, args: Array): void` - logs a critical message _and_ terminates the Subgraph. L'API `log` accetta una stringa di formato e un array di valori stringa. Quindi sostituisce i segnaposto con i valori stringa dell'array. Il primo segnaposto `{}` viene sostituito dal primo valore dell'array, il secondo segnaposto `{}` viene sostituito dal secondo valore e così via. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) L'unico flag attualmente supportato è `json`, che deve essere passato a `ipfs.map`. Con il flag `json`, il file IPFS deve essere costituito da una serie di valori JSON, un valore per riga. La chiamata a `ipfs.map` leggerà ogni riga del file, la deserializzerà in un `JSONValue` e chiamerà il callback per ognuno di essi. Il callback può quindi utilizzare le operazioni sulle entità per memorizzare i dati dal `JSONValue`. Le modifiche alle entità vengono memorizzate solo quando il gestore che ha chiamato `ipfs.map` termina con successo; nel frattempo, vengono mantenute in memoria e la dimensione del file che `ipfs.map` può elaborare è quindi limitata. -In caso di successo, `ipfs.map` restituisce `void`. Se una qualsiasi invocazione del callback causa un errore, il gestore che ha invocato `ipfs.map` viene interrotto e il subgraph viene contrassegnato come fallito. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. 
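As an illustrative AssemblyScript sketch of the `ipfs.map` pattern described above (the `Item` entity and its `title`/`parent` fields are hypothetical, and the file is assumed to contain one JSON object per line), a `processItem` callback might look like:

```typescript
import { JSONValue, Value } from '@graphprotocol/graph-ts'
import { Item } from '../generated/schema'

// Invoked once per line of the IPFS file when the `json` flag is passed.
export function processItem(value: JSONValue, userData: Value): void {
  let obj = value.toObject()
  let id = obj.get('id')
  let title = obj.get('title')
  if (id == null || title == null) {
    return
  }
  // Entity writes are only persisted if the handler that called
  // `ipfs.map` finishes successfully; until then they are held in memory.
  let item = new Item(id.toString())
  item.title = title.toString()
  item.parent = userData.toString() // e.g. the parentId passed by the caller
  item.save()
}
```

It would be driven from a handler with a call such as `ipfs.map('Qm...', 'processItem', Value.fromString(parentId), ['json'])`.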
### Crypto API @@ -770,44 +770,44 @@ Quando il tipo di un valore è certo, può essere convertito in un [tipo incorpo ### Riferimento alle conversioni di tipo -| Fonte(i) | Destinazione | Funzione di conversione | -| -------------------- | -------------------- | --------------------------- | -| Address | Bytes | none | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Fonte(i) | Destinazione | Funzione di conversione | +| -------------------- | --------------------- | -------------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | 
s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Metadati della Data Source @@ -836,7 +836,7 @@ La classe base `Entity` e la classe figlia `DataSourceContext` hanno degli helpe ### DataSourceContext nel manifesto -La sezione `contesto` all'interno di `dataSources` consente di definire coppie chiave-valore accessibili nelle mappature dei subgraph. I tipi disponibili sono `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` e `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
Ecco un esempio YAML che illustra l'uso di vari tipi nella sezione `context`: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifica un elenco di elementi. Ogni elemento deve specificare il suo tipo e i suoi dati. - `BigInt`: Specifica un valore intero di grandi dimensioni. Deve essere quotato a causa delle sue grandi dimensioni. -Questo contesto è quindi accessibile nei file di mappatura dei subgraph, consentendo di ottenere subgraph più dinamici e configurabili. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx index 8d714dad8499..7c21ab8fc43b 100644 --- a/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemi comuni di AssemblyScript --- -Ci sono alcuni problemi [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) in cui è comune imbattersi durante lo sviluppo di subgraph. La loro difficoltà di debug è variabile, ma conoscerli può essere d'aiuto. Quello che segue è un elenco non esaustivo di questi problemi: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. 
- L'ambito non è ereditato nelle [closure functions](https://www.assemblyscript.org/status.html#on-closures), cioè le variabili dichiarate al di fuori delle closure functions non possono essere utilizzate. Spiegazione in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx index 4f4afcee006a..20770b2e37b7 100644 --- a/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installare the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Panoramica -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Per cominciare @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Create a Subgraph ### Da un contratto esistente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. 
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### Da un subgraph di esempio -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is I file ABI devono corrispondere al vostro contratto. Esistono diversi modi per ottenere i file ABI: - Se state costruendo il vostro progetto, probabilmente avrete accesso alle ABI più recenti. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Versione | Note di rilascio | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx index d3c22e25f97d..63cbca1acc72 100644 --- a/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Panoramica -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Tipo | Descrizione | -| --- | --- | -| `Bytes` | Byte array, rappresentato come una stringa esadecimale. Comunemente utilizzato per gli hash e gli indirizzi di Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Tipo | Descrizione | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, rappresentato come una stringa esadecimale. Comunemente utilizzato per gli hash e gli indirizzi di Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enum @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. 
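The derived reverse lookup described above can be illustrated with a small in-memory TypeScript model. This is purely illustrative (Graph Node derives `@derivedFrom` fields from its store at query time; the entity and field names here are hypothetical):

```typescript
// Hypothetical model: only the "many" side stores the relationship
// (each TokenBalance records its token); the "one" side is derived.
interface TokenBalance {
  id: string
  token: string // id of the owning Token entity
  amount: number
}

const balances: TokenBalance[] = [
  { id: 'b1', token: 'DAI', amount: 100 },
  { id: 'b2', token: 'DAI', amount: 50 },
  { id: 'b3', token: 'USDC', amount: 7 },
]

// Analogue of `balances: [TokenBalance!]! @derivedFrom(field: "token")`:
// the reverse side is computed on demand instead of stored as an array.
function derivedBalances(tokenId: string): TokenBalance[] {
  return balances.filter((b) => b.token === tokenId)
}

console.log(derivedBalances('DAI').length) // 2
```

Because nothing is stored on the `Token` side, updating a balance never requires rewriting a growing array on the token, which is why this pattern indexes and queries faster than stored entity arrays.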
-Per le relazioni uno-a-molti, la relazione deve sempre essere memorizzata sul lato "uno" e il lato "molti" deve sempre essere derivato. Memorizzare la relazione in questo modo, piuttosto che memorizzare un array di entità sul lato "molti", migliorerà notevolmente le prestazioni sia per l'indicizzazione che per l'interrogazione del subgraph. In generale, la memorizzazione di array di entità dovrebbe essere evitata per quanto possibile. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Esempio @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -Questo modo più elaborato di memorizzare le relazioni molti-a-molti si traduce in una minore quantità di dati memorizzati per il subgraph e quindi in un subgraph che spesso è molto più veloce da indicizzare e da effettuare query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Aggiungere commenti allo schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. 
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Lingue supportate diff --git a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx index 6b6247b0ce50..5b0ac052a82d 100644 --- a/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Panoramica -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. 
[Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Versione | Note di rilascio | +| :------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx index d8b9c415b293..4653f00cc455 100644 --- a/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Panoramica -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). Le voci importanti da aggiornare per il manifesto sono: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Gestori di chiamate -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. I gestori di chiamate si attivano solo in uno dei due casi: quando la funzione specificata viene chiamata da un conto diverso dal contratto stesso o quando è contrassegnata come esterna in Solidity e chiamata come parte di un'altra funzione nello stesso contratto. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API.
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Definire un gestore di chiamate @@ -162,31 +162,31 @@ To define a call handler in your manifest, simply add a `callHandlers` array und ```yaml dataSources: - kind: ethereum/contract - name: Factory + name: Gravity network: mainnet source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' - abi: Factory + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/mappings/factory.ts entities: - - Directory + - Gravatar + - Transaction abis: - - name: Factory - file: ./abis/factory.json - eventHandlers: - - event: NewExchange(address,address) - handler: handleNewExchange + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar ``` The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. ### Funzione di mappatura -Each call handler takes a single parameter that has a type corresponding to the name of the called function. +Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Gestori di blocchi -Oltre a sottoscrivere eventi di contratto o chiamate di funzione, un subgraph può voler aggiornare i propri dati quando nuovi blocchi vengono aggiunti alla chain. A tale scopo, un subgraph può eseguire una funzione dopo ogni blocco o dopo i blocchi che corrispondono a un filtro predefinito. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Filtri supportati @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. L'assenza di un filtro per un gestore di blocchi garantisce che il gestore venga chiamato a ogni blocco. Una data source può contenere un solo gestore di blocchi per ogni tipo di filtro.
@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Filtro once @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Il gestore definito con il filtro once sarà chiamato una sola volta prima dell'esecuzione di tutti gli altri gestori. Questa configurazione consente al subgraph di utilizzare il gestore come gestore di inizializzazione, eseguendo compiti specifici all'inizio dell'indicizzazione. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Funzione di mappatura -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Blocchi di partenza -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Versione | Note di rilascio | +| :------: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx index 77496e8eb092..c3e791437e9f 100644 --- a/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/it/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Per cominciare @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test!
👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx index 0bcbe1eddc43..f8b9f74c6479 100644 --- a/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/it/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Distribuzione del subgraph su più reti +## Deploying the Subgraph to multiple networks -In alcuni casi, si desidera distribuire lo stesso subgraph su più reti senza duplicare tutto il suo codice. Il problema principale è che gli indirizzi dei contratti su queste reti sono diversi. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. 
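The templating approach described above can be sketched in a few lines of plain TypeScript — a hypothetical stand-in for the real Mustache/Handlebars libraries, with a made-up manifest fragment and network config, just to show the substitution mechanics:

```typescript
// Minimal {{key}} substitution, standing in for Mustache/Handlebars.
// Template fragment and config values below are hypothetical examples.
function render(template: string, config: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => config[key] ?? "");
}

// A tiny slice of a subgraph.yaml with templated fields.
const template = [
  "dataSources:",
  "  - kind: ethereum/contract",
  "    network: {{network}}",
  "    source:",
  "      address: '{{address}}'",
].join("\n");

// Equivalent of a per-network config file such as config/sepolia.json.
const sepolia = {
  network: "sepolia",
  address: "0xabc0000000000000000000000000000000000000",
};

console.log(render(template, sepolia));
```

Each network then gets its own config object (or JSON file), and the same template yields one manifest per network.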
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Politica di archiviazione dei subgraph di Subgraph Studio +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Ogni subgraph colpito da questa politica ha un'opzione per recuperare la versione in questione. +Every Subgraph affected with this policy has an option to bring the version in question back. 
-## Verifica dello stato di salute del subgraph +## Checking Subgraph health -Se un subgraph si sincronizza con successo, è un buon segno che continuerà a funzionare bene per sempre. Tuttavia, nuovi trigger sulla rete potrebbero far sì che il subgraph si trovi in una condizione di errore non testata o che inizi a rimanere indietro a causa di problemi di prestazioni o di problemi con gli operatori dei nodi. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. 
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx index 6d7e019d9d6f..3a07d7d50b24 100644 --- a/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/it/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Creare e gestire le chiavi API per specifici subgraph +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs

### Come creare un subgraph nel Subgraph Studio

@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli

### Compatibilità del subgraph con The Graph Network

-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- Non deve utilizzare nessuna delle seguenti funzioni:
-  - ipfs.cat & ipfs.map
-  - Errori non fatali
-  - Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.

## Initialize Your Subgraph

-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:

```bash
graph init <SUBGRAPH_SLUG>
```

-You can find the `<SUBGRAPH_SLUG>` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio, see image below:

![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
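As a minimal sketch of the initialization step (assuming `@graphprotocol/graph-cli` is installed globally; `my-org/my-subgraph` is a hypothetical slug — use the one shown on your Subgraph details page):

```shell
# Hypothetical slug copied from Subgraph Studio; yours will differ.
SLUG="my-org/my-subgraph"

# The actual init call is interactive and needs graph-cli plus network
# access, so it is commented out in this sketch:
#   graph init "$SLUG"

# graph init scaffolds a project folder named after the slug's last segment,
# containing subgraph.yaml, schema.graphql, and the mappings source.
PROJECT_DIR="${SLUG##*/}"
echo "$PROJECT_DIR"   # → my-subgraph
```

From inside that folder you then iterate on the manifest, schema, and mappings before deploying.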
## Graph Auth

-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.

Then, use the following command to authenticate from the CLI:

@@ -91,11 +85,11 @@ graph auth

## Deploying a Subgraph

-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.

-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.

-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:

```bash
graph deploy <SUBGRAPH_SLUG>
```

@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.

## Testing Your Subgraph

-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.

-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
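The authenticate-then-deploy flow above can be sketched as follows. This is a sketch only: the deploy key, slug, and version label are hypothetical placeholders, and the `graph` calls are commented out because they require a live Subgraph Studio account.

```shell
DEPLOY_KEY="0000000000000000000000000000000000000000"  # placeholder, not a real key
SLUG="my-subgraph"
VERSION_LABEL="v0.0.1"   # the CLI prompts for a version label on every deploy

# Authenticate once per machine, then deploy (requires graph-cli):
#   graph auth "$DEPLOY_KEY"
#   graph deploy "$SLUG" --version-label "$VERSION_LABEL"

echo "deploying $SLUG as $VERSION_LABEL"
```

Semver-style labels such as `v0.0.1` keep successive Studio versions easy to compare.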
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Archiviazione automatica delle versioni del subgraph -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/it/subgraphs/developing/developer-faq.mdx b/website/src/pages/it/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/it/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/it/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? 
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. 
However, this is not recommended, as performance will be significantly slower.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.

-### 4. Can I change the GitHub account associated with my subgraph?
+### 4. Can I change the GitHub account associated with my Subgraph?

-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.

-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?

-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.

-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?

-You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.

-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7.
How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/it/subgraphs/developing/introduction.mdx b/website/src/pages/it/subgraphs/developing/introduction.mdx index 53060bdd4de4..70610ef84065 100644 --- a/website/src/pages/it/subgraphs/developing/introduction.mdx +++ b/website/src/pages/it/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. 
Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx index 90a2eb4b7d33..b8c2330ca49d 100644 --- a/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. 
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- I Curator non potranno più segnalare il subgraph. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. 
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 8706691669d1..1672a6619d13 100644 --- a/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/it/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Pubblicare un subgraph nella rete decentralizzata +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Aggiornamento dei metadati per un subgraph pubblicato +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png)

### Customizing your deployment

-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:

```
USAGE
@@ -61,33 +62,33 @@ FLAGS
```

-## Adding signal to your subgraph
+## Adding signal to your Subgraph

-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.

-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.

-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).

- Specific supported networks can be checked [here](/supported-networks/).

-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a Curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.

-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.

-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)

-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/it/subgraphs/developing/subgraphs.mdx b/website/src/pages/it/subgraphs/developing/subgraphs.mdx index a5d5fa16fd8e..7e6c212622d1 100644 --- a/website/src/pages/it/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/it/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
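As an illustration of the query step above, consuming a published Subgraph is a single GraphQL request. The entity and field names below are hypothetical — the actual queryable fields depend entirely on each Subgraph's schema:

```graphql
{
  tokens(first: 5, orderBy: id) {
    id
    owner
  }
}
```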
## Inside a Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.

-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).

## Ciclo di vita del subgraph

-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

## Subgraph Development

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/it/subgraphs/explorer.mdx b/website/src/pages/it/subgraphs/explorer.mdx index ef26a5b18543..5db7212c1fb0 100644 --- a/website/src/pages/it/subgraphs/explorer.mdx +++ b/website/src/pages/it/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Panoramica -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer

@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi

### Subgraphs Page

-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:

-- Your own finished subgraphs
+- Your own finished Subgraphs

- Subgraphs published by others

-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:

- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Segnala/non segnala i subgraph +- Signal/Un-signal on Subgraphs - Visualizza ulteriori dettagli, come grafici, ID di distribuzione corrente e altri metadati -- Cambia versione per esplorare le iterazioni passate del subgraph -- Consulta i subgraph tramite GraphQL -- Test dei subgraph nel playground -- Visualizza gli Indexer che stanno indicizzando su un determinato subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Statistiche del subgraph (allocazione, Curator, ecc.) -- Visualizza l'entità che ha pubblicato il subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - l'importo massimo di stake delegato che l'Indexer può accettare in modo produttivo. Uno stake delegato in eccesso non può essere utilizzato per l'allocazione o per il calcolo dei premi.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Scheda di subgraph -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Scheda di indicizzazione -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Questa sezione include anche i dettagli sui compensi netti degli Indexer e sulle tariffe nette di query. 
Verranno visualizzate le seguenti metriche:

@@ -223,13 +223,13 @@ Tenete presente che questo grafico è scorrevole orizzontalmente, quindi se scor

### Scheda di curation

-Nella scheda di Curation si trovano tutti i subgraph sui quali si sta effettuando una segnalazione (che consente di ricevere commissioni della query). La segnalazione consente ai curator di evidenziare agli Indexer quali subgraph sono di valore e affidabili, segnalando così la necessità di indicizzarli.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they should be indexed.

All'interno di questa scheda è presente una panoramica di:

-- Tutti i subgraph su cui si effettua la curation con i dettagli del segnale
-- Totali delle quote per subgraph
-- Ricompense della query per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Aggiornamento attuale dei dettagli

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/it/subgraphs/guides/arweave.mdx b/website/src/pages/it/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..e59abffa383f
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently; this is the main difference between Arweave and IPFS, which lacks permanence. Files stored on Arweave can't be changed or deleted.
+
+Numerous libraries for integrating the Arweave protocol have already been built in a number of different programming languages. For more information, you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not indexing the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. 
AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have defined.
+
+During Subgraph development there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the AssemblyScript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. 
No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`
+
+> The source.owner can be the owner's address or their public key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transactions receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. 
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The source.owner can be the user's public key or account address.
+
+### What is the current encoding format?
+
+Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
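As a quick illustration of the hex-versus-base64url difference described above, the same conversion can be checked outside of mappings with Node.js's built-in `Buffer` (the 4-byte id below is made up for the example; real Arweave block and transaction hashes are much longer):

```typescript
// A made-up 4-byte id; real Arweave hashes are 32 or more bytes.
const idBytes = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);

// Hex is how raw Bytes appear in a Subgraph by default.
const hex = Buffer.from(idBytes).toString("hex");

// base64url is what explorers such as Arweave Explorer display.
const b64url = Buffer.from(idBytes).toString("base64url");

console.log(hex);    // deadbeef
console.log(b64url); // 3q2-7w
```

Note that Node's `base64url` encoding also omits the `=` padding, matching the url-safe branch of the helper below.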
+ +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (!urlSafe && i === l) { // 2 octets yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; +} +``` diff --git a/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..42d80c795662 --- /dev/null +++ 
b/website/src/pages/it/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@ +--- +title: Smart Contract Analysis with Cana CLI +--- + +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. + +## Overview + +**Cana CLI** is a command-line tool that streamlines the analysis of smart contract metadata relevant to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of the chains already added, run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. 
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +or + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/it/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..9f55ae07c54b --- /dev/null +++ b/website/src/pages/it/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a data type that lets you define a fixed set of allowed values.
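The same guarantee exists in most typed languages, which can make the idea concrete. Here's a quick sketch in plain TypeScript (not Subgraph code, purely illustrative):

```typescript
// An enum restricts a value to a fixed set of members, so a typo like
// 'Orgnalowner' becomes a compile-time error instead of bad data.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

function describeStatus(status: TokenStatus): string {
  return `Token status: ${status}`
}

console.log(describeStatus(TokenStatus.SecondOwner)) // prints "Token status: SecondOwner"
```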
+ +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. 
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. + +### With Enums + +Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. + +Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. + +## Defining Enums for NFT Marketplaces + +> Note: The following guide uses the CryptoCoven NFT smart contract. + +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: + +```gql +# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) +enum Marketplace { + OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace + OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace + SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace + LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace + # ...and other marketplaces +} +``` + +## Using Enums for NFT Marketplaces + +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. + +For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
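In a mapping, that typically means storing the enum's string value on the entity when handling a sale event. The sketch below is illustrative: the `Sale` entity and `SaleEvent` type are assumptions, not part of the guide's actual contract.

```ts
// Hypothetical handler: records which marketplace a CryptoCoven sale came from.
// Assumes a `Sale` entity with a `marketplace: Marketplace!` field in the schema.
export function handleSale(event: SaleEvent): void {
  let sale = new Sale(event.transaction.hash.toHexString())
  sale.marketplace = 'SeaPort' // must be one of the enum's values, or the store rejects it
  sale.save()
}
```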
+ +### Implementing a Function for NFT Marketplaces + +Here's how you can implement a function to retrieve the marketplace name from the enum as a string: + +```ts +export function getMarketplaceName(marketplace: Marketplace): string { + // Using if-else statements to map the enum value to a string + if (marketplace === Marketplace.OpenSeaV1) { + return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation + } else if (marketplace === Marketplace.OpenSeaV2) { + return 'OpenSeaV2' + } else if (marketplace === Marketplace.SeaPort) { + return 'SeaPort' // If the marketplace is SeaPort, return its string representation + } else if (marketplace === Marketplace.LooksRare) { + return 'LooksRare' // If the marketplace is LooksRare, return its string representation + // ... and other marketplaces + } + return 'Unknown' // Fallback so every code path returns a string +} +``` + +## Best Practices for Using Enums + +- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. +- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. +- **Documentation:** Add comments to enums to clarify their purpose and usage. + +## Using Enums in Queries + +Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. + +**Specifics** + +- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. +- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. + +### Sample Queries + +#### Query 1: Account With The Highest NFT Marketplace Interactions + +This query does the following: + +- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. 
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/it/subgraphs/guides/grafting.mdx b/website/src/pages/it/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..d9abe0e70d2a --- /dev/null +++ b/website/src/pages/it/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@ +--- +title: Replace a Contract and Keep its History With Grafting +--- + +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. + +## What is Grafting? + +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +For more information, you can check: + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. + +## Important Note on Grafting When Upgrading to the Network + +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network + +### Why Is This Important? 
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It returns something like this: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + } + ] + } +} +``` + +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. + +## Deploying the Grafting Subgraph + +The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. 
If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It should return the following: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + }, + { + "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000", + "amount": "0", + "when": "1716429732" + } + ] + } +} +``` + +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. + +Congrats! You have successfully grafted a Subgraph onto another Subgraph. 
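As an extra sanity check after grafting, you can query the deployment's `_meta` field to confirm it has advanced past the graft block and reports no indexing errors:

```graphql
{
  _meta {
    hasIndexingErrors
    block {
      number
    }
  }
}
```

If `block.number` is greater than the graft `block` (5956000 in this example) and `hasIndexingErrors` is `false`, the graft is healthy.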
+ +## Additional Resources + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/it/subgraphs/guides/near.mdx b/website/src/pages/it/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..baa5bcc79157 --- /dev/null +++ b/website/src/pages/it/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Building Subgraphs on NEAR +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. 
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
+ +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### Subgraph Manifest Definition + +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the AssemblyScript mappings +``` + +- NEAR Subgraphs introduce a new `kind` of data source (`near`) +- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with one of the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. 
If only a list of prefixes or suffixes is necessary the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
+ +Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). + +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". + +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: + +```sh +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend on where the Subgraph is being deployed. + +### Subgraph Studio + +```sh +graph auth +graph deploy +``` + +### Local Graph Node (based on default configuration) + +```sh +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +``` + +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured + +We will provide more information on running the above components soon. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/it/subgraphs/guides/polymarket.mdx b/website/src/pages/it/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+![Polymarket Playground](/img/Polymarket-playground.png)
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example sends a query to the same Polymarket endpoint from Node.js and prints the returned data.
+
+### Sample Code from Node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..b247912c90e6
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys, since React's code is executed on the client side, exposing the API key in the request headers. Next.js Server Components address this issue by handling sensitive operations server-side.
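To make the exposure concrete, here is a minimal sketch (the helper function is hypothetical and the Subgraph ID is just a placeholder) of how a client-side query URL embeds the key. Anything interpolated like this ships in the browser bundle, where every visitor can read it:

```javascript
// Hypothetical illustration: building the gateway URL the way client-side
// code would. The interpolated key ends up in the JavaScript bundle and in
// every request the browser sends, so it is visible to anyone using DevTools.
function buildGatewayUrl(apiKey, subgraphId) {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

const url = buildGatewayUrl('my-secret-key', 'HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B')
console.log(url.includes('my-secret-key')) // prints: true — the secret sits in plain text
```

Moving this logic into a server component keeps `process.env.API_KEY` on the server, so the string above is never constructed in the browser.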
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api_key_here>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <div>
+      <ServerComponent />
+    </div>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..51d882cda5e9
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: 
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use additional aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Getting Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
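As a rough sketch, a cost-tracking source entity of this kind could look like the following (the field names are illustrative assumptions, not the example repository's actual schema; note the `immutable: true` requirement from the prerequisites above):

```graphql
type Block @entity(immutable: true) {
  id: Bytes!
  number: BigInt!
  gasUsed: BigInt!
  burntFees: BigInt!
}
```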
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying development and improving maintenance efficiency.
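As a hedged sketch of how a composed Subgraph wires in a source Subgraph (the deployment ID, names, and handler are placeholders, and the exact manifest keys may differ between graph-node releases — check the release notes linked in the prerequisites), the manifest declares a data source of `kind: subgraph` roughly like this:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime # placeholder name for the source Subgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # placeholder: the source Subgraph's deployment ID
      startBlock: 0
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - Block
      handlers:
        - handler: handleBlock # hypothetical handler triggered by the source's entities
          entity: Block
```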
+ +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/it/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/it/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..91aa7484d2ec --- /dev/null +++ b/website/src/pages/it/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Quick and Easy Subgraph Debugging Using Forks +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! + +## Ok, what is it? + +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. + +## What?! How? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3. 
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After making the changes, I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/it/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/it/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..a08e2a7ad8c9
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warnings are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..ca66ccfd91f8
--- /dev/null
+++ b/website/src/pages/it/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy <subgraph-name> --ipfs-hash <your-subgraph-ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/it/subgraphs/querying/best-practices.mdx b/website/src/pages/it/subgraphs/querying/best-practices.mdx
index c797e432ac0b..d4bb8b226105 100644
--- a/website/src/pages/it/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/it/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Querying Best Practices

The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.

-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.

---

@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi

However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:

-- Gestione dei subgraph a cross-chain: effettuare query di più subgraph in un'unica query
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Risultato completamente tipizzato

@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `

### Use a single query to request multiple records

-By default, subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`

Example of inefficient querying:

diff --git a/website/src/pages/it/subgraphs/querying/from-an-application.mdx b/website/src/pages/it/subgraphs/querying/from-an-application.mdx
index d2ac36f09846..d5b632cd6f90 100644
--- a/website/src/pages/it/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/it/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Eseguire una query da un'applicazione
+sidebarTitle: Querying from an App
---

Learn how to query The Graph from your application.

@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
```

@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network.
It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. ## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Gestione dei subgraph a cross-chain: effettuare query di più subgraph in un'unica query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Risultato completamente tipizzato @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/it/subgraphs/querying/graph-client/README.md b/website/src/pages/it/subgraphs/querying/graph-client/README.md index 416cadc13c6f..bcbf74973703 100644 --- a/website/src/pages/it/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/it/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | 
---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing 
multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Per cominciare You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Esempi You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/it/subgraphs/querying/graph-client/live.md b/website/src/pages/it/subgraphs/querying/graph-client/live.md index e6f726cb4352..1a899ac2dcbf 100644 --- a/website/src/pages/it/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/it/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Per cominciare Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/it/subgraphs/querying/graphql-api.mdx b/website/src/pages/it/subgraphs/querying/graphql-api.mdx index 45100b8f6d68..29547f648dea 100644 --- a/website/src/pages/it/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/it/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -Questo può essere utile se si vuole recuperare solo le entità che sono cambiate, ad esempio dall'ultima volta che è stato effettuato il polling. In alternativa, può essere utile per indagare o fare il debug di come le entità stanno cambiando nel subgraph (se combinato con un filtro di blocco, è possibile isolare solo le entità che sono cambiate in un blocco specifico). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Query di ricerca fulltext -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.

Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.

Operatori di ricerca fulltext:

-| Simbolo | Operatore | Descrizione |
-| --- | --- | --- |
-| `&` | `And` | Per combinare più termini di ricerca in un filtro per le entità che includono tutti i termini forniti |
-| | | `Or` | Le query con più termini di ricerca separati dall'operatore Or restituiranno tutte le entità con una corrispondenza tra i termini forniti |
-| `<->` | `Follow by` | Specifica la distanza tra due parole. |
-| `:*` | `Prefix` | Utilizzare il termine di ricerca del prefisso per trovare le parole il cui prefisso corrisponde (sono richiesti 2 caratteri.) |
+| Simbolo | Operatore   | Descrizione                                                                                                                               |
+| ------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
+| `&`     | `And`       | Per combinare più termini di ricerca in un filtro per le entità che includono tutti i termini forniti                                     |
+| `\|`    | `Or`        | Le query con più termini di ricerca separati dall'operatore Or restituiranno tutte le entità con una corrispondenza tra i termini forniti |
+| `<->`   | `Follow by` | Specifica la distanza tra due parole.                                                                                                     |
+| `:*`    | `Prefix`    | Utilizzare il termine di ricerca del prefisso per trovare le parole il cui prefisso corrisponde (sono richiesti 2 caratteri.)             |

#### Esempi

@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

The schema of your dataSources, i.e.
the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en

### Metadati del Subgraph

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:

```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-Se viene fornito un blocco, i metadati si riferiscono a quel blocco, altrimenti viene utilizzato il blocco indicizzato più recente. Se fornito, il blocco deve essere successivo al blocco iniziale del subgraph e inferiore o uguale al blocco indicizzato più recente.
+If a block is provided, the metadata is as of that block; if not, the latest indexed block is used.
If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ Se viene fornito un blocco, i metadati si riferiscono a quel blocco, altrimenti - hash: l'hash del blocco - numero: il numero del blocco -- timestamp: il timestamp del blocco, se disponibile (attualmente è disponibile solo per i subgraph che indicizzano le reti EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/it/subgraphs/querying/introduction.mdx b/website/src/pages/it/subgraphs/querying/introduction.mdx index c2ebb666bfce..26330f644563 100644 --- a/website/src/pages/it/subgraphs/querying/introduction.mdx +++ b/website/src/pages/it/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Panoramica -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. 
You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx index ea42572de442..fc4ebe1f3daf 100644 --- a/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/it/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Gestione delle chiavi API +title: Managing API keys --- ## Panoramica -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. 
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Importo di GRT speso 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Visualizzare e gestire i nomi di dominio autorizzati a utilizzare la chiave API - - Assegnare i subgraph che possono essere interrogati con la chiave API + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/it/subgraphs/querying/python.mdx b/website/src/pages/it/subgraphs/querying/python.mdx index 55cae50be8a9..c289ab7ea6b0 100644 --- a/website/src/pages/it/subgraphs/querying/python.mdx +++ b/website/src/pages/it/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds è una libreria Python intuitiva per query dei subgraph, realizzata da [Playgrounds](https://playgrounds.network/). Permette di collegare direttamente i dati dei subgraph a un ambiente dati Python, consentendo di utilizzare librerie come [pandas](https://pandas.pydata.org/) per eseguire analisi dei dati! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! 
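As background for the Subgrounds description above: a Subgraph query is ultimately just a GraphQL document posted over HTTP as JSON, and that is the plumbing Subgrounds builds for you. The sketch below is a hypothetical illustration (the field names mirror the Aave v2 example used on this page) and makes no network call:

```python
import json

# Hypothetical illustration of what a Subgrounds field selection compiles
# down to: a plain GraphQL document for the top-5 markets by TVL.
query = """
{
  markets(orderBy: totalValueLockedUSD, orderDirection: desc, first: 5) {
    name
    totalValueLockedUSD
  }
}
"""

# GraphQL over HTTP: the document is sent as JSON under the "query" key.
payload = json.dumps({"query": query})

print("markets" in payload)  # → True
```

Subgrounds hides this request construction entirely, along with pagination and schema handling.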
Subgrounds offre una semplice API Pythonic per la creazione di query GraphQL, automatizza i flussi di lavoro più noiosi come la paginazione, e dà agli utenti avanzati la possibilità di effettuare trasformazioni controllate dello schema. @@ -17,24 +17,24 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Una volta installato, è possibile testare subgrounds con la seguente query. L'esempio seguente prende un subgraph per il protocollo Aave v2 e effettua query dei primi 5 mercati ordinati per TVL (Total Value Locked), seleziona il loro nome e il loro TVL (in USD) e restituisce i dati come pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python -da subgrounds import Subgrounds +from subgrounds import Subgrounds sg = Subgrounds() -# Caricare il subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# Costruire la query +# Construct the query latest_markets = aave_v2.Query.markets( orderBy=aave_v2.Market.totalValueLockedUSD, orderDirection='desc', first=5, ) -# Restituire la query a un dataframe +# Return query to a dataframe sg.query_df([ latest_markets.name, latest_markets.totalValueLockedUSD, diff --git a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/it/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).

-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over the Subgraph version being queried. However, it also means the query code must be updated manually every time a new version of the Subgraph is published.

Example endpoint that uses Deployment ID:

@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:

## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/it/subgraphs/quick-start.mdx b/website/src/pages/it/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/it/subgraphs/quick-start.mdx +++ b/website/src/pages/it/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". 
It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. 
+- **Directory**: Choose a directory to create your Subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.

-See the following screenshot for an example for what to expect when initializing your subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

![Subgraph command](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Edit your Subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.

-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. 
-Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. 
Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3.
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! 
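Under the hood, each of those free queries is an HTTP POST carrying a JSON-encoded GraphQL document to the Subgraph's Query URL. A minimal Python sketch of building that request body (the `tokens` entity is a hypothetical placeholder; real field names come from your `schema.graphql`, and the URL in the comment is a placeholder to substitute):

```python
import json
from urllib import request


def build_graphql_payload(query, variables=None):
    """Encode a GraphQL query as the JSON body a Subgraph's Query URL expects."""
    body = {"query": query}
    if variables is not None:
        body["variables"] = variables
    return json.dumps(body).encode("utf-8")


# Hypothetical entity query; adapt to the entities defined in your schema.
QUERY = "{ tokens(first: 5) { id } }"
payload = build_graphql_payload(QUERY)

# Posting it (commented out here; substitute your Subgraph's real Query URL):
# req = request.Request("https://<your-query-url>", data=payload,
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read())
print(json.loads(payload)["query"])
```

Any HTTP client works the same way; the only requirement is a JSON body with a `query` field (plus optional `variables`).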
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/it/substreams/developing/dev-container.mdx b/website/src/pages/it/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/it/substreams/developing/dev-container.mdx +++ b/website/src/pages/it/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. 
For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/it/substreams/developing/sinks.mdx b/website/src/pages/it/substreams/developing/sinks.mdx index 4689f71ab6a2..a8868824aa83 100644 --- a/website/src/pages/it/substreams/developing/sinks.mdx +++ b/website/src/pages/it/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | 
[substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/it/substreams/developing/solana/account-changes.mdx b/website/src/pages/it/substreams/developing/solana/account-changes.mdx index 6f19b0c346e3..8a1bdb86a7b4 100644 --- a/website/src/pages/it/substreams/developing/solana/account-changes.mdx +++ 
b/website/src/pages/it/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/it/substreams/developing/solana/transactions.mdx b/website/src/pages/it/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/it/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/it/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/it/substreams/introduction.mdx b/website/src/pages/it/substreams/introduction.mdx index 9cda1108f1a6..a4c2a11de271 100644 --- a/website/src/pages/it/substreams/introduction.mdx +++ b/website/src/pages/it/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/it/substreams/publishing.mdx b/website/src/pages/it/substreams/publishing.mdx index d8904a49d38d..31a4461815a5 100644 --- a/website/src/pages/it/substreams/publishing.mdx +++ b/website/src/pages/it/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/it/supported-networks.mdx b/website/src/pages/it/supported-networks.mdx index 02e45c66ca42..ef2c28393033 100644 --- a/website/src/pages/it/supported-networks.mdx +++ b/website/src/pages/it/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Supported Networks hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/it/token-api/_meta-titles.json b/website/src/pages/it/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/it/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/it/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
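As a sketch of how a call to this endpoint is assembled — the `/balances/evm/{address}` path, Bearer-token auth, and the optional `network_id` parameter are the conventions described in the Token API FAQ; the helper itself is hypothetical:

```python
from urllib.parse import urlencode

BASE_URL = "https://token-api.thegraph.com"


def balances_request(address, access_token, network_id=None):
    """Build the URL and headers for a GET /balances/evm/{address} lookup."""
    # The API accepts addresses with or without the 0x prefix; normalize anyway.
    addr = address if address.startswith("0x") else "0x" + address
    url = f"{BASE_URL}/balances/evm/{addr}"
    if network_id is not None:
        # e.g. "mainnet", "bsc", "base"; omitted => defaults to Ethereum mainnet.
        url += "?" + urlencode({"network_id": network_id})
    headers = {"Authorization": f"Bearer {access_token}", "Accept": "application/json"}
    return url, headers


# "ab" * 20 is a dummy 40-hex-digit wallet; use a real address and JWT in practice.
url, headers = balances_request("ab" * 20, "<jwt-from-the-graph-market>", "mainnet")
print(url)
```

Issuing the GET with any HTTP client then returns the wallet's current native and ERC-20 balances inside the top-level `data` array.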
diff --git a/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/it/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. diff --git a/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/it/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV Prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/it/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
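Amount-like fields such as the supply are returned as strings to avoid precision loss, while `decimals` is a plain number (see the Token API FAQ). A sketch of scaling one into a human-readable value — the `circulating_supply` field name here is a hypothetical example, not taken from the API schema:

```python
from decimal import Decimal


def to_human_units(raw, decimals):
    """Scale a raw string amount by the token's decimals, without float precision loss."""
    return Decimal(raw) / (Decimal(10) ** decimals)


# Hypothetical metadata entry for an 18-decimal ERC-20.
entry = {"circulating_supply": "1000000000000000000000", "decimals": 18}
print(to_human_units(entry["circulating_supply"], entry["decimals"]))  # prints 1000
```

Using `Decimal` (or any big-number type) rather than `float` matters because raw amounts routinely exceed the range of exactly representable integers in double precision.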
diff --git a/website/src/pages/it/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/it/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/it/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/it/token-api/faq.mdx b/website/src/pages/it/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/it/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
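A small helper can guard against those common mistakes when constructing the header — this is a hypothetical convenience function, not part of any SDK:

```python
def auth_header(jwt):
    """Build the Authorization header, avoiding common 401/403 pitfalls."""
    token = jwt.strip()
    if not token:
        raise ValueError("empty token: use the JWT access token, not the raw API key")
    # Don't double-prefix if "Bearer " was already pasted in with the token.
    if token.lower().startswith("bearer "):
        token = token[len("bearer "):]
    return {"Authorization": f"Bearer {token}"}


print(auth_header("Bearer my.jwt.token"))  # prints {'Authorization': 'Bearer my.jwt.token'}
```

Pass the returned dict as the headers of every request to the Token API.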
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/it/token-api/mcp/claude.mdx b/website/src/pages/it/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..12a036b6fc24 --- /dev/null +++ b/website/src/pages/it/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file.
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `~/.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/it/token-api/mcp/cline.mdx b/website/src/pages/it/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..ef98e45939fe --- /dev/null +++ b/website/src/pages/it/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/it/token-api/mcp/cursor.mdx b/website/src/pages/it/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..658108d1337b --- /dev/null +++ b/website/src/pages/it/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<jwt-token>" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
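As a sketch of the troubleshooting steps above, assuming `which bunx` printed `/home/user/bin/bunx` (an illustrative path; yours will differ), the `mcp.json` entry with the full path substituted and verbose logging enabled would look like this:

```json
{
  "mcpServers": {
    "mcp-pinax": {
      "command": "/home/user/bin/bunx",
      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse", "--verbose", "true"],
      "env": {
        "ACCESS_TOKEN": "<jwt-token>"
      }
    }
  }
}
```

You may need to restart Cursor after saving the file so the updated server definition is picked up.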
diff --git a/website/src/pages/it/token-api/monitoring/get-health.mdx b/website/src/pages/it/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/it/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/it/token-api/monitoring/get-networks.mdx b/website/src/pages/it/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/it/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/it/token-api/monitoring/get-version.mdx b/website/src/pages/it/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/it/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/it/token-api/quick-start.mdx b/website/src/pages/it/token-api/quick-start.mdx new file mode 100644 index 000000000000..4653c3d41ac6 --- /dev/null +++ b/website/src/pages/it/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Quick Start +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer <jwt-token>`. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer <jwt-token>', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `<jwt-token>` with the JWT token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command.
+ +```bash +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer <jwt-token>' +``` + +Make sure to replace `<jwt-token>` with the JWT token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/ja/about.mdx b/website/src/pages/ja/about.mdx index c867800369a3..b4462cd3c1c8 100644 --- a/website/src/pages/ja/about.mdx +++ b/website/src/pages/ja/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs.
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
![グラフがグラフ ノードを使用してデータ コンシューマーにクエリを提供する方法を説明する図](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. Dapp は、スマート コントラクトのトランザクションを通じて Ethereum にデータを追加します。 2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 -3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 -4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dapp は、ノードの [GraphQL エンドポイント](https://graphql.org/learn/) を使用して、ブロックチェーンからインデックス付けされたデータをグラフ ノードに照会します。グラフ ノードは、ストアのインデックス作成機能を利用して、このデータを取得するために、GraphQL クエリを基盤となるデータ ストアのクエリに変換します。 dapp は、このデータをエンドユーザー向けの豊富な UI に表示し、エンドユーザーはそれを使用して Ethereum で新しいトランザクションを発行します。サイクルが繰り返されます。 ## 次のステップ -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. 
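As an illustrative example of what you can run in such a playground, a query against a Subgraph whose schema defines a `Token` entity (a hypothetical schema; entity and field names vary per Subgraph) might look like:

```graphql
{
  tokens(first: 5, orderBy: id) {
    id
    symbol
    decimals
  }
}
```

The playground autocompletes fields from the Subgraph's schema, so you can explore the available entities interactively.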
diff --git a/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx index 3ab2bdbbf83b..cc0c098f0af1 100644 --- a/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ja/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - イーサリアムから継承したセキュリティ -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph コミュニティは、[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) の議論の結果を受けて、昨年 Arbitrum を進めることを決定しました。 @@ -39,7 +39,7 @@ L2でのThe Graphの活用には、このドロップダウンスイッチャー ![Arbitrum を切り替えるドロップダウン スイッチャー](/img/arbitrum-screenshot-toggle.png) -## サブグラフ開発者、データ消費者、インデクサー、キュレーター、デリゲーターは何をする必要がありますか? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. 
Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto すべてが徹底的にテストされており、安全かつシームレスな移行を保証するための緊急時対応計画が整備されています。詳細は[here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)をご覧ください。 -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx index 70999970ca9a..32be44b363b9 100644 --- a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ EthereumやArbitrumのようなEVMブロックチェーン上のウォレット L2転送ツールは、アービトラムのネイティブメカニズムを使用してL1からL2にメッセージを送信します。このメカニズムは「再試行可能チケット」と呼ばれ、Arbitrum GRTブリッジを含むすべてのネイティブトークンブリッジで使用されます。再試行可能なチケットの詳細については、[アービトラムドキュメント](https://docs.arbitrum.io/arbos/l1 からl2へのメッセージング)を参照してください。 -資産(サブグラフ、ステーク、委任、またはキュレーション)をL2に転送する際、Arbitrum GRTブリッジを介してメッセージが送信され、L2でretryable ticketが作成されます。転送ツールにはトランザクションに一部のETHが含まれており、これは1)チケットの作成に支払われ、2)L2でのチケットの実行に必要なガスに使用されます。ただし、チケットがL2で実行可能になるまでの時間でガス料金が変動する可能性があるため、この自動実行試行が失敗することがあります。その場合、Arbitrumブリッジはretryable ticketを最大7日間保持し、誰でもそのチケットを「償還」しようと再試行できます(これにはArbitrumにブリッジされた一部のETHを持つウォレットが必要です)。 +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which 
creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, which is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -これは、すべての転送ツールで「確認」ステップと呼んでいるものです。ほとんどの場合、自動実行は成功するため、自動的に実行されますが、確認が完了したことを確認するために戻ってチェックすることが重要です。成功せず、7日間で成功した再試行がない場合、Arbitrumブリッジはそのチケットを破棄し、あなたの資産(サブグラフ、ステーク、委任、またはキュレーション)は失われ、回復できません。The Graphのコア開発者は、これらの状況を検出し、遅すぎる前にチケットを償還しようとする監視システムを設置していますが、最終的には転送が時間内に完了することを確認する責任があなたにあります。トランザクションの確認に問題がある場合は、[this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) を使用して連絡し、コア開発者が助けてくれるでしょう。 +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### 委任/ステーク/キュレーション転送を開始しましたが、L2 まで転送されたかどうかわかりません。正しく転送されたことを確認するにはどうすればよいですか?
@@ -36,43 +36,43 @@ L1トランザクションのハッシュを持っている場合(これはウ ## 部分グラフの転送 -### サブグラフを転送するにはどうすればよいですか? +### How do I transfer my Subgraph? -サブグラフを転送するには、次の手順を完了する必要があります。 +To transfer your Subgraph, you will need to complete the following steps: 1. イーサリアムメインネットで転送を開始する 2. 確認を待つために20分お待ちください。 -3. Arbitrum でサブグラフ転送を確認します\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum でサブグラフの公開を完了する +4. Finish publishing Subgraph on Arbitrum 5. クエリ URL を更新 (推奨) -\*注意:7日以内に転送を確認する必要があります。それ以外の場合、サブグラフが失われる可能性があります。ほとんどの場合、このステップは自動的に実行されますが、Arbitrumでガス価格が急上昇した場合には手動で確認する必要があるかもしれません。このプロセス中に問題が発生した場合、サポートを受けるためのリソースが用意されています:support@thegraph.com に連絡するか、[Discord](https://discord.gg/graphprotocol)でお問い合わせください\。 +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### どこから転送を開始すればよいですか? -トランスファーを開始するには、[Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer)またはサブグラフの詳細ページからトランスファーを開始できます。サブグラフの詳細ページで「サブグラフを転送」ボタンをクリックしてトランスファーを開始してください。 +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### サブグラフが転送されるまでどれくらい待つ必要がありますか +### How long do I need to wait until my Subgraph is transferred トランスファーには約20分かかります。Arbitrumブリッジはバックグラウンドでブリッジトランスファーを自動的に完了します。一部の場合、ガス料金が急上昇する可能性があり、トランザクションを再度確認する必要があるかもしれません。 -### 私のサブグラフは L2 に転送した後も検出可能ですか? +### Will my Subgraph still be discoverable after I transfer it to L2? 
-あなたのサブグラフは、それが公開されたネットワーク上でのみ発見できます。たとえば、あなたのサブグラフがArbitrum Oneにある場合、それはArbitrum OneのExplorerでのみ見つけることができ、Ethereum上では見つけることはできません。正しいネットワークにいることを確認するために、ページの上部にあるネットワーク切り替えツールでArbitrum Oneを選択していることを確認してください。トランスファー後、L1サブグラフは非推奨として表示されます。 +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### 私のサブグラフを転送するには公開する必要がありますか? +### Does my Subgraph need to be published to transfer it? -サブグラフ転送ツールを活用するには、サブグラフがすでにEthereumメインネットに公開され、そのサブグラフを所有するウォレットが所有するキュレーション信号を持っている必要があります。サブグラフが公開されていない場合、Arbitrum Oneに直接公開することをお勧めします。関連するガス料金はかなり低くなります。公開されたサブグラフを転送したいが、所有者のアカウントがそれに対してキュレーション信号を出していない場合、そのアカウントから少額(たとえば1 GRT)の信号を送ることができます。必ず「auto-migrating(自動移行)」信号を選択してください。 +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Arbitrumへの転送後、Ethereumメインネットバージョンの私のサブグラフはどうなりますか? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -サブグラフをArbitrumに転送した後、Ethereumメインネットワークのバージョンは非推奨とされます。おすすめでは、48時間以内にクエリURLを更新することをお勧めしています。ただし、サードパーティのDAppサポートが更新されるために、メインネットワークのURLが機能し続ける猶予期間も設けられています。 +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. 
We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### 転送後、Arbitrum上で再公開する必要がありますか? @@ -80,21 +80,21 @@ L1トランザクションのハッシュを持っている場合(これはウ ### 再公開中にエンドポイントでダウンタイムが発生しますか? -短期間のダウンタイムを経験する可能性は低いですが、L1でサブグラフをサポートしているインデクサーと、サブグラフが完全にL2でサポートされるまでインデクシングを続けるかどうかに依存することがあります。 +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### L2上での公開とバージョニングは、Ethereumメインネットと同じですか? -はい、Subgraph Studioで公開する際には、公開ネットワークとしてArbitrum Oneを選択してください。Studioでは、最新のエンドポイントが利用可能で、最新の更新されたサブグラフバージョンを指します。 +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### 私のサブグラフのキュレーションは、サブグラフと一緒に移動しますか? +### Will my Subgraph's curation move with my Subgraph? -自動移行信号を選択した場合、あなたのキュレーションの100%はサブグラフと一緒にArbitrum Oneに移行します。サブグラフのすべてのキュレーション信号は、転送時にGRTに変換され、あなたのキュレーション信号に対応するGRTがL2サブグラフ上で信号を発行するために使用されます。 +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -他のキュレーターは、自分の一部のGRTを引き出すか、それをL2に転送して同じサブグラフで信号を発行するかを選択できます。 +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### 転送後にサブグラフをEthereumメインネットに戻すことはできますか? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? 
-一度転送されると、Ethereumメインネットワークのサブグラフバージョンは非推奨とされます。メインネットワークに戻りたい場合、再デプロイしてメインネットワークに再度公開する必要があります。ただし、Ethereumメインネットワークに戻すことは強く勧められていません。なぜなら、将来的にはインデクシングリワードが完全にArbitrum Oneで分配されるためです。 +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### なぜ転送を完了するためにブリッジされたETHが必要なのですか? @@ -206,19 +206,19 @@ Indexerに連絡できる場合、彼らにL2トランスファーツールを \*必要な場合 - つまり、契約アドレスを使用している場合。 -### 私がキュレーションしたサブグラフが L2 に移動したかどうかはどうすればわかりますか? +### How will I know if the Subgraph I curated has moved to L2? -サブグラフの詳細ページを表示すると、このサブグラフが転送されたことを通知するバナーが表示されます。バナーに従ってキュレーションを転送できます。また、移動したサブグラフの詳細ページでもこの情報を見つけることができます。 +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### 自分のキュレーションを L2 に移動したくない場合はどうすればよいですか? -サブグラフが非推奨になった場合、信号を引き出すオプションがあります。同様に、サブグラフがL2に移動した場合、Ethereumメインネットワークで信号を引き出すか、L2に送信することを選択できます。 +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### 私のキュレーションが正常に転送されたことを確認するにはどうすればよいですか? L2トランスファーツールを開始してから約20分後、Explorerを介して信号の詳細にアクセスできるようになります。 -### 一度に複数のサブグラフへキュレーションを転送することはできますか? +### Can I transfer my curation on more than one Subgraph at a time? 現時点では一括転送オプションは提供されていません。 @@ -266,7 +266,7 @@ L2トランスファーツールがステークの転送を完了するのに約 ### 株式を譲渡する前に、Arbitrum でインデックスを作成する必要がありますか? 
-インデクシングのセットアップよりも先にステークを効果的に転送できますが、L2でのサブグラフへの割り当て、それらのサブグラフのインデクシング、およびPOIの提出を行うまで、L2での報酬を請求することはできません。 +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### 委任者は、インデックス作成の賭け金を移動する前に委任を移動できますか? diff --git a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx index b77261989131..bc10b94ac149 100644 --- a/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ja/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 転送ツールガイド Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## サブグラフをアービトラムに転送する方法 (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## サブグラフを転送する利点 +## Benefits of transferring your Subgraphs グラフのコミュニティとコア開発者は、過去1年間、Arbitrumに移行する準備をしてきました(https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)。レイヤー2または「L2」ブロックチェーンであるアービトラムは、イーサリアムからセキュリティを継承しますが、ガス料金を大幅に削減します。 -サブグラフをThe Graph Networkに公開またはアップグレードする際には、プロトコル上のスマートコントラクトとやり取りするため、ETHを使用してガスを支払う必要があります。サブグラフをArbitrumに移動することで、将来のサブグラフのアップデートにかかるガス料金が大幅に削減されます。低い手数料と、L2のキュレーションボンディングカーブがフラットであるという点も、他のキュレーターがあなたのサブグラフをキュレーションしやすくし、サブグラフのインデクサーへの報酬を増加させます。この低コストな環境は、インデクサーがサブグラフをインデックス化して提供するコストも削減します。アービトラム上のインデックス報酬は今後数か月間で増加し、Ethereumメインネット上では減少する予定です。そのため、ますます多くのインデクサーがステークを転送し、L2での運用を設定していくことになるでしょう。 +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. 
The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## シグナル、L1サブグラフ、クエリURLで何が起こるかを理解する +## Understanding what happens with signal, your L1 Subgraph and query URLs -サブグラフをアービトラムに転送するには、アービトラムGRTブリッジが使用され、アービトラムGRTブリッジはネイティブアービトラムブリッジを使用してサブグラフをL2に送信します。「転送」はメインネット上のサブグラフを非推奨にし、ブリッジを使用してL2上のサブグラフを再作成するための情報を送信します。また、サブグラフ所有者のシグナル GRT も含まれ、ブリッジが転送を受け入れるには 0 より大きくなければなりません。 +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -サブグラフの転送を選択すると、サブグラフのすべてのキュレーション信号がGRTに変換されます。これは、メインネットのサブグラフを「非推奨」にすることと同じです。キュレーションに対応するGRTはサブグラフとともにL2に送信され、そこであなたに代わってシグナルを作成するために使用されます。 +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -他のキュレーターは、GRTの分数を引き出すか、同じサブグラフでシグナルをミントするためにL2に転送するかを選択できます。サブグラフの所有者がサブグラフをL2に転送せず、コントラクトコールを介して手動で非推奨にした場合、キュレーターに通知され、キュレーションを取り消すことができます。 +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. 
If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -サブグラフが転送されるとすぐに、すべてのキュレーションがGRTに変換されるため、インデクサーはサブグラフのインデックス作成に対する報酬を受け取らなくなります。ただし、1) 転送されたサブグラフを24時間提供し続け、2) L2でサブグラフのインデックス作成をすぐに開始するインデクサーがあります。これらのインデクサーには既にサブグラフのインデックスが作成されているため、サブグラフが同期するのを待つ必要はなく、ほぼ即座にL2サブグラフを照会できます。 +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -L2 サブグラフへのクエリは別の URL (「arbitrum-gateway.thegraph.com」) に対して実行する必要がありますが、L1 URL は少なくとも 48 時間は機能し続けます。その後、L1ゲートウェイはクエリをL2ゲートウェイに転送しますが(しばらくの間)、これにより遅延が増えるため、できるだけ早くすべてのクエリを新しいURLに切り替えることをお勧めします。 +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## L2ウォレットの選択 -メインネットでサブグラフを公開したときに、接続されたウォレットを使用してサブグラフを作成し、このウォレットはこのサブグラフを表すNFTを所有し、更新を公開できます。 +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -サブグラフをアービトラムに転送する場合、L2でこのサブグラフNFTを所有する別のウォレットを選択できます。 +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. 
MetaMaskのような "通常の" ウォレット(外部所有アカウントまたはEOA、つまりスマートコントラクトではないウォレット)を使用している場合、これはオプションであり、L1と同じ所有者アドレスを保持することをお勧めします。 -マルチシグ(Safeなど)などのスマートコントラクトウォレットを使用している場合、このアカウントはメインネットにのみ存在し、このウォレットを使用してアービトラムで取引を行うことができない可能性が高いため、別のL2ウォレットアドレスを選択する必要があります。スマートコントラクトウォレットまたはマルチシグを使い続けたい場合は、Arbitrumで新しいウォレットを作成し、そのアドレスをサブグラフのL2所有者として使用します。 +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -\*\*あなたが管理し、アービトラムで取引を行うことができるウォレットアドレスを使用することは非常に重要です。そうしないと、サブグラフが失われ、復元できません。 +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## 転送の準備: 一部のETHのブリッジング -サブグラフを転送するには、ブリッジを介してトランザクションを送信し、その後アービトラム上で別のトランザクションを実行する必要があります。最初のトランザクションでは、メインネット上のETHを使用し、L2でメッセージが受信される際にガスを支払うためにいくらかのETHが含まれています。ただし、このガスが不足している場合、トランザクションを再試行し、L2で直接ガスを支払う必要があります(これが下記の「ステップ3:転送の確認」です)。このステップは、転送を開始してから7日以内に実行する必要があります。さらに、2つ目のトランザクション(「ステップ4:L2での転送の完了」)は、直接アービトラム上で行われます。これらの理由から、アービトラムウォレットに一定のETHが必要です。マルチシグまたはスマートコントラクトアカウントを使用している場合、ETHはトランザクションを実行するために使用している通常の個人のウォレット(EOAウォレット)にある必要があり、マルチシグウォレットそのものにはないことに注意してください +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.

一部の取引所でETHを購入してアービトラムに直接引き出すか、アービトラムブリッジを使用してメインネットウォレットからL2にETHを送信することができます:[bridge.arbitrum.io](http://bridge.arbitrum.io)。アービトラムのガス料金は安いので、必要なのは少量だけです。トランザクションが承認されるには、低いしきい値(0.01 ETHなど)から始めることをお勧めします。

-## サブグラフ転送ツールの検索
+## Finding the Subgraph Transfer Tool

-L2転送ツールは、サブグラフスタジオでサブグラフのページを見ているときに見つけることができます。
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

![transfer tool](/img/L2-transfer-tool1.png)

-サブグラフを所有するウォレットに接続している場合は、エクスプローラーとエクスプローラーのそのサブグラフのページでも入手できます。
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

![Transferring to L2](/img/transferToL2.png)

@@ -60,19 +60,19 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ

## ステップ1: 転送を開始する

-転送を開始する前に、どのアドレスがL2のサブグラフを所有するかを決定する必要があり(上記の「L2ウォレットの選択」を参照)、ガス用のETHをアービトラムにすでにブリッジすることを強くお勧めします(上記の「転送の準備: ETHのブリッジング」を参照)。
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).

-また、サブグラフを転送するには、サブグラフを所有するのと同じアカウントを持つサブグラフにゼロ以外の量のシグナルが必要であることに注意してください。サブグラフでシグナルを出していない場合は、少しキュレーションを追加する必要があります(1 GRTのような少量を追加するだけで十分です)。
+Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). 
-「Transfer Tool」を開いた後、L2ウォレットアドレスを「受信ウォレットアドレス」フィールドに入力できるようになります。ここで正しいアドレスを入力していることを確認してください。「Transfer Subgraph」をクリックすると、ウォレット上でトランザクションを実行するよう求められます(注意:L2ガスの支払いに十分なETHの価値が含まれています)。これにより、トランスファーが開始され、L1サブグラフが廃止されます(詳細については、「背後で何が起こるか:シグナル、L1サブグラフ、およびクエリURLの理解」を参照してください)。 +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -このステップを実行する場合は、\*\*7日以内にステップ3を完了するまで続行してください。そうしないと、サブグラフとシグナルGRTが失われます。 これは、L1-L2メッセージングがアービトラムでどのように機能するかによるものです: ブリッジを介して送信されるメッセージは、7日以内に実行する必要がある「再試行可能なチケット」であり、アービトラムのガス価格に急上昇がある場合は、最初の実行で再試行が必要になる場合があります。 +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## ステップ2: サブグラフがL2に到達するのを待つ +## Step 2: Waiting for the Subgraph to get to L2 -転送を開始した後、L1サブグラフをL2に送信するメッセージは、アービトラムブリッジを介して伝播する必要があります。これには約20分かかります(ブリッジは、トランザクションを含むメインネットブロックが潜在的なチェーン再編成から「安全」になるまで待機します)。 +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). 
この待機時間が終了すると、アービトラムはL2契約の転送の自動実行を試みます。 @@ -80,7 +80,7 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## ステップ3: 転送の確認 -ほとんどの場合、ステップ1に含まれるL2ガスは、アービトラム契約のサブグラフを受け取るトランザクションを実行するのに十分であるため、このステップは自動実行されます。ただし、場合によっては、アービトラムのガス価格の急騰により、この自動実行が失敗する可能性があります。この場合、サブグラフをL2に送信する「チケット」は保留中であり、7日以内に再試行する必要があります。 +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. この場合、アービトラムにETHがあるL2ウォレットを使用して接続し、ウォレットネットワークをアービトラムに切り替え、[転送の確認] をクリックしてトランザクションを再試行する必要があります。 @@ -88,33 +88,33 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## ステップ4: L2での転送の完了 -この時点で、サブグラフとGRTはアービトラムで受信されましたが、サブグラフはまだ公開されていません。受信ウォレットとして選択したL2ウォレットを使用して接続し、ウォレットネットワークをArbitrumに切り替えて、[サブグラフの公開] をクリックする必要があります。 +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -これにより、アービトラムで動作しているインデクサーがサブグラフの提供を開始できるように、サブグラフが公開されます。また、L1から転送されたGRTを使用してキュレーションシグナルをミントします。 +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## ステップ 5: クエリ URL の更新 -サブグラフがアービトラムに正常に転送されました! サブグラフを照会するには、新しい URL は次のようになります: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-アービトラム上のサブグラフIDは、メインネット上でのものとは異なることに注意してください。ただし、エクスプローラやスタジオ上で常にそのIDを見つけることができます(詳細は「シグナル、L1サブグラフ、およびクエリURLの動作理解」を参照)。前述のように、古いL1 URLはしばらくの間サポートされますが、サブグラフがL2上で同期されたらすぐに新しいアドレスにクエリを切り替える必要があります。
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## キュレーションをアービトラム(L2) に転送する方法

-## L2へのサブグラフ転送のキュレーションに何が起こるかを理解する
+## Understanding what happens to curation on Subgraph transfers to L2

-サブグラフの所有者がサブグラフをアービトラムに転送すると、サブグラフのすべての信号が同時にGRTに変換されます。これは、「自動移行」シグナル、つまりサブグラフのバージョンまたはデプロイに固有ではないが、サブグラフの最新バージョンに従うシグナルに適用されます。
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-このシグナルからGRTへの変換は、サブグラフのオーナーがL1でサブグラフを非推奨にした場合と同じです。サブグラフが非推奨化または移管されると、すべてのキュレーションシグナルは同時に(キュレーションボンディングカーブを使用して)「燃やされ」、その結果得られるGRTはGNSスマートコントラクトに保持されます(これはサブグラフのアップグレードと自動移行されるシグナルを処理するコントラクトです)。そのため、そのサブグラフの各キュレーターは、所持していたシェアの量に比例したGRTの請求権を持っています。
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). 
Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-サブグラフの所有者に対応するこれらの GRT の一部は、サブグラフとともに L2 に送信されます。
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-この時点では、キュレートされたGRTはこれ以上のクエリ手数料を蓄積しません。したがって、キュレーターは自分のGRTを引き出すか、それをL2上の同じサブグラフに移動して新しいキュレーションシグナルを作成するために使用することができます。いつ行うかに関わらず、GRTは無期限に保持でき、すべての人が自分のシェアに比例した額を受け取ることができるため、急ぐ必要はありません。
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## L2ウォレットの選択

@@ -130,9 +130,9 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ

転送を開始する前に、L2上でキュレーションを所有するアドレスを決定する必要があります(上記の「L2ウォレットの選択」を参照)。また、L2でメッセージの実行を再試行する必要がある場合に備えて、ガスのためにすでにArbitrumにブリッジされたいくらかのETHを持つことをお勧めします。ETHをいくつかの取引所で購入し、それを直接Arbitrumに引き出すことができます。または、Arbitrumブリッジを使用して、メインネットのウォレットからL2にETHを送信することもできます: [bridge.arbitrum.io](http://bridge.arbitrum.io)。Arbitrumのガス料金が非常に低いため、0.01 ETHなどの少額で十分です。

-もしキュレーションしているサブグラフがL2に移行された場合、エクスプローラ上でそのサブグラフが移行されたことを示すメッセージが表示されます。
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.

-サブグラフのページを表示する際に、キュレーションを引き出すか、移行するかを選択できます。"Transfer Signal to Arbitrum" をクリックすると、移行ツールが開きます。
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. 
![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ L2転送ツールは、サブグラフスタジオでサブグラフのページ ## L1 でキュレーションを取り消す -GRT を L2 に送信したくない場合、または GRT を手動でブリッジしたい場合は、L1 でキュレーションされた GRT を取り消すことができます。サブグラフページのバナーで、「シグナルの引き出し」を選択し、トランザクションを確認します。GRTはあなたのキュレーターアドレスに送信されます。 +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ja/archived/sunrise.mdx b/website/src/pages/ja/archived/sunrise.mdx index eac51559a724..e53b28b20016 100644 --- a/website/src/pages/ja/archived/sunrise.mdx +++ b/website/src/pages/ja/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. 
Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? 
-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### なぜEdge & Nodeはアップグレード・インデクサーを実行しているのか? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. 
As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### これはデリゲーターにとって何を意味するのか? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. 
Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/ja/contracts.json b/website/src/pages/ja/contracts.json index 7222da23adc6..be2eb06ea51f 100644 --- a/website/src/pages/ja/contracts.json +++ b/website/src/pages/ja/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "コントラクト", "address": "住所" } diff --git a/website/src/pages/ja/global.json b/website/src/pages/ja/global.json index 6326992e205b..c14d6185adb2 100644 --- a/website/src/pages/ja/global.json +++ b/website/src/pages/ja/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "メインナビゲーション", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "ナビゲーションを表示する", + "hide": "ナビゲーションを隠す", "subgraphs": "サブグラフ", "substreams": "サブストリーム", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "sps": "サブストリームを用いたサブグラフ", + "tokenApi": "Token API", + "indexing": "インデクシング", + "resources": "リソース", + "archived": "アーカイブ" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "最終更新", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "所要時間", + "minutes": "分" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "前のページ", + "next": "次のページ", + "edit": "GitHubで編集する", + "onThisPage": "このページでは", + "tableOfContents": "目次", + "linkToThisSection": "このセクションへのリンク" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, + "video": "ビデオ" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "説明書き", + "value": "Value", + "required": "Required", + 
"deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "ステータス", + "description": "説明書き", + "liveResponse": "Live Response", + "example": "例" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "おっと!このページは宇宙で失われた...", + "subtitle": "正しいURLを使用しているかどうかを確認するか、以下のリンクをクリックして当社のウェブサイトを探索してください。", + "back": "ホームへ" } } diff --git a/website/src/pages/ja/index.json b/website/src/pages/ja/index.json index 121f56e91166..2034192e0089 100644 --- a/website/src/pages/ja/index.json +++ b/website/src/pages/ja/index.json @@ -1,99 +1,175 @@ { "title": "Home", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "The Graphのドキュメント", + "description": "ブロックチェーンデータを抽出、変換、読み込み可能なツールを用いて、あなたのWeb3プロジェクトを開始しましょう。", + "cta1": "The Graphの仕組み", + "cta2": "最初のサブグラフを作る" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "ニーズに合ったソリューションを選択し、ブロックチェーンデータを活用してみましょう。", "subgraphs": { "title": "サブグラフ", - "description": "Extract, process, and query 
blockchain data with open APIs.",
-      "cta": "Develop a subgraph"
+      "description": "オープンAPIでブロックチェーンデータを抽出、処理、照会しましょう。",
+      "cta": "サブグラフを作成する"
    },
    "substreams": {
      "title": "サブストリーム",
-      "description": "Fetch and consume blockchain data with parallel execution.",
-      "cta": "Develop with Substreams"
+      "description": "並列実行でブロックチェーンのデータを取得し、使用できます。",
+      "cta": "サブストリームを使用する"
    },
    "sps": {
-      "title": "Substreams-Powered Subgraphs",
-      "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
-      "cta": "Set up a Substreams-powered subgraph"
+      "title": "サブストリームを用いたサブグラフ",
+      "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
+      "cta": "サブストリームを用いたサブグラフの設定を行う"
    },
    "graphNode": {
      "title": "グラフノード",
-      "description": "Index blockchain data and serve it via GraphQL queries.",
-      "cta": "Set up a local Graph Node"
+      "description": "ブロックチェーンのデータをインデックスし、GraphQLクエリで提供します。",
+      "cta": "ローカルでのGraph Nodeのセットアップを行う"
    },
    "firehose": {
      "title": "Firehose",
-      "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
-      "cta": "Get started with Firehose"
+      "description": "ブロックチェーンデータをフラットファイルに抽出し、同期時間の短縮とストリーミング機能の向上を実現します。",
+      "cta": "Firehoseを使う"
    }
  },
  "supportedNetworks": {
    "title": "サポートされているネットワーク",
+    "details": "Network Details",
+    "services": "Services",
+    "type": "タイプ",
+    "protocol": "Protocol",
+    "identifier": "Identifier",
+    "chainId": "Chain ID",
+    "nativeCurrency": "Native Currency",
+    "docs": "ドキュメント",
+    "shortName": "Short Name",
+    "guides": "Guides",
+    "search": "Search networks",
+    "showTestnets": "Show Testnets",
+    "loading": "Loading...",
+    "infoTitle": "Info",
+    "infoText": "Boost your developer experience by enabling The Graph's indexing network.",
+    "infoLink": "Integrate new network",
    "description": {
-      "base": "The Graph supports {0}. 
To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "グラフは{0}をサポートしています。新しいネットワークを追加するには{1}。", + "networks": "ネットワーク", + "completeThisForm": "フォームを記入する" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "名称", + "id": "ID", + "subgraphs": "サブグラフ", + "substreams": "サブストリーム", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "サブストリーム", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "請求書", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } } }, "guides": { - "title": "Guides", + "title": "ガイド", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "グラフエクスプローラでデータを検索", + "description": "既存のブロックチェーンデータの何百ものパブリックサブグラフを活用。" }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "サブグラフを公開する", + "description": "サブグラフを分散型ネットワークに追加する。" }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "サブストリームの公開", + "description": "サブストリームパッケージをサブストリームレジストリに公開する。" }, "queryingBestPractices": { "title": "クエリのベストプラクティス", - "description": "Optimize your subgraph queries for faster, better results." + "description": "より速く、より良い結果を得るために、サブグラフのクエリを最適化します。" }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "最適化された時系列と集計", + "description": "効率化のためにサブグラフをスリム化する。" }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "APIキー管理", + "description": "サブグラフのAPIキーを簡単に作成、管理、保護できます。" }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "The Graphに移行する", + "description": "どのプラットフォームからでもシームレスにサブグラフをアップグレードできます。" } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "ビデオ・チュートリアル", + "watchOnYouTube": "YouTubeで見る", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "title": "1分でわかるThe Graph", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "委任(デリゲーション)とは何か?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "サブストリームを用いたサブグラフでSolanaをインデックスする方法", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { - "reading": "Reading time", - "duration": "Duration", - "minutes": "min" + "reading": "所要時間", + "duration": "期間", + "minutes": "分" } } diff --git a/website/src/pages/ja/indexing/_meta-titles.json b/website/src/pages/ja/indexing/_meta-titles.json index 42f4de188fd4..a258ebae5ba6 100644 --- a/website/src/pages/ja/indexing/_meta-titles.json +++ b/website/src/pages/ja/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "インデクサーツール" } diff --git a/website/src/pages/ja/indexing/chain-integration-overview.mdx b/website/src/pages/ja/indexing/chain-integration-overview.mdx index c9349b7a24e5..4b996d3ddfb4 100644 --- a/website/src/pages/ja/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ja/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ The Graph Network の未来を形作る準備はできていますか? [Start yo ### 2. ネットワークがメインネットでサポートされた後に Firehose とサブストリームのサポートが追加された場合はどうなりますか? 
-これは、サブストリームで動作するサブグラフに対するインデックスリワードのプロトコルサポートに影響を与えるものです。新しいFirehoseの実装は、このGIPのステージ2に概説されている方法論に従って、テストネットでテストされる必要があります。同様に、実装がパフォーマンスが良く信頼性があると仮定して、[Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)へのPR(「Substreamsデータソース」サブグラフ機能)が必要です。また、インデックスリワードのプロトコルサポートに関する新しいGIPも必要です。誰でもPRとGIPを作成できますが、Foundationは評議会の承認をサポートします。 +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ja/indexing/new-chain-integration.mdx b/website/src/pages/ja/indexing/new-chain-integration.mdx index decdf0266d65..dc9408b25f69 100644 --- a/website/src/pages/ja/indexing/new-chain-integration.mdx +++ b/website/src/pages/ja/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
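As a concrete illustration of the batched `eth_getTransactionReceipt` requirement listed above, the following TypeScript sketch builds a JSON-RPC batch payload. It only demonstrates the request shape; the transaction hashes and the endpoint are placeholders, and this is not Graph Node's actual extraction code.

```typescript
// Sketch only: the JSON-RPC batch shape used to fetch receipts for a block.
// Hashes and endpoint below are placeholder values, not real data.

interface JsonRpcCall {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown[];
}

// One batch entry per transaction hash, so a whole block's receipts
// can be fetched in a single HTTP round trip instead of N requests.
function buildReceiptBatch(txHashes: string[]): JsonRpcCall[] {
  return txHashes.map((hash, i) => ({
    jsonrpc: "2.0",
    id: i,
    method: "eth_getTransactionReceipt",
    params: [hash],
  }));
}

const batch = buildReceiptBatch(["0x01", "0x02"]); // placeholder hashes
// A batch-capable node accepts the whole array in one POST, e.g.:
// fetch(rpcUrl, { method: "POST", body: JSON.stringify(batch) })
```

This single-round-trip pattern is what the Firehose stream replaces wholesale, which is where the quoted reduction in RPC calls comes from.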
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth remembering that `eth_calls` are not a good practice for developers) ## Graph Node の設定 -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ja/indexing/overview.mdx b/website/src/pages/ja/indexing/overview.mdx index f952fafb882b..df7af3cc4dfd 100644 --- a/website/src/pages/ja/indexing/overview.mdx +++ b/website/src/pages/ja/indexing/overview.mdx @@ -7,7 +7,7 @@ sidebarTitle: 概要 プロトコルにステークされた GRT は解凍期間が設けられており、インデクサーが悪意を持ってアプリケーションに不正なデータを提供したり、不正なインデックスを作成した場合には、スラッシュされる可能性があります。 また、インデクサーはデリゲーターからステークによる委任を受けて、ネットワークに貢献することができます。 -インデクサ − は、サブグラフのキュレーション・シグナルに基づいてインデックスを作成するサブグラフを選択し、キュレーターは、どのサブグラフが高品質で優先されるべきかを示すために GRT をステークします。 消費者(アプリケーションなど)は、インデクサーが自分のサブグラフに対するクエリを処理するパラメータを設定したり、クエリフィーの設定を行うこともできます。 +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. 
Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? 
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. 
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. 
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### グラフノード -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### グラフノード -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks. 

``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically. diff --git a/website/src/pages/ja/indexing/supported-network-requirements.mdx b/website/src/pages/ja/indexing/supported-network-requirements.mdx index 6aa0c0caa16f..4de40fef90af 100644 --- a/website/src/pages/ja/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ja/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| ネットワーク | Guides | System Requirements | Indexing Rewards | | --- | --- | --- | :-: | | Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)

[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| ネットワーク | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/ja/indexing/tap.mdx b/website/src/pages/ja/indexing/tap.mdx index b1d43a4e628c..61a0f77343c3 100644 --- a/website/src/pages/ja/indexing/tap.mdx +++ b/website/src/pages/ja/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## 概要 -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. +GraphTally allows a sender to make multiple payments to a receiver as **Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 

This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### 要件 +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host it yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -[Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 

+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ja/indexing/tooling/graph-node.mdx b/website/src/pages/ja/indexing/tooling/graph-node.mdx index 604095157886..332b7fd79baf 100644 --- a/website/src/pages/ja/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ja/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: グラフノード --- -グラフノードはサブグラフのインデックスを作成し、得られたデータをGraphQL API経由でクエリできるようにするコンポーネントです。そのため、インデクサースタックの中心的存在であり、グラフノードの正しい動作はインデクサーを成功させるために非常に重要です。 +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## グラフノード -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). 
### PostgreSQLデータベース -グラフノードのメインストアで、サブグラフデータ、サブグラフに関するメタデータ、ブロックキャッシュやeth_callキャッシュなどのサブグラフに依存しないネットワークデータが格納されます。 +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### ネットワーククライアント ネットワークにインデックスを付けるために、グラフ ノードは EVM 互換の JSON-RPC API を介してネットワーク クライアントにアクセスする必要があります。この RPC は単一のクライアントに接続する場合もあれば、複数のクライアントに負荷を分散するより複雑なセットアップになる場合もあります。 -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). 
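To illustrate the EIP-1898 requirement mentioned above: an archive node that supports it accepts a block *object* (for example a block hash) in place of a plain hex block number as the second parameter of `eth_call`, which lets calls be pinned to an exact block even across reorganizations. A minimal sketch of such a request payload — the address, calldata, and hash below are placeholders:

```python
# Sketch of an EIP-1898-style eth_call JSON-RPC request. The block is
# identified by hash rather than number, so the result is unambiguous
# even if a chain reorganization occurs. All values are placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {
            "to": "0x0000000000000000000000000000000000000000",  # contract address (placeholder)
            "data": "0x18160ddd",  # e.g. the totalSupply() function selector
        },
        # EIP-1898 block object instead of a hex block number:
        {"blockHash": "0x" + "ab" * 32, "requireCanonical": True},
    ],
}
```

A node without EIP-1898 support would reject the block object and only accept a quantity like `"0x10d4f"` or a tag like `"latest"` in that position.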
### IPFSノード -IPFS ノード(バージョン 未満) - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus メトリクスサーバー @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit グラフノードは起動時に以下のポートを公開します。 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint. ## グラフノードの高度な設定 -最も単純な場合、Graph Node は、Graph Node の単一のインスタンス、単一の PostgreSQL データベース、IPFS ノード、およびサブグラフのインデックス作成に必要なネットワーク クライアントで操作できます。 +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### 複数のグラフノード -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. 

in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > なお、複数のGraph Nodeはすべて同じデータベースを使用するように設定することができ、Shardingによって水平方向に拡張することができます。 #### デプロイメントルール -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
デプロイメントルールの設定例: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ query = "" ほとんどの場合、1つのPostgresデータベースでグラフノードインスタンスをサポートするのに十分です。グラフノードインスタンスが1つのPostgresデータベースを使い切った場合、グラフノードデータを複数のPostgresデータベースに分割して保存することが可能です。全てのデータベースが一緒になってグラフノードインスタンスのストアを形成します。個々のデータベースはシャード(shard)と呼ばれます。 -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and replicas can also be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. グラフノードの負荷に既存のデータベースが追いつかず、これ以上データベースサイズを大きくすることができない場合に、シャーディングが有効になります。 -> 一般的には、シャードを作成する前に、単一のデータベースを可能な限り大きくすることをお勧めします。例外は、クエリのトラフィックがサブグラフ間で非常に不均一に分割される場合です。このような状況では、ボリュームの大きいサブグラフを1つのシャードに、それ以外を別のシャードに保存すると劇的に効果があります。この設定により、ボリュームの大きいサブグラフのデータがdb内部キャッシュに残り、ボリュームの小さいサブグラフからそれほど必要とされていないデータに置き換えられる可能性が少なくなるためです。 +> It is generally better to make a single database as big as possible, before starting with shards. 

One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. 接続の設定に関しては、まずpostgresql.confのmax_connectionsを400(あるいは200)に設定し、store_connection_wait_time_msとstore_connection_checkout_count Prometheusメトリクスを見てみてください。顕著な待ち時間(5ms以上)は、利用可能な接続が少なすぎることを示しています。高い待ち時間は、データベースが非常に忙しいこと(CPU負荷が高いなど)によっても引き起こされます。しかし、データベースが安定しているようであれば、待ち時間が長いのは接続数を増やす必要があることを示しています。設定上、各グラフノードインスタンスが使用できるコネクション数は上限であり、グラフノードは必要ないコネクションはオープンにしておきません。 @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### 複数のネットワークに対応 -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - 複数のネットワーク - ネットワークごとに複数のプロバイダ(プロバイダ間で負荷を分割することができ、また、フルノードとアーカイブノードを構成することができ、作業負荷が許す限り、Graph Nodeはより安価なプロバイダを優先することができます)。 @@ -225,11 +225,11 @@ Graph Node supports a range of environment variables which can enable features, ### グラフノードの管理 -グラフノードが動作している場合、それらのノードに展開されたサブグラフを管理することが課題となります。グラフノードは、サブグラフを管理するための様々なツールを提供します。 +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. 
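One of the management surfaces mentioned above is the JSON-RPC admin endpoint on port 8020. As a sketch, a deployment can be reassigned to a particular index node by posting a JSON-RPC request to that port — the method and parameter names below follow graph-node's documented admin API (`subgraph_create` / `subgraph_deploy` / `subgraph_reassign`), but treat them as assumptions and check the docs for your graph-node version; the subgraph name and node ID are placeholders:

```python
import json

# Build a JSON-RPC request body for Graph Node's admin port (8020).
# Send the returned bytes with any HTTP client (curl, requests, ...).
# Method/parameter names are assumptions based on graph-node's docs.
def admin_request(method: str, params: dict, request_id: int = 1) -> bytes:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode()

body = admin_request("subgraph_reassign", {
    "name": "example/subgraph",   # placeholder Subgraph name
    "node_id": "index_node_1",    # target index node's node_id
})
```

Remember the warning above: this port is an administration port and should never be exposed publicly.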
#### ロギング -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### サブグラフの操作 +### Working with Subgraphs #### インデックスステータスAPI -デフォルトではポート8030/graphqlで利用可能なindexing status APIは、異なるサブグラフのindexing statusのチェック、indexing proofのチェック、サブグラフの特徴の検査など、様々なメソッドを公開しています。 +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/ - 適切なハンドラで順番にイベントを処理する(これには、状態のためにチェーンを呼び出したり、ストアからデータを取得したりすることが含まれます)。 - 出来上がったデータをストアに書き込む -これらのステージはパイプライン化されていますが(つまり、並列に実行することができます)、互いに依存し合っています。サブグラフのインデックス作成に時間がかかる場合、その根本的な原因は、特定のサブグラフに依存します。 +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. 
Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.

インデックス作成が遅くなる一般的な原因:

@@ -276,24 +276,24 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/

- プロバイダー自体がチェーンヘッドに遅れる場合
- チェーンヘッドでプロバイダーから新しいレシートを取得する際の遅延

-サブグラフのインデックス作成指標は、インデックス作成の遅さの根本的な原因を診断するのに役立ちます。あるケースでは、問題はサブグラフ自体にありますが、他のケースでは、ネットワークプロバイダーの改善、データベースの競合の減少、その他の構成の改善により、インデックス作成性能を著しく向上させることができます。
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.

-#### 失敗したサブグラフ
+#### Failed Subgraphs

-インデックス作成中、サブグラフは予期しないデータに遭遇したり、あるコンポーネントが期待通りに動作しなかったり、イベントハンドラや設定に何らかのバグがあったりすると、失敗することがあります。失敗には一般に2つのタイプがあります。
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure:

- 決定論的失敗:再試行では解決できない失敗
- 非決定論的失敗:プロバイダの問題や、予期しないグラフノードのエラーに起因する可能性があります。非決定論的失敗が発生すると、グラフノードは失敗したハンドラを再試行し、時間をかけて後退させます。

-いくつかのケースでは、失敗はインデクサーによって解決できるかもしれません(例えば、エラーが正しい種類のプロバイダを持っていない結果である場合、必要なプロバイダを追加することでインデックス作成を継続することが可能になります)。しかし、サブグラフのコードを変更する必要がある場合もあります。
+In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others a change in the Subgraph code is required.

-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### ブロックキャッシュとコールキャッシュ -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
TX受信欠落イベントなど、ブロックキャッシュの不整合が疑われる場合。 @@ -304,7 +304,7 @@ TX受信欠落イベントなど、ブロックキャッシュの不整合が疑 #### 問題やエラーのクエリ -サブグラフがインデックス化されると、インデクサはサブグラフの専用クエリエントポイントを介してクエリを提供することが期待できます。もしインデクサがかなりの量のクエリを提供することを望むなら、専用のクエリノードを推奨します。また、クエリ量が非常に多い場合、インデクサーはレプリカシャードを構成して、クエリがインデックス作成プロセスに影響を与えないようにしたいと思うかもしれません。 +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. ただし、専用のクエリ ノードとレプリカを使用しても、特定のクエリの実行に時間がかかる場合があり、場合によってはメモリ使用量が増加し、他のユーザーのクエリ時間に悪影響を及ぼします。 @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### クエリの分析 -問題のあるクエリが表面化するのは、ほとんどの場合、次の2つの方法のどちらかです。あるケースでは、ユーザー自身があるクエリが遅いと報告します。この場合、一般的な問題なのか、そのサブグラフやクエリに固有の問題なのか、遅さの理由を診断することが課題となります。そしてもちろん、可能であればそれを解決することです。 +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. また、クエリノードでメモリ使用量が多いことが引き金になる場合もあり、その場合は、まず問題の原因となっているクエリを特定することが課題となります。 @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### サブグラフの削除
+#### Removing Subgraphs

> これは新しい機能で、Graph Node 0.29.xで利用可能になる予定です。

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
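Pulling the `graphman` commands from this section together, a sketch of a session; the `config.toml` path, the deployment namespace `sgd42`, and the table name are hypothetical placeholders to substitute with your own:

```shell
# Enable the account-like optimization for a busy Uniswap-style table,
# then turn it off again if queries get slower afterwards:
graphman --config config.toml stats account-like sgd42.pair
graphman --config config.toml stats account-like --clear sgd42.pair

# Remove a deployment and all of its indexed data (irreversible):
graphman --config config.toml drop sgd42
```

These commands run against a live Graph Node store, so they only work where `graphman` and the store configuration are available.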
diff --git a/website/src/pages/ja/indexing/tooling/graphcast.mdx b/website/src/pages/ja/indexing/tooling/graphcast.mdx index b9d89010f922..0a1fe3e92964 100644 --- a/website/src/pages/ja/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ja/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ title: グラフキャスト Graphcast SDK (ソフトウェア開発キット) を使用すると、開発者はラジオを構築できます。これは、インデクサーが特定の目的を果たすために実行できる、ゴシップを利用したアプリケーションです。また、次のユースケースのために、いくつかのラジオを作成する (または、ラジオを作成したい他の開発者/チームにサポートを提供する) 予定です: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- サブグラフ、サブストリーム、および他のインデクサーからの Firehose データをワープ同期するためのオークションと調整の実施。 -- サブグラフのリクエスト量、料金の量などを含む、アクティブなクエリ分析に関する自己報告。 -- サブグラフのインデックス作成時間、ハンドラー ガスのコスト、発生したインデックス作成エラーなどを含む、インデックス作成分析に関する自己報告。 +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- グラフノードのバージョン、Postgres のバージョン、Ethereum クライアントのバージョンなどを含むスタック情報の自己報告。 ### もっと詳しく知る diff --git a/website/src/pages/ja/resources/_meta-titles.json b/website/src/pages/ja/resources/_meta-titles.json index f5971e95a8f6..70aff3e769c5 100644 --- a/website/src/pages/ja/resources/_meta-titles.json +++ b/website/src/pages/ja/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "その他の役割", + "migration-guides": "移行ガイド" } diff --git a/website/src/pages/ja/resources/benefits.mdx b/website/src/pages/ja/resources/benefits.mdx index f3c7204743fb..8a86396805ea 100644 --- a/website/src/pages/ja/resources/benefits.mdx +++ b/website/src/pages/ja/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $350/月 | $0 | -| クエリコスト | $0+ | $0 per month | -| エンジニアリングタイム | $400/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | 100,000 (Free Plan) | -| クエリごとのコスト | $0 | $0 | -| Infrastructure | 集中管理型 | 分散型 | -| 地理的な冗長性 | 追加1ノードにつき$750+ | 含まれる | -| アップタイム | バリエーション | 99.9%+ | -| 月額費用合計 | $750+ | $0 | +| コスト比較 | セルフホスト | グラフネットワーク | +| :---------------------: | :-------------------------------------: | :---------------------------------: | +| 月額サーバー代 | $350/月 | $0 | +| クエリコスト | $0+ | $0 per month | +| エンジニアリングタイム | $400/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | 100,000 (Free Plan) | +| クエリごとのコスト | $0 | $0 | +| Infrastructure | 集中管理型 | 分散型 | +| 地理的な冗長性 | 追加1ノードにつき$750+ | 含まれる | +| アップタイム | バリエーション | 99.9%+ | +| 月額費用合計 | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $350/月 | $0 | -| クエリコスト | $500/月 | $120 per month | -| エンジニアリングタイム | $800/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | ~3,000,000 | -| 
クエリごとのコスト | $0 | $0.00004 | -| Infrastructure | 集中管理型 | 分散型 | -| エンジニアリングコスト | $200/時 | 含まれる | -| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | -| アップタイム | バリエーション | 99.9%+ | -| 月額費用合計 | $1,650+ | $120 | +| コスト比較 | セルフホスト | グラフネットワーク | +| :---------------------: | :----------------------------------------: | :---------------------------------: | +| 月額サーバー代 | $350/月 | $0 | +| クエリコスト | $500/月 | $120 per month | +| エンジニアリングタイム | $800/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | ~3,000,000 | +| クエリごとのコスト | $0 | $0.00004 | +| Infrastructure | 集中管理型 | 分散型 | +| エンジニアリングコスト | $200/時 | 含まれる | +| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | +| アップタイム | バリエーション | 99.9%+ | +| 月額費用合計 | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $1100/月(ノードごと) | $0 | -| クエリコスト | $4000 | $1,200 per month | -| 必要ノード数 | 10 | 該当なし | -| エンジニアリングタイム | $6,000/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | ~30,000,000 | -| クエリごとのコスト | $0 | $0.00004 | -| Infrastructure | 集中管理型 | 分散型 | -| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | -| アップタイム | バリエーション | 99.9%+ | -| 月額費用合計 | $11,000+ | $1,200 | +| コスト比較 | セルフホスト | グラフネットワーク | +| :---------------------: | :-----------------------------------------: | :---------------------------------: | +| 月額サーバー代 | $1100/月(ノードごと) | $0 | +| クエリコスト | $4000 | $1,200 per month | +| 必要ノード数 | 10 | 該当なし | +| エンジニアリングタイム | $6,000/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | ~30,000,000 | +| クエリごとのコスト | $0 | $0.00004 | +| Infrastructure | 集中管理型 | 分散型 | +| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | +| アップタイム | バリエーション | 99.9%+ | +| 月額費用合計 | $11,000+ | $1,200 | \*バックアップ費用含む:月額$50〜$100 @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. 
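The Graph Network column in the two metered tiers above follows directly from the quoted $0.00004 per-query rate; a quick arithmetic check:

```shell
# Monthly cost = queries per month x $0.00004 (the rate quoted above).
awk 'BEGIN {
  rate = 0.00004
  printf "medium (~3M queries):  $%.0f/month\n", 3000000 * rate
  printf "high   (~30M queries): $%.0f/month\n", 30000000 * rate
}'
# prints $120/month and $1200/month, matching the tables
```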
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -サブグラフ上のシグナルのキュレーションは、オプションで1回限り、ネットゼロのコストで可能です(例えば、$1,000のシグナルをサブグラフ上でキュレーションし、後で引き出すことができ、その過程でリターンを得る可能性があります)。 +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ja/resources/glossary.mdx b/website/src/pages/ja/resources/glossary.mdx index c71697a009cf..6a602dd4c2d2 100644 --- a/website/src/pages/ja/resources/glossary.mdx +++ b/website/src/pages/ja/resources/glossary.mdx @@ -4,51 +4,51 @@ title: 用語集 - **The Graph**: A decentralized protocol for indexing and querying data. 
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. 
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. 
In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. 
- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. 
An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: 用語集 - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. 
- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. 
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). 
diff --git a/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx index 88e5aea91168..1c7252574879 100644 --- a/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ja/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript マイグレーションガイド --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -これにより、サブグラフの開発者は、AS 言語と標準ライブラリの新しい機能を使用できるようになります。 +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## 特徴 @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## アップグレードの方法 -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... 
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
 ...
```

@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null

maybeValue.aMethod()
```

-どちらを選択すべきか迷った場合は、常に安全なバージョンを使用することをお勧めします。 値が存在しない場合は、サブグラフハンドラの中で return を伴う初期の if 文を実行するとよいでしょう。
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.

### 変数シャドウイング

@@ -132,7 +132,7 @@

in assembly/index.ts(4,3)

### Null 比較

-サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。
+By doing the upgrade on your Subgraph, sometimes you might get errors like these:

```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.

@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)

wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```

-この件に関して、アセンブリ・スクリプト・コンパイラーに問題を提起しましたが、 今のところ、もしサブグラフ・マッピングでこの種の操作を行う場合には、 その前に NULL チェックを行うように変更してください。
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it.
```typescript
let wrapper = new Wrapper(y)

@@ -352,7 +352,7 @@ value.x = 10

value.y = 'content'
```

-これは、値が初期化されていないために起こります。したがって、次のようにサブグラフが値を初期化していることを確認してください。
+It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this:

```typescript
var value = new Type() // initialized
diff --git a/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx
index b004e14d9f98..b13d541ffe35 100644
--- a/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/ja/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,5 +1,5 @@
---
-title: GraphQL 検証移行ガイド
+title: GraphQL Validations Migration Guide
---

まもなく「graph-node」は [GraphQL Validations 仕様](https://spec.graphql.org/June2018/#sec-Validation) を 100% カバーします。

@@ -20,7 +20,7 @@ GraphQL Validations サポートは、今後の新機能と The Graph Network

CLI 移行ツールを使用して、GraphQL 操作の問題を見つけて修正できます。または、GraphQL クライアントのエンドポイントを更新して、`https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` エンドポイントを使用することもできます。このエンドポイントに対してクエリをテストすると、クエリの問題を見つけるのに役立ちます。

-> [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) または [GraphQL Code Generator](https://the-guild.dev) を使用している場合、すべてのサブグラフを移行する必要はありません。 /graphql/codegen)、クエリが有効であることを既に確認しています。
+> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## 移行 CLI ツール @@ -521,7 +521,7 @@ query { } ``` -\_注: `@stream`、`@live`、`@defer` はサポートされていません。 +_注: `@stream`、`@live`、`@defer` はサポートされていません。 **ディレクティブは、この場所で 1 回だけ使用できます (#UniqueDirectivesPerLocationRule)** diff --git a/website/src/pages/ja/resources/roles/curating.mdx b/website/src/pages/ja/resources/roles/curating.mdx index ff0ae8aced25..56560702df5c 100644 --- a/website/src/pages/ja/resources/roles/curating.mdx +++ b/website/src/pages/ja/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: キュレーティング --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. 
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. 
Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## シグナルの出し方 -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -キュレーターは、特定のサブグラフのバージョンでシグナルを出すことも、そのサブグラフの最新のプロダクションビルドに自動的にシグナルを移行させることも可能ですます。 どちらも有効な戦略であり、それぞれに長所と短所があります。 +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. 
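To make the fee arithmetic concrete, here is a small illustrative TypeScript sketch of the 1% tax on initial curation and the 0.5% tax charged on each auto-migration. The helper names are ours and not part of any Graph API; the authoritative logic lives in The Graph's smart contracts.

```typescript
// Illustrative arithmetic only — percentages are those quoted in this page.
const INITIAL_CURATION_TAX = 0.01; // 1% on the first signal
const AUTO_MIGRATION_TAX = 0.005;  // 0.5% per auto-migration

// GRT effectively signaled after the initial 1% tax is burned.
function signalAfterInitialTax(grt: number): number {
  return grt * (1 - INITIAL_CURATION_TAX);
}

// Remaining effective signal after `n` auto-migrations to new versions.
function signalAfterMigrations(grt: number, n: number): number {
  return signalAfterInitialTax(grt) * Math.pow(1 - AUTO_MIGRATION_TAX, n);
}

console.log(signalAfterInitialTax(10000));    // 9900
console.log(signalAfterMigrations(10000, 3)); // ≈ 9752.24
```

Frequent new versions therefore erode auto-migrated positions, which is why Subgraph developers are discouraged from publishing new versions too often.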
シグナルを最新のプロダクションビルドに自動的に移行させることは、クエリー料金の発生を確実にするために有効です。 キュレーションを行うたびに、1%のキュレーション税が発生します。 また、移行ごとに 0.5%のキュレーション税を支払うことになります。 つまり、サブグラフの開発者が、頻繁に新バージョンを公開することは推奨されません。 自動移行された全てのキュレーションシェアに対して、0.5%のキュレーション税を支払わなければならないからです。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## リスク 1. The Graph では、クエリ市場は本質的に歴史が浅く、初期の市場ダイナミクスのために、あなたの%APY が予想より低くなるリスクがあります。 -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. 
(Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. サブグラフはバグで失敗することがあります。 失敗したサブグラフは、クエリフィーが発生しません。 結果的に、開発者がバグを修正して新しいバージョンを展開するまで待たなければならなくなります。 - - サブグラフの最新バージョンに加入している場合、シェアはその新バージョンに自動移行します。 これには 0.5%のキュレーション税がかかります。 - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. 
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## キューレーション FAQ ### 1. キュレータはクエリフィーの何%を獲得できますか? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. シグナルを出すのに適した質の高いサブグラフはどのようにして決めるのですか? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. 
As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. サブグラフの更新にかかるコストはいくらですか? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. サブグラフはどれくらいの頻度で更新できますか? +### 4. How often can I update my Subgraph? -サブグラフをあまり頻繁に更新しないことをお勧めします。詳細については、上記の質問を参照してください。 +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. キュレーションのシェアを売却することはできますか?
diff --git a/website/src/pages/ja/resources/roles/delegating/undelegating.mdx b/website/src/pages/ja/resources/roles/delegating/undelegating.mdx index f350db31296b..ecc6f27f0bb0 100644 --- a/website/src/pages/ja/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/ja/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. 
You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## その他のリソース diff --git a/website/src/pages/ja/resources/subgraph-studio-faq.mdx b/website/src/pages/ja/resources/subgraph-studio-faq.mdx index 5810742c4ec4..5992b07f7478 100644 --- a/website/src/pages/ja/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ja/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: サブグラフスタジオFAQ ## 1. サブグラフスタジオとは? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. API キーを作成するにはどうすればよいですか? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th API キーを作成後、「セキュリティ」セクションで、特定の API キーにクエリ可能なドメインを定義できます。 -## 5. 自分のサブグラフを他のオーナーに譲渡することはできますか? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -サブグラフが転送されると、Studio でサブグラフを表示または編集できなくなることに注意してください。 +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. 
使用したいサブグラフの開発者ではない場合、サブグラフのクエリ URL を見つけるにはどうすればよいですか? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -APIキーを作成すると、自分でサブグラフを構築した場合でも、ネットワークに公開されているすべてのサブグラフにクエリを実行できることを覚えておいてください。新しい API キーを介したこれらのクエリは、ネットワーク上の他のクエリと同様に支払われます。 +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network. diff --git a/website/src/pages/ja/resources/tokenomics.mdx b/website/src/pages/ja/resources/tokenomics.mdx index 07a04a43b06c..a1f30147507d 100644 --- a/website/src/pages/ja/resources/tokenomics.mdx +++ b/website/src/pages/ja/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## 概要 -The Graph is a decentralized protocol that enables easy access to blockchain data.
It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. キュレーター - インデクサーのために最適なサブグラフを見つける。 +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. インデクサー - ブロックチェーンデータのバックボーン @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. 
While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### サブグラフの作成 +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. 
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### 既存のサブグラフのクエリ +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. 
Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ja/sps/introduction.mdx b/website/src/pages/ja/sps/introduction.mdx index fbb86f0d0763..71fabdd0416c 100644 --- a/website/src/pages/ja/sps/introduction.mdx +++ b/website/src/pages/ja/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: イントロダクション --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## 概要 -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
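The parallel-versus-linear trade-off above can be illustrated with a toy TypeScript sketch (an illustration only, not Substreams or graph-node code): an order-independent aggregation can be computed over chunks independently and merged, which suits Substreams' parallelized model, while an order-dependent computation must see events sequentially, the way triggers are consumed in graph-node.

```typescript
// Toy illustration of which logic parallelizes and which must stay linear.
// All names here are hypothetical; this is not Substreams or graph-ts code.

type Transfer = { block: number; amount: number };

const transfers: Transfer[] = [
  { block: 1, amount: 10 },
  { block: 2, amount: 5 },
  { block: 3, amount: 7 },
  { block: 4, amount: 3 },
];

// Order-independent: total volume can be computed per chunk and merged,
// so chunks can be processed in parallel (Substreams-style).
function chunkVolume(chunk: Transfer[]): number {
  return chunk.reduce((sum, t) => sum + t.amount, 0);
}
const parallelTotal =
  chunkVolume(transfers.slice(0, 2)) + chunkVolume(transfers.slice(2));

// Order-dependent: a running balance depends on block order, so it must
// be consumed sequentially (trigger-style, in the Subgraph handler).
function runningBalances(all: Transfer[]): number[] {
  const out: number[] = [];
  let acc = 0;
  for (const t of all.slice().sort((a, b) => a.block - b.block)) {
    acc += t.amount;
    out.push(acc);
  }
  return out;
}

console.log(parallelTotal); // 25
console.log(runningBalances(transfers)); // [10, 15, 22, 25]
```

If your transformation looks like the first function, pushing it into Substreams lets it benefit from parallel execution; if it looks like the second, it belongs in a trigger handler.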
### その他のリソース @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ja/sps/sps-faq.mdx b/website/src/pages/ja/sps/sps-faq.mdx index de0755e30c95..c038b396b268 100644 --- a/website/src/pages/ja/sps/sps-faq.mdx +++ b/website/src/pages/ja/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## サブストリームによって動作するサブグラフは何ですか? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## サブストリームを利用したサブグラフはサブグラフとどう違うのでしょうか? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## サブストリームを利用したサブグラフを使用する利点は何ですか? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. 
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## サブストリームの利点は何ですか? @@ -35,7 +35,7 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que - 高パフォーマンスのインデックス作成: 並列操作の大規模なクラスター (BigQuery を考えてください) を通じて、桁違いに高速なインデックス作成を実現します。 -- 場所を選ばずにデータをどこにでも沈める: PostgreSQL、MongoDB、Kafka、サブグラフ、フラットファイル、Googleシート。 +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - プログラム可能: コードを使用して抽出をカスタマイズし、変換時の集計を実行し、複数のシンクの出力をモデル化します。 @@ -63,17 +63,17 @@ Firehose を使用すると、次のような多くの利点があります。 - フラット ファイルの活用: ブロックチェーン データは、利用可能な最も安価で最適化されたコンピューティング リソースであるフラット ファイルに抽出されます。 -## 開発者は、サブストリームを利用したサブグラフとサブストリームに関する詳細情報にどこでアクセスできますか? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. 
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## サブストリームにおけるRustモジュールの役割は何ですか? -Rust モジュールは、サブグラフの AssemblyScript マッパーに相当します。これらは同様の方法で WASM にコンパイルされますが、プログラミング モデルにより並列実行が可能になります。これらは、生のブロックチェーン データに適用する変換と集計の種類を定義します。 +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst サブストリームを使用すると、変換レイヤーで合成が行われ、キャッシュされたモジュールを再利用できるようになります。 -例として、AliceはDEX価格モジュールを構築し、Bobはそれを使用して興味のあるいくつかのトークンのボリューム集計モジュールを構築し、Lisaは4つの個々のDEX価格モジュールを組み合わせて価格オラクルを作成することができます。単一のSubstreamsリクエストは、これらの個々のモジュールをまとめ、リンクしてより洗練されたデータのストリームを提供します。そのストリームはその後、サブグラフを作成し、消費者によってクエリされることができます。 +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## サブストリームを利用したサブグラフを構築してデプロイするにはどうすればよいでしょうか? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## サブストリームおよびサブストリームを利用したサブグラフの例はどこで見つけることができますか? 
+## Where can I find examples of Substreams and Substreams-powered Subgraphs? -[この Github リポジトリ](https://github.com/pinax-network/awesome-substreams) にアクセスして、サブストリームとサブストリームを利用したサブグラフの例を見つけることができます。 +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## SubstreamsとSubstreamsを活用したサブグラフがThe Graph Networkにとってどのような意味を持つのでしょうか? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? この統合は、非常に高いパフォーマンスのインデクシングと、コミュニティモジュールを活用し、それらを基に構築することによる大きな組み合わせ可能性を含む多くの利点を約束しています。 diff --git a/website/src/pages/ja/sps/triggers.mdx b/website/src/pages/ja/sps/triggers.mdx index 6935eb956f52..9ddb07c5477c 100644 --- a/website/src/pages/ja/sps/triggers.mdx +++ b/website/src/pages/ja/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## 概要 -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. 
For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### その他のリソース diff --git a/website/src/pages/ja/sps/tutorial.mdx b/website/src/pages/ja/sps/tutorial.mdx index fbf4f5d22894..46c4c8305676 100644 --- a/website/src/pages/ja/sps/tutorial.mdx +++ b/website/src/pages/ja/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
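The decode-and-loop flow described in the triggers section above (receive raw bytes, decode a `Transactions` message, create one entity per transaction) can be sketched in plain TypeScript. This is a toy stand-in only: the JSON "decoder" and in-memory store below are hypothetical substitutes for the generated `as-proto` Protobuf classes and the graph-ts entity store used in a real handler.

```typescript
// Toy model of a trigger handler: bytes in, decoded message,
// one entity written per transaction.

type Transaction = { hash: string };
type Transactions = { transactions: Transaction[] };

// Stand-in entity store (a real handler saves graph-ts entities).
const store = new Map<string, { id: string }>();

// Hypothetical decoder: parses JSON bytes instead of Protobuf bytes.
function decodeTransactions(bytes: Uint8Array): Transactions {
  const text = new TextDecoder().decode(bytes);
  return JSON.parse(text) as Transactions;
}

function handleTransactions(bytes: Uint8Array): void {
  const decoded = decodeTransactions(bytes); // 1. decode the payload
  for (const tx of decoded.transactions) {   // 2. loop over transactions
    store.set(tx.hash, { id: tx.hash });     // 3. one entity per transaction
  }
}

const payload = new TextEncoder().encode(
  JSON.stringify({ transactions: [{ hash: "0xaaa" }, { hash: "0xbbb" }] }),
);
handleTransactions(payload);
console.log(store.size); // 2
```

The shape mirrors the `mappings.ts` walkthrough above: decoding failures or an empty `transactions` list simply result in no entities being written for that payload.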
## 始めましょう @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! 
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/ja/subgraphs/_meta-titles.json b/website/src/pages/ja/subgraphs/_meta-titles.json index 0556abfc236c..5c6121aa7d88 100644 --- a/website/src/pages/ja/subgraphs/_meta-titles.json +++ b/website/src/pages/ja/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "querying": "クエリ", + "developing": "開発", + "guides": "How-to Guides", + "best-practices": "ベストプラクティス" } diff --git a/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. 
If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
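The storage difference `@derivedFrom` makes can be sketched with a toy TypeScript model (illustrative only, not graph-ts or store code): the Post row never grows, because each Comment stores its post ID and the `comments` relation is resolved by reverse lookup, which is what the derived field does at query time.

```typescript
// Toy model of the @derivedFrom storage layout: Post never stores a
// comment array; each Comment stores its post ID, and the "comments"
// field is derived via reverse lookup. All names here are illustrative.

type Post = { id: string };
type Comment = { id: string; post: string };

const posts = new Map<string, Post>([["post-1", { id: "post-1" }]]);
const comments: Comment[] = [];

function addComment(id: string, postId: string): void {
  // Only a Comment row is written; the Post entity is never touched,
  // so it stays the same size no matter how many comments accumulate.
  comments.push({ id, post: postId });
}

// Equivalent of reading the derived `comments` field on Post.
function derivedComments(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId);
}

for (let i = 0; i < 1000; i++) {
  addComment(`comment-${i}`, "post-1");
}

console.log(Object.keys(posts.get("post-1")!).length); // 1: Post did not grow
console.log(derivedComments("post-1").length); // 1000
```

Contrast this with an array field on Post, where every new comment rewrites an ever-larger entity; that rewrite cost is what slows indexing as arrays grow into the thousands.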
diff --git a/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx index cb44f95f25c1..f4726b7a89b8 100644 --- a/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### 概要 -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## その他のリソース - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
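The string-vs-Bytes ID trade-off described above can be made concrete with a small plain-TypeScript sketch. This is only an illustration: `concatI32` below is a stand-in for the `graph-ts` helper of the same name (the real helper's exact byte layout may differ), and the "transaction hash" is a dummy value.

```typescript
// Stand-in for the graph-ts `concatI32` helper: append a 4-byte integer
// to an existing byte array (big-endian order is an assumption here).
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  new DataView(out.buffer).setInt32(bytes.length, value, false);
  return out;
}

// Hex-encode bytes the way `toHex()` does in mappings.
function toHex(bytes: Uint8Array): string {
  return "0x" + Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
}

// A dummy 32-byte transaction hash and a log index.
const txHash = new Uint8Array(32).fill(0xab);
const logIndex = 7;

// String ID: 68 characters ("0x" + 64 hex chars + "-" + index).
const stringId = toHex(txHash) + "-" + logIndex.toString();

// Bytes ID: just 36 bytes (32-byte hash + 4-byte index).
const bytesId = concatI32(txHash, logIndex);
```

The bytes form is roughly half the size of the hex-string form, which gives some intuition for the indexing and query improvements the tests above report.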
diff --git a/website/src/pages/ja/subgraphs/best-practices/pruning.mdx b/website/src/pages/ja/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ja/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx index c02236d7829c..72c12a82a496 100644 --- a/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ja/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## 概要 @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -141,7 +145,7 @@ Supported aggregation functions: - sum - count - min - max - first - last @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, *, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ja/subgraphs/billing.mdx b/website/src/pages/ja/subgraphs/billing.mdx index 9967aa377644..5ad5947ae4fb 100644 --- a/website/src/pages/ja/subgraphs/billing.mdx +++ b/website/src/pages/ja/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: 請求書 ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). 
+ ## Query Payments with credit card diff --git a/website/src/pages/ja/subgraphs/cookbook/arweave.mdx b/website/src/pages/ja/subgraphs/cookbook/arweave.mdx index b834f96b5cb9..66eef9c8160f 100644 --- a/website/src/pages/ja/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Arweaveでのサブグラフ構築 --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! このガイドでは、Arweaveブロックチェーンのインデックスを作成するためのサブグラフの構築とデプロイ方法について学びます。 @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are Arweaveのサブグラフを構築し展開できるようにするためには、2つのパッケージが必要です。 -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## サブグラフのコンポーネント -サブグラフには3つの構成要素があります: +There are three components of a Subgraph: ### 1.
Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Arweaveのサブグラフを構築し展開できるようにするためには ここでは、GraphQL を使用してサブグラフにインデックスを付けた後にクエリできるようにするデータを定義します。これは実際には API のモデルに似ており、モデルはリクエスト本文の構造を定義します。 -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` これは、リスニングしているデータソースと誰かがやりとりするときに、データをどのように取得し、保存するかを決定するロジックです。データは変換され、あなたがリストアップしたスキーマに基づいて保存されます。 -サブグラフの開発には 2 つの重要なコマンドがあります: +During Subgraph development there are two key commands: ``` -$ graph codegen # マニフェストで識別されたようにファイルから型を生成します -$ グラフ ビルド # AssemblyScript ファイルから Web アセンブリを生成し、/build フォルダにすべてのサブグラフ ファイルを準備します +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## サブグラフマニフェストの定義 -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave データ ソースには、オプションの source.owner フィールドが導入されています。これは、Arweave ウォレットの公開鍵です。 @@ -99,7 +99,7 @@ Arweaveデータソースは 2 種類のハンドラーをサポートしてい ## スキーマ定義 -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript マッピング @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Arweaveサブグラフのクエリ -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## サブグラフの例 -参考までにサブグラフの例を紹介します: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### サブグラフは Arweave やその他のチェーンにインデックスを付けることができますか? +### Can a Subgraph index Arweave and other chains? -いいえ、サブグラフは 1 つのチェーン/ネットワークのデータソースのみをサポートします。 +No, a Subgraph can only support data sources from one chain/network. ### 保存されたファイルをArweaveでインデックス化することはできますか? 現在、The Graph は Arweave をブロックチェーン (ブロックとトランザクション) としてのみインデックス化しています。 -### 自分のサブグラフにあるBundlrバンドルは特定できるのか? +### Can I identify Bundlr bundles in my Subgraph? 現在はサポートされていません。 @@ -188,7 +188,7 @@ Source.ownerには、ユーザの公開鍵またはアカウントアドレス ### 現在の暗号化フォーマットは? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
+Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/ja/subgraphs/cookbook/enums.mdx b/website/src/pages/ja/subgraphs/cookbook/enums.mdx index 8df21d2960f9..14c608584b8f 100644 --- a/website/src/pages/ja/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. @@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. 
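The Arweave guide above ends by referencing a `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper without showing it. As a rough illustration, standard base64 encoding with an optional URL-safe alphabet can be sketched in plain TypeScript as follows; this is not necessarily the exact version added to `graph-ts` (for instance, some URL-safe variants also drop the `=` padding):

```typescript
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Encode bytes as base64; with urlSafe, swap '+' and '/' for '-' and '_'.
function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  const alphabet = urlSafe ? B64.replace("+", "-").replace("/", "_") : B64;
  let result = "";
  for (let i = 0; i < bytes.length; i += 3) {
    // Pack up to three input bytes into four 6-bit output characters.
    const b0 = bytes[i];
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0;
    result += alphabet[b0 >> 2];
    result += alphabet[((b0 & 0x03) << 4) | (b1 >> 4)];
    result += i + 1 < bytes.length ? alphabet[((b1 & 0x0f) << 2) | (b2 >> 6)] : "=";
    result += i + 2 < bytes.length ? alphabet[b2 & 0x3f] : "=";
  }
  return result;
}
```

Calling this on a transaction hash in a mapping would produce the `base64` form that block explorers such as Arweave Explorer display.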
-To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/ja/subgraphs/cookbook/grafting.mdx b/website/src/pages/ja/subgraphs/cookbook/grafting.mdx index 0be8b13c8dbd..0ce88bc00b3f 100644 --- a/website/src/pages/ja/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: グラフティングでコントラクトを取り替え、履歴を残す --- -このガイドでは、既存のサブグラフをグラフティングして新しいサブグラフを構築し、配備する方法を学びます。 +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## グラフティングとは? -グラフティングは、既存のサブグラフからデータを再利用し、後のブロックからインデックスを作成します。これは、開発中にマッピングの単純なエラーを素早く乗り越えるため、または、既存のサブグラフが失敗した後に一時的に再び動作させるために有用です。また、ゼロからインデックスを作成するのに時間がかかる機能をサブグラフに追加する場合にも使用できます。 +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -グラフト化されたサブグラフは、ベースとなるサブグラフのスキーマと同一ではなく、単に互換性のある GraphQL スキーマを使用することができます。また、それ自体は有効なサブグラフのスキーマでなければなりませんが、以下の方法でベースサブグラフのスキーマから逸脱することができます。 +The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it.
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - エンティティタイプを追加または削除する - エンティティタイプから属性を削除する @@ -22,38 +22,38 @@ title: グラフティングでコントラクトを取り替え、履歴を残 - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## ネットワークにアップグレードする際の移植に関する重要な注意事項 -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### 何でこれが大切ですか? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### ベストプラクティス -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. 
-**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. これらのガイドラインに従うことで、リスクを最小限に抑え、よりスムーズな移行プロセスを確保できます。 ## 既存のサブグラフの構築 -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## サブグラフマニフェストの定義 -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## グラフティングマニフェストの定義 -グラフティングは、元のサブグラフ・マニフェストに新しい2つの項目を追加する必要があります。 +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## ベースサブグラフの起動 -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. 終了後、サブグラフが正しくインデックスされていることを確認します。The Graph Playgroundで以下のコマンドを実行すると、サブグラフが正常にインデックスされます。 +1. 
Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly by running the following command in The Graph Playground: ```graphql { @@ -138,16 +138,16 @@ The `base` and `block` values can be found by deploying two subgraphs: one for t } ``` -サブグラフが正しくインデックスされていることを確認したら、グラフティングで素早くサブグラフを更新することができます。 +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## グラフティングサブグラフの展開 グラフト置換されたsubgraph.yamlは、新しいコントラクトのアドレスを持つことになります。これは、ダンプを更新したり、コントラクトを再デプロイしたりしたときに起こりうることです。 -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. 終了後、サブグラフが正しくインデックスされていることを確認します。The Graph Playgroundで以下のコマンドを実行すると、サブグラフが正常にインデックスされます。 +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft.
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly by running the following command in The Graph Playground:
The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## その他のリソース diff --git a/website/src/pages/ja/subgraphs/cookbook/near.mdx b/website/src/pages/ja/subgraphs/cookbook/near.mdx index 6f4069566be2..9e3738689919 100644 --- a/website/src/pages/ja/subgraphs/cookbook/near.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: NEAR でサブグラフを作成する --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## NEAR とは? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## NEAR サブグラフとは? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. 
-Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - ブロックハンドラ:新しいブロックごとに実行されます - レシートハンドラ:指定されたアカウントでメッセージが実行されるたびに実行されます @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## NEAR サブグラフの構築 -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> NEAR サブグラフの構築は、Ethereum のインデックスを作成するサブグラフの構築と非常によく似ています。 +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -サブグラフの定義には 3 つの側面があります: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. 
The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -サブグラフの開発には 2 つの重要なコマンドがあります: +During Subgraph development there are two key commands: ```bash -$ graph codegen # マニフェストで識別されたようにファイルから型を生成します -$ グラフ ビルド # AssemblyScript ファイルから Web アセンブリを生成し、/build フォルダにすべてのサブグラフ ファイルを準備します +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### サブグラフマニフェストの定義 -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR データソースは 2 種類のハンドラーをサポートしていま ### スキーマ定義 -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript マッピング @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSON. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## NEAR サブグラフの展開 -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
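Before moving on to deployment specifics, here is a minimal mapping sketch to make the receipt-handler and JSON-parsing notes above concrete. This is AssemblyScript compiled by `graph-cli` (not plain TypeScript), and the `Receipt` entity and its `method` field are our own illustrative names, not part of the official example:

```typescript
import { near, json, JSONValueKind } from '@graphprotocol/graph-ts'
import { Receipt } from '../generated/schema' // hypothetical entity from schema.graphql

export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
  const entity = new Receipt(receiptWithOutcome.receipt.id.toBase58())
  // Logs on NEAR are frequently stringified JSON, so parse the first log, if any.
  const logs = receiptWithOutcome.outcome.logs
  if (logs.length > 0) {
    const parsed = json.fromString(logs[0])
    if (parsed.kind == JSONValueKind.OBJECT) {
      const method = parsed.toObject().get('method')
      if (method != null) {
        entity.method = method.toString()
      }
    }
  }
  entity.save()
}
```

A handler like this would be registered under `receiptHandlers` in the manifest, next to the `blockHandlers` entry shown earlier.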
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -ノードの構成は、サブグラフがどこにディプロイされるかによって異なります。 +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -デプロイされたサブグラフは、Graph Node によってインデックス化され、その進捗状況は、サブグラフ自体にクエリして確認できます: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ NEAR のインデックスを作成するグラフノードの運用には、以 ## NEAR サブグラフへのクエリ -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## サブグラフの例 -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### ベータ版はどのように機能しますか? -NEAR サポートはベータ版です。統合の改善を続ける中で、API に変更が加えられる可能性があります。NEAR サブグラフの構築をサポートし、最新の開発状況をお知らせしますので、near@thegraph.comまでメールをお送りください。 +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### サブグラフは NEAR チェーンと EVM チェーンの両方にインデックスを付けることができますか? +### Can a Subgraph index both NEAR and EVM chains? -いいえ、サブグラフは 1 つのチェーン/ネットワークのデータソースのみをサポートします。 +No, a Subgraph can only support data sources from one chain/network. -### サブグラフはより具体的なトリガーに反応できますか? +### Can Subgraphs react to more specific triggers? 現在、ブロックとレシートのトリガーのみがサポートされています。指定されたアカウントへのファンクションコールのトリガーを検討しています。また、NEAR がネイティブイベントをサポートするようになれば、イベントトリガーのサポートも検討しています。 @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### NEAR サブグラフは、マッピング中に NEAR アカウントへのビュー呼び出しを行うことができますか? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? これはサポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 -### NEAR サブグラフでデータ ソース テンプレートを使用できますか? +### Can I use data source templates in my NEAR Subgraph? これは現在サポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 -### Ethereum サブグラフは「保留中」バージョンと「現在」バージョンをサポートしていますが、NEAR サブグラフの「保留中」バージョンをデプロイするにはどうすればよいですか? 
+### Ethereum Subgraphs support "pending" and "current" versions. How can I deploy a "pending" version of a NEAR Subgraph? -「pending」は、NEAR サブグラフではまだサポートされていません。暫定的に、新しいバージョンを別の「named」サブグラフにデプロイし、それがチェーンヘッドと同期したときに、メインの「named」サブグラフに再デプロイすることができます。 +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then, once that is synced with the chain head, you can redeploy to your primary "named" Subgraph. That redeployment will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### 私の質問に対する回答がありません。NEAR サブグラフの作成に関するヘルプはどこで入手できますか? +### My question hasn't been answered. Where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise, please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## 参考文献 diff --git a/website/src/pages/ja/subgraphs/cookbook/polymarket.mdx b/website/src/pages/ja/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/ja/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. 
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free, which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ja/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/ja/subgraphs/cookbook/secure-api-keys-nextjs.mdx index bac42648b0fc..ead239aa93e1 100644 --- a/website/src/pages/ja/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## 概要 -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. 
To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. -### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/ja/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/ja/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..2f92d8893576 --- /dev/null +++ b/website/src/pages/ja/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. 
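As a minimal sketch of what that `specVersion` requirement looks like in a manifest (the name, network, deployment ID, and start block below are placeholders, not values from this guide):

```yaml
# Composed-Subgraph manifest fragment; `kind: subgraph` uses another
# Subgraph, rather than a contract, as the data source.
specVersion: 1.3.0 # Subgraph composition requires specVersion 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime # placeholder name
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # deployment ID of a source Subgraph
      startBlock: 1000000
```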
+ +## 概要 + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## 始めましょう + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. 
Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. 
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance. + +## その他のリソース + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ja/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ja/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..ac7025ba8faa --- /dev/null +++ b/website/src/pages/ja/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## イントロダクション + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+ +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. 
+ +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## 始めましょう + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. 
Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. `data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. 
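Since the branching on `operation` is the heart of a dependent-Subgraph handler, here is a small plain-TypeScript sketch of that dispatch with the framework types mocked out (the real `EntityTrigger` and `EntityOp` come from the code generated for your Subgraph, and the pool and field names here are illustrative only):

```typescript
// Stand-ins for the generated types, for illustration only.
enum EntityOp {
  Create,
  Modify,
  Remove,
}

interface InitializeData {
  poolAddress: string
  tick: number
}

interface EntityTrigger<T> {
  operation: EntityOp
  type: string
  data: T
}

// A dependent-Subgraph handler picks an action based on the operation type,
// mirroring the handleInitialize example above.
function describeTrigger(trigger: EntityTrigger<InitializeData>): string {
  switch (trigger.operation) {
    case EntityOp.Create:
      return `create pool state for ${trigger.data.poolAddress}`
    case EntityOp.Modify:
      return `update pool state for ${trigger.data.poolAddress}`
    case EntityOp.Remove:
      return `remove pool state for ${trigger.data.poolAddress}`
    default:
      return 'unknown operation'
  }
}

console.log(describeTrigger({ operation: EntityOp.Create, type: 'Initialize', data: { poolAddress: '0xabc', tick: 0 } }))
```

In a real handler the branches would load and save entities instead of returning strings; the point is only that `trigger.operation` tells you whether the source entity was created, modified, or removed.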
+ +This approach unlocks composability and scalability, simplifying both development and maintenance. + +## その他のリソース + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/ja/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ja/subgraphs/cookbook/subgraph-debug-forking.mdx index 7d4e4d6a6e6f..cba9bbca2ff7 100644 --- a/website/src/pages/ja/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: フォークを用いた迅速かつ容易なサブグラフのデバッグ --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. 
This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! ## さて、それは何でしょうか? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## その方法は? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. -In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. 
## コードを見てみましょう -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. 通常の試すであろう修正方法: 1. マッピングソースを変更して問題の解決を試す(解決されないことは分かっていても) -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. 同期を待つ 4. 再び問題が発生した場合は、1に戻る It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. マッピングのソースを変更し、問題を解決する -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. 
もし再度、壊れる場合1に戻る さて、ここで2つの疑問が生じます: @@ -69,18 +69,18 @@ Using **subgraph forking** we can essentially eliminate this step. Here is how i 回答: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. フォーキングは簡単であり煩雑な手間はありません ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! そこで、以下の通りです: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. 
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/ja/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ja/subgraphs/cookbook/subgraph-uncrashable.mdx index 74d66b27fcaa..5f51f521b214 100644 --- a/website/src/pages/ja/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: 安全なサブグラフのコード生成 --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. 
It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Subgraph Uncrashable と統合する理由 -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. 
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that conform to the user's specification. - また、このフレームワークには、エンティティ変数のグループに対して、カスタムだが安全なセッター関数を作成する方法が(設定ファイルを通じて)含まれています。この方法では、ユーザーが古いグラフ・エンティティをロード/使用することは不可能であり、また、関数が必要とする変数の保存や設定を忘れることも不可能です。 -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy. Subgraph Uncrashableは、Graph CLI codegenコマンドでオプションのフラグとして実行することができます。 @@ -26,4 +26,4 @@ Subgraph Uncrashableは、Graph CLI codegenコマンドでオプションのフ graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ja/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ja/subgraphs/cookbook/transfer-to-the-graph.mdx index 6ef52284a5f5..890b8495ad7b 100644 --- a/website/src/pages/ja/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/ja/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: The Graphに移行する --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). 
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. ## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. 
In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. 
#### 例 -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). ### その他のリソース -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
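A query sent to a Subgraph's query URL (such as the CryptoPunks endpoint above) is an ordinary GraphQL document. The entity and field names below are illustrative only — they must match the entities actually defined in the Subgraph's schema, which you can inspect on its Explorer page:

```graphql
{
  # Illustrative query shape — replace `punks` and its fields with
  # entities defined in the target Subgraph's schema
  punks(first: 5, orderBy: id) {
    id
    owner
  }
}
```

The query is sent as the body of an HTTP POST to the query URL, with your API key substituted into the path.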
diff --git a/website/src/pages/ja/subgraphs/developing/_meta-titles.json b/website/src/pages/ja/subgraphs/developing/_meta-titles.json index 01a91b09ed77..f973d764cbc3 100644 --- a/website/src/pages/ja/subgraphs/developing/_meta-titles.json +++ b/website/src/pages/ja/subgraphs/developing/_meta-titles.json @@ -1,6 +1,6 @@ { - "creating": "Creating", - "deploying": "Deploying", - "publishing": "Publishing", - "managing": "Managing" + "creating": "作成", + "deploying": "デプロイ", + "publishing": "情報公開", + "managing": "管理" } diff --git a/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx index b6269f49fcf5..fc0e92c003b1 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## 概要 -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhanced your Subgraph's built. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## 致命的でないエラー -すでに同期しているサブグラフのインデックスエラーは、デフォルトではサブグラフを失敗させ、同期を停止させます。サブグラフは、エラーが発生したハンドラーによる変更を無視することで、エラーが発生しても同期を継続するように設定することができます。これにより、サブグラフの作成者はサブグラフを修正する時間を得ることができ、一方でクエリは最新のブロックに対して提供され続けますが、エラーの原因となったバグのために結果が一貫していない可能性があります。なお、エラーの中には常に致命的なものもあり、致命的でないものにするためには、そのエラーが決定論的であることがわかっていなければなりません。 +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. 
-非致命的エラーを有効にするには、サブグラフのマニフェストに以下の機能フラグを設定する必要があります: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - - fullTextSearch + - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -ファイルデータソースは、堅牢で拡張可能な方法でインデックス作成中にオフチェーンデータにアクセスするための新しいサブグラフ機能です。ファイルデータソースは、IPFS および Arweave からのファイルのフェッチをサポートしています。 +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> また、オフチェーンデータの決定論的なインデックス作成、および任意のHTTPソースデータの導入の可能性についても基礎ができました。 @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ export function handleTransfer(event: TransferEvent): void { This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file おめでとうございます!ファイルデータソースが使用できます。 -#### サブグラフのデプロイ +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### 制限事項 -ファイルデータソースハンドラおよびエンティティは、他のサブグラフエンティティから分離され、実行時に決定論的であることを保証し、チェーンベースのデータソースを汚染しないことを保証します。具体的には、以下の通りです。 +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: - ファイルデータソースで作成されたエンティティは不変であり、更新することはできません。 - ファイルデータソースハンドラは、他のファイルデータソースのエンティティにアクセスすることはできません。 - ファイルデータソースに関連するエンティティは、チェーンベースハンドラーからアクセスできません。 -> この制約は、ほとんどのユースケースで問題になることはありませんが、一部のユースケースでは複雑さをもたらすかもしれません。ファイルベースのデータをサブグラフでモデル化する際に問題がある場合は、Discordを通じてご連絡ください。 +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! また、オンチェーンデータソースや他のファイルデータソースからデータソースを作成することはできません。この制限は、将来的に解除される可能性があります。 @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. 
This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. 
+- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. 
Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. 
Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. 
Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -グラフトはベースデータのインデックスではなくコピーを行うため、スクラッチからインデックスを作成するよりもサブグラフを目的のブロックに早く到達させることができますが、非常に大きなサブグラフの場合は最初のデータコピーに数時間かかることもあります。グラフトされたサブグラフが初期化されている間、グラフノードは既にコピーされたエンティティタイプに関する情報を記録します。 +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -グラフト化されたサブグラフは、ベースとなるサブグラフのスキーマと同一ではなく、単に互換性のある GraphQL スキーマを使用することができます。また、それ自体は有効なサブグラフのスキーマでなければなりませんが、以下の方法でベースサブグラフのスキーマから逸脱することができます。 +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - エンティティタイプを追加または削除する - エンティティタイプから属性を削除する @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - インターフェースの追加または削除 - インターフェースがどのエンティティタイプに実装されるかを変更する -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. 
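Putting the pieces of this section together, a grafted manifest declares both the `grafting` feature and the `graft` block. The following is a hypothetical minimal sketch; the `specVersion` value and placeholder IDs are assumptions, not taken from this page:

```yaml
specVersion: 0.0.4 # any version that supports feature management
description: ...
features:
  - grafting # must be declared whenever a graft block is present
graft:
  base: Qm... # Subgraph ID of base Subgraph
  block: 7345624 # block up to which data is copied from the base
```

With this in place, Graph Node copies the base Subgraph's data up to and including block 7345624 and then continues indexing from that block on.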
diff --git a/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx index 50b664c86f3b..e46466a45c92 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## コード生成 -スマートコントラクト、イベント、エンティティを簡単かつタイプセーフに扱うために、Graph CLIはサブグラフのGraphQLスキーマとデータソースに含まれるコントラクトABIからAssemblyScriptタイプを生成することができます。 +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
これを行うためには @@ -80,7 +80,7 @@ If no value is set for a field in the new entity with the same ID, the field wil graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. 
These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx index c9d5c8a3ba47..b9e5cace8281 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ Since language mappings are written in AssemblyScript, it is useful to review th ### バージョン -サブグラフマニフェストapiVersionは、特定のサブグラフのマッピングAPIバージョンを指定します。このバージョンは、Graph Nodeによって実行されます。 +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| バージョン | リリースノート | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加
Ethereum Event オブジェクトに `receipt` フィールドを追加。 | -| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加
Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 | +| バージョン | リリースノート | +| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加
Ethereum Event オブジェクトに `receipt` フィールドを追加。 | +| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加
Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 | +| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 | ### 組み込み型 @@ -223,7 +223,7 @@ Bytesの API の上に以下のメソッドを追加しています。 Store API は、グラフノードのストアにエンティティを読み込んだり、保存したり、削除したりすることができます。 -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### エンティティの作成 @@ -282,11 +282,11 @@ graph-node v0.31.0、@graphprotocol/graph-ts v0.30.0、および @graphprotocol/ The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript -let id = event.transaction.hash // または ID が構築される方法 +let id = event.transaction.hash // または ID が構築される方法 let transfer = Transfer.loadInBlock(id) if (transfer == null) { transfer = 新しい転送(id) @@ -380,11 +380,11 @@ Ethereum API は、スマートコントラクト、パブリックステート #### Ethereum タイプのサポート -エンティティと同様に、graph codegenは、サブグラフで使用されるすべてのスマートコントラクトとイベントのためのクラスを生成します。 このためには、コントラクト ABI がサブグラフマニフェストのデータソースの一部である必要があります。 通常、ABI ファイルはabis/フォルダに格納されています。 +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -生成されたクラスでは、Ethereum Typeと[built-in types](#built-in-types)間の変換が舞台裏で行われるため、サブグラフ作成者はそれらを気にする必要がありません。 +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -以下の例で説明します。 以下のようなサブグラフのスキーマが与えられます。 +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### スマートコントラクトの状態へのアクセス -graph codegenが生成するコードには、サブグラフで使用されるスマートコントラクトのクラスも含まれています。 これらを使って、パブリックな状態変数にアクセスしたり、現在のブロックにあるコントラクトの関数を呼び出したりすることができます。 +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. 
よくあるパターンは、イベントが発生したコントラクトにアクセスすることです。 これは以下のコードで実現できます。 @@ -506,7 +506,7 @@ Transferは、エンティティタイプとの名前の衝突を避けるため Ethereum の ERC20Contractにsymbolというパブリックな読み取り専用の関数があれば、.symbol()で呼び出すことができます。 パブリックな状態変数については、同じ名前のメソッドが自動的に作成されます。 -サブグラフの一部である他のコントラクトは、生成されたコードからインポートすることができ、有効なアドレスにバインドすることができます。 +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### リバートされた呼び出しの処理 @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false '@graphprotocol/graph-ts'から{ log } をインポートします。 ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. log API には以下の機能があります: @@ -590,7 +590,7 @@ log API には以下の機能があります: - `log.info(fmt: string, args: Array): void` - インフォメーションメッセージを記録します。 - `log.warning(fmt: string, args: Array): void` - 警告メッセージを記録します。 - `log.error(fmt: string, args: Array): void` - エラーメッセージを記録します。 -- `log.critical(fmt: string, args: Array): void` - クリティカル・メッセージを記録して、サブグラフを終了します。 +- `log.critical(fmt: string, args: Array): void` - logs a critical message _and_ terminates the Subgraph. 
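Each of the `log` functions above takes a format string and an array of string values, replacing each `{}` placeholder in order. The substitution behavior can be modeled in plain JavaScript; this is a standalone illustrative sketch, not the actual graph-ts implementation:

```javascript
// Model of graph-ts log formatting: each '{}' placeholder in `fmt` is
// replaced, left to right, by the next value in `args`.
function formatLog(fmt, args) {
  let i = 0
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

// Mirrors a call like log.info('Block number: {}, hash: {}', [number, hash])
console.log(formatLog('Block number: {}, hash: {}', ['47596000', '0x9b83']))
// → Block number: 47596000, hash: 0x9b83
```

In this sketch, a placeholder with no matching value is left as `{}` rather than raising an error.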
log API は、フォーマット文字列と文字列値の配列を受け取ります。 そして、プレースホルダーを配列の文字列値で置き換えます。 最初の{}プレースホルダーは配列の最初の値に置き換えられ、2 番目の{}プレースホルダーは 2 番目の値に置き換えられ、以下のようになります。 @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) 現在サポートされているフラグは `json` だけで、これは `ipfs.map` に渡さなければなりません。json` フラグを指定すると、IPFS ファイルは一連の JSON 値で構成されます。ipfs.map` を呼び出すと、ファイルの各行を読み込んで `JSONValue` にデシリアライズし、それぞれのコールバックを呼び出します。コールバックは `JSONValue` からデータを格納するためにエンティティ操作を使用することができます。エンティティの変更は、`ipfs.map` を呼び出したハンドラが正常に終了したときのみ保存されます。その間はメモリ上に保持されるので、`ipfs.map` が処理できるファイルのサイズは制限されます。 -成功すると,ipfs.mapは voidを返します。 コールバックの呼び出しでエラーが発生した場合、ipfs.mapを呼び出したハンドラは中止され、サブグラフは失敗とマークされます。 +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -770,44 +770,44 @@ if (value.kind == JSONValueKind.BOOL) { ### タイプ 変換参照 -| Source(s) | Destination | Conversion function | -| -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | none | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | -| int32 | BigInt | Bigint.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | 
s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromString(s) | -| String | BigInt | BigDecimal.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Source(s) | Destination | Conversion function | +| -------------------- | -------------------- | -------------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | 
+| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### データソースのメタデータ @@ -836,7 +836,7 @@ if (value.kind == JSONValueKind.BOOL) { ### マニフェスト内のDataSourceContext -DataSources`の`context`セクションでは、サブグラフマッピング内でアクセス可能なキーと値のペアを定義することができます。使用可能な型は`Bool`、`String`、`Int`、`Int8`、`BigDecimal`、`Bytes`、`List`、`BigInt\` です。 +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 以下は `context` セクションのさまざまな型の使い方を示す YAML の例です: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -このコンテキストは、サブグラフのマッピング・ファイルからアクセスでき、よりダイナミックで設定可能なサブグラフを実現します。 +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx index 9bb0634b57b3..e7622788c797 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: AssemblyScriptのよくある問題 --- -AssemblyScript](https://github.com/AssemblyScript/assemblyscript)には、サブグラフの開発中によく遭遇する問題があります。これらの問題は、デバッグの難易度に幅がありますが、認識しておくと役に立つかもしれません。以下は、これらの問題の非網羅的なリストです: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). 
There is no way to protect class variables from being directly changed from the class object. - スコープは[クロージャー関数](https://www.assemblyscript.org/status.html#on-closures)には継承されません。つまり、クロージャー関数の外で宣言された変数は使用できません。Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s)に説明があります。 diff --git a/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx index 397b011cbdd3..3352df16b841 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Graph CLI のインストール --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## 概要 -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## はじめに @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## サブグラフの作成 ### 既存のコントラクトから -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### サブグラフの例から -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI ファイルは、契約内容と一致している必要があります。ABI ファイルを入手するにはいくつかの方法があります: - 自分のプロジェクトを構築している場合は、最新の ABI にアクセスできる可能性があります。 -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| バージョン | リリースノート | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx index fb06d8d022a0..32b7c233efa2 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## 概要 -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| タイプ | 説明書き | -| --- | --- | -| `Bytes` | Byte 配列で、16 進数の文字列で表されます。Ethereum のハッシュやアドレスによく使われます。 | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| タイプ | 説明書き | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte 配列で、16 進数の文字列で表されます。Ethereum のハッシュやアドレスによく使われます。 | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. 
-1 対多の関係では、関係は常に「1」側に格納され、「多」側は常に派生されるべきです。「多」側にエンティティの配列を格納するのではなく、このように関係を格納することで、サブグラフのインデックス作成と問い合わせの両方で劇的にパフォーマンスが向上します。一般的に、エンティティの配列を保存することは、現実的に可能な限り避けるべきです。 +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### 例 @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -このように多対多の関係をより精巧に保存する方法では、サブグラフに保存されるデータが少なくなるため、サブグラフのインデックス作成や問い合わせが劇的に速くなります。 +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and to query. ### スキーマへのコメントの追加 @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## 対応言語 @@ -296,29 +296,29 @@ query { サポートされている言語の辞書: | Code | 辞書 | -| ------ | ------------ | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | ポルトガル語 | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | ポルトガル語 | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | ### ランキングアルゴリズム サポートされている結果の順序付けのアルゴリズム: -| Algorithm | Description | -| ------------- | ------------------------------------------------------------------- | -| rank | フルテキストクエリのマッチ品質 (0-1) を使用して結果を並べ替えます。 | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | フルテキストクエリのマッチ品質 (0-1) を使用して結果を並べ替えます。 | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx index c2dcb7ad1d68..3c40e48ef42d 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## 概要 -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. 
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+ +| バージョン | リリースノート | +| :---: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx index 1fc82b54930d..b8ad63e6a256 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## 概要 -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
-For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). マニフェストを更新する重要な項目は以下の通りです: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. 
-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. 
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. 
Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## コールハンドラー -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. 
Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. コールハンドラーは、次の 2 つのケースのいずれかでのみトリガされます:指定された関数がコントラクト自身以外のアカウントから呼び出された場合、または Solidity で外部としてマークされ、同じコントラクト内の別の関数の一部として呼び出された場合。 -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. 
### コールハンドラーの定義 @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### マッピング関数 -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## ブロック・ハンドラー -コントラクトイベントやファンクションコールの購読に加えて、サブグラフは、新しいブロックがチェーンに追加されると、そのデータを更新したい場合があります。これを実現するために、サブグラフは各ブロックの後、あるいは事前に定義されたフィルタにマッチしたブロックの後に、関数を実行することができます。 +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### 対応フィルター @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depend on the Parity tracing API. 
Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. ブロックハンドラーにフィルターがない場合、ハンドラーはブロックごとに呼び出されます。1 つのデータソースには、各フィルタータイプに対して 1 つのブロックハンドラーしか含めることができません。 @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### ワンスフィルター @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Once フィルターを使用して定義されたハンドラーは、他のすべてのハンドラーが実行される前に 1 回だけ呼び出されます。 この構成により、サブグラフはハンドラーを初期化ハンドラーとして使用し、インデックス作成の開始時に特定のタスクを実行できるようになります。 +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### マッピング関数 -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. 
```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## スタートブロック(start Blocks) -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. 
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| バージョン | リリースノート | +| :---: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx index 5a089a93aa50..ececebba24c5 100644 --- a/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ja/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: ユニットテストフレームワーク --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## はじめに @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). 
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### デモ・サブグラフ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### ビデオチュートリアル -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im これで最初のテストが完成しました! 
👏 -テストを実行するには、サブグラフのルートフォルダで以下を実行する必要があります: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. -NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## テストカバレッジ -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## その他のリソース -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## フィードバック diff --git a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx index 53c7dcfbd86b..a43e7a32c7b8 100644 --- a/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ja/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## サブグラフを複数のネットワークにデプロイする +## Deploying the Subgraph to multiple networks -場合によっては、すべてのコードを複製せずに、同じサブグラフを複数のネットワークに展開する必要があります。これに伴う主な課題は、これらのネットワークのコントラクト アドレスが異なることです。 +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. 
Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio・サブグラフ・アーカイブポリシー +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -このポリシーで影響を受けるすべてのサブグラフには、問題のバージョンを戻すオプションがあります。 +Every Subgraph affected by this policy has an option to bring the version in question back. -## サブグラフのヘルスチェック +## Checking Subgraph health -サブグラフが正常に同期された場合、それはそれが永久に正常に動作し続けることを示す良い兆候です。ただし、ネットワーク上の新しいトリガーにより、サブグラフがテストされていないエラー状態に陥ったり、パフォーマンスの問題やノード オペレーターの問題により遅れが生じたりする可能性があります。 +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph.
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
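The status response above can be interpreted with a few lines of client code. Below is a minimal TypeScript sketch; the field names mirror the index-node schema quoted in the docs, while the concrete block numbers and the `summarize` helper are illustrative assumptions, not part of any official API:

```typescript
// Shape of one entry from the `indexingStatuses` response
// (assumed to mirror the index-node schema referenced above).
interface BlockPointer {
  number: number
}

interface ChainStatus {
  chainHeadBlock: BlockPointer
  latestBlock: BlockPointer
}

interface IndexingStatus {
  synced: boolean
  health: "healthy" | "unhealthy" | "failed"
  fatalError: { message: string } | null
  chains: ChainStatus[]
}

// How far the Subgraph lags behind the chain head, in blocks.
function blocksBehind(status: IndexingStatus): number {
  const chain = status.chains[0]
  return chain.chainHeadBlock.number - chain.latestBlock.number
}

// One-line summary following the rules in the text: `failed` means an
// error halted progress (check `fatalError`), otherwise report the lag.
function summarize(status: IndexingStatus): string {
  if (status.health === "failed") {
    return `failed: ${status.fatalError?.message ?? "unknown error"}`
  }
  const behind = blocksBehind(status)
  return behind === 0 ? "in sync" : `${behind} blocks behind`
}

// Illustrative payload (block numbers are made up):
const example: IndexingStatus = {
  synced: true,
  health: "healthy",
  fatalError: null,
  chains: [{ chainHeadBlock: { number: 19000120 }, latestBlock: { number: 19000100 } }],
}

console.log(summarize(example)) // "20 blocks behind"
```

A monitoring script could fetch the status query from the `8030/graphql` endpoint and feed each result through `summarize` to alert when the lag grows or `health` flips to `failed`.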
diff --git a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx index 21bb85d4fb51..4e8503e208e4 100644 --- a/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ja/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- 特定のサブグラフ用の API キーの作成と管理 +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Subgraph Studio でサブグラフを作成する方法 @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph と The Graph Network の互換性 -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- 以下の機能のいずれも使用してはいけません: - - ipfs.cat & ipfs.map - - 致命的でないエラー - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## グラフ認証 -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## サブグラフのバージョンの自動アーカイブ -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ja/subgraphs/developing/developer-faq.mdx b/website/src/pages/ja/subgraphs/developing/developer-faq.mdx index 9744d7d9a53d..54a9d8b3a865 100644 --- a/website/src/pages/ja/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ja/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. サブグラフとは +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. サブグラフに関連付けられている GitHub アカウントを変更できますか? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -サブグラフを再デプロイする必要がありますが、サブグラフの ID(IPFS ハッシュ)が変わらなければ、最初から同期する必要はありません。 +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -サブグラフ内では、複数のコントラクトにまたがっているかどうかにかかわらず、イベントは常にブロックに表示される順序で処理されます。 +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? 
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. 
Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, replacing "organization/subgraphName" with the organization you published under and the name of your Subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ja/subgraphs/developing/introduction.mdx b/website/src/pages/ja/subgraphs/developing/introduction.mdx index 982e426ba4aa..e7d2fb8eff33 100644 --- a/website/src/pages/ja/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ja/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
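To make the querying model concrete, here is a sketch of a GraphQL query against a Subgraph. The `tokens` entity and its fields are hypothetical and depend on the Subgraph's schema, while `_meta` is a built-in field that reports indexing status:

```graphql
{
  # Hypothetical entity defined in the Subgraph's schema.graphql
  tokens(first: 5, orderBy: id) {
    id
    owner
  }
  # Built-in metadata field: latest block processed and error status
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

This also answers the earlier question about the latest indexed block: `_meta.block.number` returns the most recent block the Subgraph has processed.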
diff --git a/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx index 6a9aef388d02..b8c2330ca49d 100644 --- a/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- キュレーターは、サブグラフにシグナルを送ることができなくなります。 -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on the ERC721 standard, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2.
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx index f9d92cf7d0d9..c26672ec6b84 100644 --- a/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ja/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: 分散型ネットワークへのサブグラフの公開 +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### パブリッシュされたサブグラフのメタデータの更新 +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
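Assuming your wallet and Studio deploy key are already configured, the CLI publish flow described above amounts to running the two commands from step 2:

```
# Generate AssemblyScript types and compile the Subgraph to WASM
graph codegen && graph build

# Publish; this opens the window for connecting a wallet and adding metadata
graph publish
```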
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ja/subgraphs/developing/subgraphs.mdx b/website/src/pages/ja/subgraphs/developing/subgraphs.mdx index 9f1d50744aab..b96912052ef7 100644 --- a/website/src/pages/ja/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ja/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: サブグラフ ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
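As a sketch of how such a custom API is wired up, a minimal `subgraph.yaml` manifest might look like the following; the contract name, address, start block, and event are all hypothetical placeholders:

```yaml
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: ExampleContract # hypothetical name
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000" # placeholder address
      abi: ExampleContract
      startBlock: 1234567 # typically the contract's deployment block
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ExampleContract
          file: ./abis/ExampleContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```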
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1.
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
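For example, rolling out a fix as a new version might look like this; the Subgraph slug is a placeholder, and the CLI prompts for a new version label during deployment:

```
# Build and deploy a new version to Subgraph Studio for testing
graph codegen && graph build
graph deploy <SUBGRAPH_SLUG>
```

Once the new version looks healthy in Studio, publishing it points the Subgraph to the new version, and auto-migrated signal follows it.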
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ja/subgraphs/explorer.mdx b/website/src/pages/ja/subgraphs/explorer.mdx index 94d1203d9084..0357d63fda7e 100644 --- a/website/src/pages/ja/subgraphs/explorer.mdx +++ b/website/src/pages/ja/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: グラフエクスプローラ --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- サブグラフのシグナル/アンシグナル +- Signal/Un-signal on Subgraphs - View details such as charts, the current deployment ID, and other metadata -- バージョンを切り替えて、サブグラフの過去のイテレーションを調べる -- GraphQL によるサブグラフのクエリ -- プレイグラウンドでのサブグラフのテスト -- 特定のサブグラフにインデクシングしているインデクサーの表示 +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc.) -- サブグラフを公開したエンティティの表示 +- View the entity that published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph.
As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing.
So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### サブグラフタブ -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### インデックスタブ -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます: @@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de ### キュレーションタブ -[キュレーション] タブには、シグナルを送信しているすべてのサブグラフが表示されます (これにより、クエリ料金を受け取ることができます)。シグナリングにより、キュレーターはどのサブグラフが価値があり信頼できるかをインデクサーに強調表示し、それらをインデックス化する必要があることを知らせることができます。 +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, and thus should be indexed.
このタブでは、以下の概要を見ることができます: -- キュレーションしている全てのサブグラフとシグナルの詳細 -- サブグラフごとのシェアの合計 -- サブグラフごとのクエリ報酬 +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - 日付詳細に更新済み ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/ja/subgraphs/guides/arweave.mdx b/website/src/pages/ja/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..66eef9c8160f --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: Arweaveでのサブグラフ構築 +--- + +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! + +このガイドでは、Arweaveブロックチェーンのインデックスを作成するためのサブグラフの構築とデプロイ方法について学びます。 + +## Arweaveとは? + +Arweave プロトコルは、開発者がデータを永久に保存することを可能にし、それが Arweave と IPFS の主な違いです。IPFSは永続性に欠ける一方、Arweaveに保存されたファイルは変更も削除もできません。 + +Arweaveは既に、さまざまなプログラミング言語でプロトコルを統合するための多数のライブラリを構築しています。詳細については、次を確認できます。 + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Arweave Resources](https://www.arweave.org/build) + +## Arweaveサブグラフとは? + +The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/). + +[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet. + +## Arweave サブグラフの作成 + +Arweaveのサブグラフを構築し展開できるようにするためには、2つのパッケージが必要です。 + +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. 
`@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. + +## サブグラフのコンポーネント + +There are three components of a Subgraph: + +### 1. Manifest - `subgraph.yaml` + +対象のデータ ソースとその処理方法を定義します。 Arweave は新しい種類のデータ ソースです。 + +### 2. Schema - `schema.graphql` + +ここでは、GraphQL を使用してサブグラフにインデックスを付けた後にクエリできるようにするデータを定義します。これは実際には API のモデルに似ており、モデルはリクエスト本文の構造を定義します。 + +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +### 3. AssemblyScript Mappings - `mapping.ts` + +これは、リスニングしているデータソースと誰かがやりとりするときに、データをどのように取得し、保存するかを決定するロジックです。データは変換され、あなたがリストアップしたスキーマに基づいて保存されます。 + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## サブグラフマニフェストの定義 + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave データ ソースには、オプションの source.owner フィールドが導入されています。これは、Arweave ウォレットの公開鍵です。 + +Arweaveデータソースは 2 種類のハンドラーをサポートしています: + +- `blockHandlers` - Run on every new Arweave block. No source.owner is required. +- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` + +> Source.owner は、所有者のアドレスまたは公開鍵にすることができます。 +> +> トランザクションはArweave permawebの構成要素であり、エンドユーザーによって作成されるオブジェクトです。 +> +> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. + +## スキーマ定義 + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). 
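To make the schema's role concrete, here is a small TypeScript sketch of the kind of query a consumer could compose once the schema defines a `Block` entity. It is illustrative only: the field names (`height`, `timestamp`) are assumptions based on the Arweave block data described in this guide, and in practice queries are sent to the Subgraph's GraphQL endpoint rather than built in application code like this.

```typescript
// Sketch only: composes a GraphQL query for a hypothetical `Block` entity.
// Field names are assumptions, not a guaranteed schema.
function buildBlocksQuery(first: number): string {
  return `{
  blocks(first: ${first}, orderBy: height, orderDirection: desc) {
    id
    height
    timestamp
  }
}`
}

// The resulting string asks for the most recently indexed blocks.
console.log(buildBlocksQuery(5))
```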
+ +## AssemblyScript マッピング + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). + +```tsx +class Block { + timestamp: u64 + lastRetarget: u64 + height: u64 + indepHash: Bytes + nonce: Bytes + previousBlock: Bytes + diff: Bytes + hash: Bytes + txRoot: Bytes + txs: Bytes[] + walletList: Bytes + rewardAddr: Bytes + tags: Tag[] + rewardPool: Bytes + weaveSize: Bytes + blockSize: Bytes + cumulativeDiff: Bytes + hashListMerkle: Bytes + poa: ProofOfAccess +} + +class Transaction { + format: u32 + id: Bytes + lastTx: Bytes + owner: Bytes + tags: Tag[] + target: Bytes + quantity: Bytes + data: Bytes + dataSize: Bytes + dataRoot: Bytes + signature: Bytes + reward: Bytes +} +``` + +Block handlers receive a `Block`, while transactions receive a `Transaction`. + +Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). + +## Deploying an Arweave Subgraph in Subgraph Studio + +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. + +```bash +graph deploy --access-token +``` + +## Arweaveサブグラフのクエリ + +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## サブグラフの例 + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### 保存されたファイルをArweaveでインデックス化することはできますか? 
+ +現在、The Graph は Arweave をブロックチェーン (ブロックとトランザクション) としてのみインデックス化しています。 + +### Can I identify Bundlr bundles in my Subgraph? + +現在はサポートされていません。 + +### トランザクションを特定のアカウントにフィルターするにはどうすればよいですか? + +Source.ownerには、ユーザの公開鍵またはアカウントアドレスを指定することができます。 + +### 現在の暗号化フォーマットは? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). + +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? 
base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (i === l) { // 2 octets yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; +} +``` diff --git a/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..414948153176 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@ +--- +title: Smart Contract Analysis with Cana CLI +--- + +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. + +## 概要 + +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+ +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of chains added run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +または + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. 
Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ja/subgraphs/guides/enums.mdx b/website/src/pages/ja/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..14c608584b8f --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. + +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. 
+ +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. 
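The same failure mode can be sketched outside GraphQL. In plain TypeScript (a conceptual sketch, not Subgraph code), a bare `string` field accepts the `Orgnalowner` typo silently, while restricting the field to a fixed set of values lets you reject it:

```typescript
// The allowed statuses, mirroring the TokenStatus enum from the schema.
const TOKEN_STATUSES = ['OriginalOwner', 'SecondOwner', 'ThirdOwner'] as const
type TokenStatus = (typeof TOKEN_STATUSES)[number]

// Runtime guard mirroring what the enum enforces at the schema level.
function isTokenStatus(value: string): value is TokenStatus {
  return (TOKEN_STATUSES as readonly string[]).includes(value)
}

console.log(isTokenStatus('OriginalOwner')) // true
console.log(isTokenStatus('Orgnalowner')) // false: the typo is rejected
```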
+ +### With Enums + +Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. + +Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. + +## Defining Enums for NFT Marketplaces + +> Note: The following guide uses the CryptoCoven NFT smart contract. + +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: + +```gql +# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) +enum Marketplace { + OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace + OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace + SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace + LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace + # ...and other marketplaces +} +``` + +## Using Enums for NFT Marketplaces + +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. + +For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
+ +### Implementing a Function for NFT Marketplaces + +Here's how you can implement a function to retrieve the marketplace name from the enum as a string: + +```ts +export function getMarketplaceName(marketplace: Marketplace): string { + // Using if-else statements to map the enum value to a string + if (marketplace === Marketplace.OpenSeaV1) { + return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + } else if (marketplace === Marketplace.OpenSeaV2) { + return 'OpenSeaV2' + } else if (marketplace === Marketplace.SeaPort) { + return 'SeaPort' // If the marketplace is SeaPort, return its string representation + } else if (marketplace === Marketplace.LooksRare) { + return 'LooksRare' // If the marketplace is LooksRare, return its string representation + // ... and other market places + } + return 'Unknown' // Fallback so every code path returns a value +} +``` + +## Best Practices for Using Enums + +- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. +- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. +- **Documentation:** Add comments to enums to clarify their purpose and usage. + +## Using Enums in Queries + +Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. + +**Specifics** + +- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. +- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. + +### Sample Queries + +#### Query 1: Account With The Highest NFT Marketplace Interactions + +This query does the following: + +- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## その他のリソース + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). diff --git a/website/src/pages/ja/subgraphs/guides/grafting.mdx b/website/src/pages/ja/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..0ce88bc00b3f --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@ +--- +title: グラフティングでコントラクトを取り替え、履歴を残す +--- + +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. + +## グラフティングとは? 
+ +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- エンティティタイプを追加または削除する +- エンティティタイプから属性を削除する +- nullable な属性をエンティティタイプに追加する +- null 化できない属性を null 化できる属性に変更する +- enums に値を追加する +- インターフェースの追加または削除 +- インターフェースがどのエンティティタイプに実装されるかを変更する + +詳しくは、こちらでご確認ください。 + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. + +## ネットワークにアップグレードする際の移植に関する重要な注意事項 + +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network + +### 何でこれが大切ですか? + +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### ベストプラクティス + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+ +これらのガイドラインに従うことで、リスクを最小限に抑え、よりスムーズな移行プロセスを確保できます。 + +## 既存のサブグラフの構築 + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## サブグラフマニフェストの定義 + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## グラフティングマニフェストの定義 + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... 
# Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting + +## ベースサブグラフの起動 + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +このようなものが返ってきます: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + } + ] + } +} +``` + +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. + +## グラフティングサブグラフの展開 + +グラフト置換されたsubgraph.yamlは、新しいコントラクトのアドレスを持つことになります。これは、ダンプを更新したり、コントラクトを再デプロイしたりしたときに起こりうることです。 + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. 
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly by running the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+以下のように返ってくるはずです:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
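+
+Putting the pieces together, the `graft-replacement` manifest is simply the base manifest plus the `features` and `graft` keys. The sketch below is illustrative only — the contract address, `base` Deployment ID, and block numbers are placeholders that you must replace with your own values:
+
+```yaml
+specVersion: 1.3.0
+features:
+  - grafting
+graft:
+  base: Qm... # Deployment ID of the original graft-example Subgraph
+  block: 5956000 # block of the last event you care about from the old contract
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0x...' # the newly deployed contract address
+      abi: Lock
+      startBlock: 5956000
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```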
+ +## その他のリソース + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/ja/subgraphs/guides/near.mdx b/website/src/pages/ja/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..9e3738689919 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: NEAR でサブグラフを作成する +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## NEAR とは? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. + +Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR Subgraphs: + +- ブロックハンドラ:新しいブロックごとに実行されます +- レシートハンドラ:指定されたアカウントでメッセージが実行されるたびに実行されます + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> レシートは、システム内で唯一実行可能なオブジェクトです。NEAR プラットフォームで「トランザクションの処理」といえば、最終的にはどこかの時点で「レシートの適用」を意味します。 + +## NEAR サブグラフの構築 + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. + +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### サブグラフマニフェストの定義 + +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+NEAR データソースは 2 種類のハンドラーをサポートしています:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient.
Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### スキーマ定義
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript マッピング
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64
+  blockHash: Bytes
+  id: Bytes
+  logs: Array<string>
+  receiptIds: Array<Bytes>
+  tokensBurnt: BigInt
+  executorId: string
+}
+
+class ActionReceipt {
+  predecessorId: string
+  receiverId: string
+  id: CryptoHash
+  signerId: string
+  gasPrice: BigInt
+  outputDataReceivers: Array<DataReceiver>
+  inputDataIds: Array<CryptoHash>
+  actions: Array<ActionValue>
+}
+
+class BlockHeader {
+  height: u64
+  prevHeight: u64 // Always zero when version < V3
+  epochId: Bytes
+  nextEpochId: Bytes
+  chunksIncluded: u64
+  hash: Bytes
+  prevHash: Bytes
+  timestampNanosec: u64
+  randomValue: Bytes
+  gasPrice: BigInt
+  totalSupply: BigInt
+  latestProtocolVersion: u32
+}
+
+class ChunkHeader {
+  gasUsed: u64
+  gasLimit: u64
+  shardId: u64
+  chunkHash: Bytes
+  prevBlockHash: Bytes
+  balanceBurnt: BigInt
+}
+
+class Block {
+  author: string
+  header: BlockHeader
+  chunks: Array<ChunkHeader>
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome
+  receipt: ActionReceipt
+  block: Block
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript
API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+
+## NEAR サブグラフの展開
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy it by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### ローカル グラフ ノード (デフォルト構成に基づく)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node.
You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### ローカル グラフ ノードを使用した NEAR のインデックス作成 + +NEAR のインデックスを作成するグラフノードの運用には、以下のような運用要件があります: + +- NEAR Indexer Framework と Firehose instrumentation +- NEAR Firehose コンポーネント +- Firehose エンドポイントが設定されたグラフノード + +上記のコンポーネントの運用については、近日中に詳しくご紹介します。 + +## NEAR サブグラフへのクエリ + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## サブグラフの例 + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### ベータ版はどのように機能しますか? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +現在、ブロックとレシートのトリガーのみがサポートされています。指定されたアカウントへのファンクションコールのトリガーを検討しています。また、NEAR がネイティブイベントをサポートするようになれば、イベントトリガーのサポートも検討しています。 + +### 領収書ハンドラーは、アカウントとそのサブアカウントに対してトリガーされますか? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? 
+ +これはサポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 + +### Can I use data source templates in my NEAR Subgraph? + +これは現在サポートされていません。この機能がインデックス作成に必要かどうかを評価しています。 + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? + +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## 参考文献 + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ja/subgraphs/guides/polymarket.mdx b/website/src/pages/ja/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. 
+ +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. + +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s 
GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+
+### Polymarket Subgraph Endpoint
+
+https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
+
+The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+
+![Polymarket Endpoint](/img/Polymarket-endpoint.png)
+
+## How to Get your own API Key
+
+1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example shows how to send a query to the Polymarket endpoint from Node.js and print the result.
+
+### Sample Code from node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+    payout
+    redeemer
+    id
+    timestamp
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..ead239aa93e1
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## 概要
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. +- Next.js server components introduce centralization risks as the server can go down. + +### Why It's Needed + +In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. + +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. 
+ +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+ ) +} +``` + +### Step 4: Use the Server Component + +1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. +2. Render the component: + +```javascript +import ServerComponent from './components/ServerComponent' + +export default function Home() { + return ( +
+    <div>
+      <ServerComponent />
+    </div>
+ ) +} +``` + +### Step 5: Run and Test Our Dapp + +Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. + +![Server-side rendering](/img/api-key-server-side-rendering.png) + +### Conclusion + +By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..62b2d8eb4657 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## イントロダクション + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Improve your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## 始めましょう + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. 
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
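+
+To make the manifest side of this concrete, a composed Subgraph declares each source Subgraph as a data source of `kind: subgraph` and reacts to entity triggers instead of onchain events. The sketch below is illustrative only — the deployment ID, network, and entity/handler names are placeholders; see the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for the exact manifest shape:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: subgraph # a Subgraph (not an onchain contract) is the data source
+    name: BlockTime
+    network: mainnet
+    source:
+      address: 'Qm...' # Deployment ID of the block-time source Subgraph
+      startBlock: 0
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mapping.ts
+      handlers:
+        - handler: handleBlockTime # runs when the source Subgraph stores a Block entity
+          entity: Block
+```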
+ +## その他のリソース + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..cba9bbca2ff7 --- /dev/null +++ b/website/src/pages/ja/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: フォークを用いた迅速かつ容易なサブグラフのデバッグ +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! + +## さて、それは何でしょうか? + +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. + +## その方法は? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
+ +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. + +## コードを見てみましょう + +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. + +Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. + +通常の試すであろう修正方法: + +1. マッピングソースを変更して問題の解決を試す(解決されないことは分かっていても) +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +3. 同期を待つ +4. 再び問題が発生した場合は、1に戻る + +It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ + +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: + +0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. +1. 
マッピングのソースを変更し、問題を解決する
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. もし再度、壊れる場合1に戻る
+
+さて、ここで2つの疑問が生じます:
+
+1. フォークベースとは?
+2. フォーキングは誰ですか?
+
+回答:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. フォーキングは簡単であり煩雑な手間はありません
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+そこで、以下の通りです:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3.
After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/ja/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ja/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..5f51f521b214
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: 安全なサブグラフのコード生成
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Subgraph Uncrashable と統合する理由
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- また、このフレームワークには、エンティティ変数のグループに対して、カスタムだが安全なセッター関数を作成する方法が(設定ファイルを通じて)含まれています。この方法では、ユーザーが古いグラフ・エンティティをロード/使用することは不可能であり、また、関数が必要とする変数の保存や設定を忘れることも不可能です。
+
+- Warning logs are recorded, indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy.
+
+Subgraph Uncrashableは、Graph CLI codegenコマンドでオプションのフラグとして実行することができます。
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..890b8495ad7b
--- /dev/null
+++ b/website/src/pages/ja/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: The Graphに移行する
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+
+## Benefits of Switching to The Graph
+
+- Use the same Subgraph that your apps already use with zero-downtime migration.
+- Increase reliability from a global network supported by 100+ Indexers.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
+
+## Upgrade Your Subgraph to The Graph in 3 Easy Steps
+
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3.
[Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1. Set Up Your Studio Environment
+
+### Create a Subgraph in Subgraph Studio
+
+- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
+
+### Install the Graph CLI
+
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+On your local machine, run the following command:
+
+Using [npm](https://www.npmjs.com/):
+
+```sh
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+Use the following command to create a Subgraph in Studio using the CLI:
+
+```sh
+graph init --product subgraph-studio
+```
+
+### Authenticate Your Subgraph
+
+In The Graph CLI, use the auth command seen in Subgraph Studio:
+
+```sh
+graph auth
+```
+
+## 2. Deploy Your Subgraph to Studio
+
+If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash <ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT.
To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### 例
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### その他のリソース
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ja/subgraphs/querying/best-practices.mdx b/website/src/pages/ja/subgraphs/querying/best-practices.mdx
index d0700c1fe37d..bd25c5d2fea6 100644
--- a/website/src/pages/ja/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ja/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: クエリのベストプラクティス

The Graph provides a decentralized way to query data from blockchains.
Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- クロスチェーンのサブグラフ処理:1回のクエリで複数のサブグラフからクエリを実行可能 +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全なタイプ付け結果 @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ja/subgraphs/querying/from-an-application.mdx b/website/src/pages/ja/subgraphs/querying/from-an-application.mdx index 226a9cd2d686..1bece60d7df9 100644 --- a/website/src/pages/ja/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ja/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: アプリケーションからのクエリ +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
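Both of the endpoints above are plain GraphQL-over-HTTP endpoints, so while prototyping you can hit them with nothing more than the standard library before adopting one of the clients below. A minimal sketch in Python — the endpoint placeholders and the `tokens` entity are assumptions to be replaced with your own Subgraph's values:

```python
import json
from urllib import request

# Placeholder endpoint — substitute your own API key and Subgraph ID.
ENDPOINT = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

# Hypothetical entity — use a type that exists in your Subgraph's schema.
QUERY = """
{
  tokens(first: 5) {
    id
  }
}
"""

def build_request(endpoint, query, variables=None):
    """Wrap a GraphQL query in a JSON POST request."""
    body = json.dumps({"query": query, "variables": variables or {}}).encode()
    return request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(ENDPOINT, QUERY)
# With a real endpoint, send it and read the `data` field of the response:
# with request.urlopen(req) as resp:
#     result = json.load(resp)["data"]
```

Note that the query is sent as a POST body, which matches how the gateway expects requests.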
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- クロスチェーンのサブグラフ処理:1回のクエリで複数のサブグラフからクエリを実行可能 +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - 完全なタイプ付け結果 @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### ステップ1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### ステップ1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### ステップ1 diff --git a/website/src/pages/ja/subgraphs/querying/graph-client/README.md b/website/src/pages/ja/subgraphs/querying/graph-client/README.md index 416cadc13c6f..39ba6a53b215 100644 --- a/website/src/pages/ja/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ja/subgraphs/querying/graph-client/README.md @@ -14,15 +14,15 @@ This library is intended to simplify the network aspect of data consumption for > The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
-| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | +| ステータス | Feature | Notes | +| :---: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | | ✅ | Multiple indexers | based on fetch strategies | | ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | | ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | | ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | | ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | | ✅ | Integration with `@apollo/client` | | @@ -32,7 +32,7 @@ This library is intended to simplify the network aspect of data consumption for > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## はじめに You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open 
http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### 例 You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ja/subgraphs/querying/graph-client/live.md b/website/src/pages/ja/subgraphs/querying/graph-client/live.md index e6f726cb4352..961787fa9a4c 100644 --- a/website/src/pages/ja/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/ja/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## はじめに Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx index e6fa6e325eea..24324d70ac5e 100644 --- a/website/src/pages/ja/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ja/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). 
## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -これは、前回のポーリング以降など、変更されたエンティティのみをフェッチする場合に役立ちます。または、サブグラフでエンティティがどのように変化しているかを調査またはデバッグするのに役立ちます (ブロック フィルターと組み合わせると、特定のブロックで変更されたエンティティのみを分離できます)。 +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### 全文検索クエリ -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
全文検索演算子: -| シンボル | オペレーター | 説明書き | -| --- | --- | --- | -| `&` | `And` | 複数の検索語を組み合わせて、指定したすべての検索語を含むエンティティをフィルタリングします。 | -| | | `Or` | 複数の検索語をオペレーターで区切って検索すると、指定した語のいずれかにマッチするすべてのエンティティが返されます。 | -| `<->` | `Follow by` | 2 つの単語の間の距離を指定します。 | -| `:*` | `Prefix` | プレフィックス検索語を使って、プレフィックスが一致する単語を検索します(2 文字必要) | +| シンボル | オペレーター | 説明書き | +| ------ | ----------- | --------------------------------------------------------- | +| `&` | `And` | 複数の検索語を組み合わせて、指定したすべての検索語を含むエンティティをフィルタリングします。 | +| | | `Or` | 複数の検索語をオペレーターで区切って検索すると、指定した語のいずれかにマッチするすべてのエンティティが返されます。 | +| `<->` | `Follow by` | 2 つの単語の間の距離を指定します。 | +| `:*` | `Prefix` | プレフィックス検索語を使って、プレフィックスが一致する単語を検索します(2 文字必要) | #### 例 @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
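To make the generated root fields concrete: for a hypothetical `Token` entity type declared in the schema, the auto-generated `Query` type exposes both a singular and a plural field:

```graphql
{
  # singular field — fetch one entity by ID
  token(id: "1") {
    id
  }
  # plural field — filter, order, and paginate over the collection
  tokens(first: 10, orderBy: id) {
    id
  }
}
```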
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### サブグラフ メタデータ -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -ブロックが提供されている場合、メタデータはそのブロックのものであり、そうでない場合は、最新のインデックス付きブロックが使用されます。提供される場合、ブロックはサブグラフの開始ブロックの後にあり、最後にインデックス付けされたブロック以下でなければなりません。 +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s - hash: ブロックのハッシュ - number: ブロック番号 -- timestamp: 可能であれば、ブロックのタイムスタンプ (これは現在、EVMネットワークのインデックスを作成するサブグラフでのみ利用可能) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ja/subgraphs/querying/introduction.mdx b/website/src/pages/ja/subgraphs/querying/introduction.mdx index d85e6980674d..0424d25aa607 100644 --- a/website/src/pages/ja/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ja/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## 概要 -When a subgraph is published to The Graph Network, you can visit its subgraph details page 
on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. 
diff --git a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx index fc7402c28349..5e0531142b22 100644 --- a/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ja/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: API キーの管理 +title: Managing API keys --- ## 概要 -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - 使用した GRT の量 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. 
Specifically, you can: - API キーの使用を許可されたドメイン名の表示と管理 - - API キーでクエリ可能なサブグラフの割り当て + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ja/subgraphs/querying/python.mdx b/website/src/pages/ja/subgraphs/querying/python.mdx index 4a42ae3275b4..cae61f4b49e0 100644 --- a/website/src/pages/ja/subgraphs/querying/python.mdx +++ b/website/src/pages/ja/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgroundsは、[Playgrounds](https://playgrounds.network/)によって構築された、サブグラフをクエリするための直感的なPythonライブラリです。サブグラフデータを直接Pythonデータ環境に接続し、[pandas](https://pandas.pydata.org/)のようなライブラリを使用してデータ分析を行うことができます! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgroundsは、GraphQLクエリを構築するためのシンプルなPythonic APIを提供し、ページ分割のような面倒なワークフローを自動化し、制御されたスキーマ変換によって高度なユーザーを支援します。 @@ -17,27 +17,27 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -インストールしたら、以下のクエリでsubgroundsを試すことができる。以下の例では、Aave v2 プロトコルのサブグラフを取得し、TVL (Total Value Locked) 順に並べられた上位 5 つの市場をクエリし、その名前と TVL (USD) を選択し、pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame) としてデータを返します。 +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python from subgrounds import Subgrounds sg = Subgrounds() -# サブグラフを読み込む +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") -# クエリの構築 +# Construct the query latest_markets = aave_v2.Query.markets( - orderBy=aave_v2.Market.totalValueLockedUSD、 - orderDirection='desc'、 - first=5、 + orderBy=aave_v2.Market.totalValueLockedUSD, + orderDirection='desc', + first=5, ) -# クエリをデータフレームに戻す +# Return query to a dataframe sg.query_df([ - latest_markets.name、 - latest_markets.totalValueLockedUSD、 + latest_markets.name, + latest_markets.totalValueLockedUSD, ]) ``` diff --git a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index d1964ae0764b..4bf98ccc0c6f 100644 --- a/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ja/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -サブグラフはサブグラフIDで識別され、サブグラフの各バージョンはデプロイメントIDで識別されます。 +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. 
To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Deployment ID を使用するエンドポイントの例: @@ -20,8 +20,8 @@ Deployment ID を使用するエンドポイントの例: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. 
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ja/subgraphs/quick-start.mdx b/website/src/pages/ja/subgraphs/quick-start.mdx index 1e322680d75d..df410ba8ec9b 100644 --- a/website/src/pages/ja/subgraphs/quick-start.mdx +++ b/website/src/pages/ja/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: クイックスタート --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". 
It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Graph CLI をインストールする @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> 特定のサブグラフのコマンドは、[Subgraph Studio](https://thegraph.com/studio/) のサブグラフ ページで見つけることができます。 +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. 
+- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -サブグラフを初期化する際に予想されることの例については、次のスクリーンショットを参照してください。 +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. 
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. 
-サブグラフが作成されたら、次のコマンドを実行します。 +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. 
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. 
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. 
+You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ja/substreams/_meta-titles.json b/website/src/pages/ja/substreams/_meta-titles.json index 6262ad528c3a..1c58294c4bfc 100644 --- a/website/src/pages/ja/substreams/_meta-titles.json +++ b/website/src/pages/ja/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "開発" } diff --git a/website/src/pages/ja/substreams/developing/dev-container.mdx b/website/src/pages/ja/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ja/substreams/developing/dev-container.mdx +++ b/website/src/pages/ja/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. 
This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ja/substreams/developing/sinks.mdx b/website/src/pages/ja/substreams/developing/sinks.mdx index 3f34e35b5163..184532995eba 100644 --- a/website/src/pages/ja/substreams/developing/sinks.mdx +++ b/website/src/pages/ja/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. 
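To make the idea of a sink concrete, here is a minimal, self-contained sketch of what a SQL-style sink does with decoded module output. The record shape and table below are invented for illustration; a real pipeline would receive records from a live Substreams stream via an official sink such as `substreams-sink-sql`, not from a hard-coded list.

```python
import sqlite3

# Stand-in for decoded Substreams module output; a real sink receives these
# records from a live stream rather than a hard-coded list.
records = [
    {"block": 100, "token": "GRT", "amount": 25.0},
    {"block": 101, "token": "GRT", "amount": 75.0},
]

# Toy "SQL sink": persist each record into a relational table,
# then query the stored data like any other SQL dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (block INTEGER, token TEXT, amount REAL)")
conn.executemany("INSERT INTO transfers VALUES (:block, :token, :amount)", records)

total = conn.execute("SELECT SUM(amount) FROM transfers").fetchone()[0]
```

The same fan-out applies to every destination in the tables below: the module output stays identical, only the write target changes.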
## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| 名称 | サポート | Maintainer | Source Code | +| ---------- | ---- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | 
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| 名称 | サポート | Maintainer | Source Code | +| ---------- | ---- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/ja/substreams/developing/solana/account-changes.mdx b/website/src/pages/ja/substreams/developing/solana/account-changes.mdx index 6a018b522d67..bbd30084cf9e 100644 --- a/website/src/pages/ja/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ja/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting 
up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. 
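The per-block semantics described above can be sketched in a few lines. The dictionary shape and the `write_version` field below are illustrative stand-ins for the actual `sf.solana.type.v1` account messages, not the real Protobuf schema.

```python
# Illustrative stand-in for decoded account updates within one block.
VOTE_OWNER = "Vote111111111111111111111111111111111111111"

updates = [
    {"account": "A", "write_version": 1, "owner": "TokenProgram", "deleted": False},
    {"account": "A", "write_version": 2, "owner": "TokenProgram", "deleted": False},
    {"account": "B", "write_version": 1, "owner": VOTE_OWNER, "deleted": False},
]

# Keep only the latest update per account and drop low-importance
# vote-program updates, mirroring what the block payload records.
latest = {}
for update in updates:
    if update["owner"] == VOTE_OWNER:
        continue
    current = latest.get(update["account"])
    if current is None or update["write_version"] > current["write_version"]:
        latest[update["account"]] = update
```

After the loop, only account "A" remains, represented by its most recent update.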
diff --git a/website/src/pages/ja/substreams/developing/solana/transactions.mdx b/website/src/pages/ja/substreams/developing/solana/transactions.mdx index 7912b5535ab2..ec1b7d592c37 100644 --- a/website/src/pages/ja/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ja/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### サブグラフ 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ja/substreams/introduction.mdx b/website/src/pages/ja/substreams/introduction.mdx index 8af3eada8419..771e1cf64862 100644 --- a/website/src/pages/ja/substreams/introduction.mdx +++ b/website/src/pages/ja/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ja/substreams/publishing.mdx b/website/src/pages/ja/substreams/publishing.mdx index 6de1dc158d15..4529da331fc6 100644 --- a/website/src/pages/ja/substreams/publishing.mdx +++ b/website/src/pages/ja/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/ja/substreams/quick-start.mdx b/website/src/pages/ja/substreams/quick-start.mdx index 9f23f174a4f1..6bbe99168657 100644 --- a/website/src/pages/ja/substreams/quick-start.mdx +++ b/website/src/pages/ja/substreams/quick-start.mdx @@ -1,5 +1,5 @@ --- -title: Substreams Quick Start +title: サブストリーム速習ガイド sidebarTitle: クイックスタート --- diff --git a/website/src/pages/ja/supported-networks.json b/website/src/pages/ja/supported-networks.json index b545e3d4f916..92aed17a5dd4 100644 --- a/website/src/pages/ja/supported-networks.json +++ b/website/src/pages/ja/supported-networks.json @@ -1,5 +1,5 @@ { - "name": "Name", + "name": "名称", "id": "ID", "subgraphs": "サブグラフ", "substreams": "サブストリーム", diff --git a/website/src/pages/ja/supported-networks.mdx b/website/src/pages/ja/supported-networks.mdx index 9acefbcf6d19..4e138e5575cc 100644 --- a/website/src/pages/ja/supported-networks.mdx +++ b/website/src/pages/ja/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: サポートされているネットワーク hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. 
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/ja/token-api/_meta-titles.json b/website/src/pages/ja/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/ja/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/ja/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. diff --git a/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/ja/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. 
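As a rough sketch of calling an endpoint like this, the snippet below builds an authenticated request. Per the FAQ, authentication uses a Bearer access token (a JWT generated on The Graph Market), not the raw API key. The host, path, and token are placeholders, and the request is only constructed here, not sent.

```python
import urllib.request

# Placeholder values: substitute the real endpoint and the access token
# (a JWT generated on The Graph Market, not the raw API key).
ACCESS_TOKEN = "<JWT_FROM_THE_GRAPH_MARKET>"
url = "https://token-api.example.com/holders/evm/0xTokenContract"

# Build the authenticated request; urllib.request.urlopen(request)
# would actually send it.
request = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
```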
diff --git a/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/ja/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/ja/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/ja/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ja/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/ja/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time.
diff --git a/website/src/pages/ja/token-api/faq.mdx b/website/src/pages/ja/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ja/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? 
+ +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <JWT>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/).
The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. 
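To illustrate the string-amount handling described above, here is a small sketch; the `response` object is a hypothetical shape based on the FAQ answers, not verbatim API output:

```js
// Hypothetical response shape: results are wrapped in a top-level `data`
// array, and token amounts arrive as strings to avoid precision loss.
const response = {
  data: [{ contract: '0xabc', amount: '1234500000000000000', decimals: 18, symbol: 'TOK' }],
}

// BigInt avoids the precision loss Number would suffer beyond 2^53 - 1.
function formatAmount(rawAmount, decimals) {
  const base = 10n ** BigInt(decimals)
  const value = BigInt(rawAmount)
  const whole = value / base
  const frac = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '')
  return frac ? `${whole}.${frac}` : whole.toString()
}

for (const { amount, decimals, symbol } of response.data) {
  console.log(`${formatAmount(amount, decimals)} ${symbol}`) // 1.2345 TOK
}
```

Index into `data` even when you expect a single result, and only convert back to `Number` after scaling down, if at all.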
+ +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <JWT>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
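The address and pagination rules in this FAQ can be sketched as a small request-URL helper; the `/balances/evm/{address}` path and the `network_id`, `limit`, and `page` parameters are from the answers above, while the helper names themselves are illustrative:

```js
// Base URL of the public Token API, per the quick-start examples.
const BASE = 'https://token-api.thegraph.com'

// 40 hex characters, optional 0x prefix, case-insensitive (per the FAQ).
function normalizeAddress(input) {
  const hex = /^0[xX]/.test(input) ? input.slice(2) : input
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) throw new Error(`invalid EVM address: ${input}`)
  return '0x' + hex.toLowerCase()
}

// Build a balances URL with explicit pagination instead of the 10-item default.
function balancesUrl(address, { network_id = 'mainnet', limit = 50, page = 1 } = {}) {
  const qs = new URLSearchParams({ network_id, limit: String(limit), page: String(page) })
  return `${BASE}/balances/evm/${normalizeAddress(address)}?${qs}`
}

console.log(balancesUrl('2A0C0DBECC7E4D658F48E01E3FA353F44050C208', { page: 2 }))
```

Validating the address locally turns a confusing 4xx response into an immediate, descriptive error before any network call is made.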
diff --git a/website/src/pages/ja/token-api/mcp/claude.mdx b/website/src/pages/ja/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..c44f99914138 --- /dev/null +++ b/website/src/pages/ja/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## コンフィギュレーション + +Create or edit your `claude_desktop_config.json` file. + +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` + +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `~/.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<ACCESS_TOKEN>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option.
+ +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/ja/token-api/mcp/cline.mdx b/website/src/pages/ja/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..64f32deea38f --- /dev/null +++ b/website/src/pages/ja/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## コンフィギュレーション + +Create or edit your `cline_mcp_settings.json` file.
+ +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<ACCESS_TOKEN>" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/ja/token-api/mcp/cursor.mdx b/website/src/pages/ja/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..1c4da59b67bc --- /dev/null +++ b/website/src/pages/ja/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+ +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## コンフィギュレーション + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "<ACCESS_TOKEN>" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/ja/token-api/monitoring/get-health.mdx b/website/src/pages/ja/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/ja/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/ja/token-api/monitoring/get-networks.mdx b/website/src/pages/ja/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/ja/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/ja/token-api/monitoring/get-version.mdx b/website/src/pages/ja/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/ja/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/ja/token-api/quick-start.mdx b/website/src/pages/ja/token-api/quick-start.mdx new file mode 100644 index 000000000000..0b64515243cb --- /dev/null +++ b/website/src/pages/ja/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: クイックスタート +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer <JWT>`. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer <JWT>', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `<JWT>` with the JWT token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command.
+ +```bash +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer <JWT>' +``` + +Make sure to replace `<JWT>` with the JWT token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/ko/about.mdx b/website/src/pages/ko/about.mdx index 02b29895881f..833b097673d2 100644 --- a/website/src/pages/ko/about.mdx +++ b/website/src/pages/ko/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs.
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The flow follows these steps: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx index 562824e64e95..d121f5a2d0f3 100644 --- a/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ko/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. 
@@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx index 904587bfc535..be8af8c171b5 100644 --- a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -22,11 +22,12 @@ The exception is with smart contract wallets like multisigs: these are smart con ### 만약 7일 안에 이체를 완료하지 못하면 어떻게 되나요? -L2 전송 도구는 Arbitrum의 기본 메커니즘을 사용하여 L1에서 L2로 메시지를 보냅니다. 
이 메커니즘은 "재시도 가능한 티켓"이라고 하며 Arbitrum GRT 브리지를 포함한 모든 네이티브 토큰 브리지를 사용하여 사용됩니다. 재시도 가능한 티켓에 대해 자세히 읽을 수 있습니다 [Arbitrum 문서] (https://docs.arbitrum.io/arbos/l1-to-l2-messaging). +L2 전송 도구는 Arbitrum의 기본 메커니즘을 사용하여 L1에서 L2로 메시지를 보냅니다. 이 메커니즘은 "재시도 가능한 티켓"이라고 하며 Arbitrum GRT 브리지를 포함한 모든 네이티브 토큰 브리지를 사용하여 사용됩니다. 재시도 가능한 티켓에 대해 자세히 읽을 수 있습니다 [Arbitrum 문서] +(https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -자산(하위 그래프, 스테이크, 위임 또는 큐레이션) 을 L2로 이전하면 L2에서 재시도 가능한 티켓을 생성하는 Arbitrum GRT 브리지를 통해 메시지가 전송됩니다. 전송 도구에는 거래에 일부 ETH 값이 포함되어 있으며, 이는 1) 티켓 생성 비용을 지불하고 2) L2에서 티켓을 실행하기 위해 가스 비용을 지불하는 데 사용됩니다. 그러나 티켓이 L2에서 실행될 준비가 될 때까지 가스 가격이 시간에 따라 달라질 수 있으므로 이 자동 실행 시도가 실패할 수 있습니다. 그런 일이 발생하면 Arbitrum 브릿지는 재시도 가능한 티켓을 최대 7일 동안 유지하며 누구나 티켓 "사용"을 재시도할 수 있습니다(Arbitrum에 브릿지된 일부 ETH가 있는 지갑이 필요함). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -이것이 모든 전송 도구에서 '확인' 단계라고 부르는 것입니다. 자동 실행이 성공하는 경우가 가장 많기 때문에 대부분의 경우 자동으로 실행되지만 제대로 진행되었는지 다시 확인하는 것이 중요합니다. 성공하지 못하고 7일 이내에 성공적인 재시도가 없으면 Arbitrum 브릿지는 티켓을 폐기하며 귀하의 자산(하위 그래프, 지분, 위임 또는 큐레이션)은 손실되어 복구할 수 없습니다. Graph 코어 개발자는 이러한 상황을 감지하고 너무 늦기 전에 티켓을 교환하기 위해 모니터링 시스템을 갖추고 있지만 전송이 제 시간에 완료되도록 하는 것은 궁극적으로 귀하의 책임입니다. 거래를 확인하는 데 문제가 있는 경우 [이 양식]을 사용하여 문의하세요 (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) 핵심 개발자들이 도와드릴 것입니다. 
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,41 +37,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## 하위 그래프 전송 -### 내 서브그래프를 어떻게 이전하나요? +### How do I transfer my Subgraph? +To transfer your Subgraph, you will need to complete the following steps: + 1. 이더리움 메인넷에서 전송 시작 2. 확인을 위해 20분 정도 기다리세요 -3. Arbitrum에서 하위 그래프 전송 확인\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Arbitrum에 하위 그래프 게시 완료 +4. Finish publishing Subgraph on Arbitrum 5. 쿼리 URL 업데이트(권장) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost.
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### 어디에서 이전을 시작해야 합니까? -[Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer) 또는 하위 그래프 세부정보 페이지에서 전송을 시작할 수 있습니다. 하위 그래프 세부 정보 페이지에서 "하위 그래프 전송" 버튼을 클릭하여 전송을 시작하세요. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### 내 하위 그래프가 전송될 때까지 얼마나 기다려야 합니까? +### How long do I need to wait until my Subgraph is transferred? 환승 시간은 약 20분 정도 소요됩니다. Arbitrum 브리지는 브리지 전송을 자동으로 완료하기 위해 백그라운드에서 작동하고 있습니다. 경우에 따라 가스 비용이 급증할 수 있으며 거래를 다시 확인해야 합니다. -### 내 하위 그래프를 L2로 전송한 후에도 계속 검색할 수 있나요? +### Will my Subgraph still be discoverable after I transfer it to L2? -귀하의 하위 그래프는 해당 하위 그래프가 게시된 네트워크에서만 검색 가능합니다. 예를 들어, 귀하의 하위 그래프가 Arbitrum One에 있는 경우 Arbitrum One의 Explorer에서만 찾을 수 있으며 Ethereum에서는 찾을 수 없습니다. 올바른 네트워크에 있는지 확인하려면 페이지 상단의 네트워크 전환기에서 Arbitrum One을 선택했는지 확인하세요. 이전 후 L1 하위 그래프는 더 이상 사용되지 않는 것으로 표시됩니다. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated. -### 내 하위 그래프를 전송하려면 게시해야 합니까? +### Does my Subgraph need to be published to transfer it? -하위 그래프 전송 도구를 활용하려면 하위 그래프가 이미 이더리움 메인넷에 게시되어 있어야 하며 하위 그래프를 소유한 지갑이 소유한 일부 큐레이션 신호가 있어야 합니다. 하위 그래프가 게시되지 않은 경우 Arbitrum One에 직접 게시하는 것이 좋습니다. 
관련 가스 요금은 상당히 낮아집니다. 게시된 하위 그래프를 전송하고 싶지만 소유자 계정이 이에 대한 신호를 큐레이팅하지 않은 경우 해당 계정에서 소액(예: 1 GRT)을 신호로 보낼 수 있습니다. "자동 마이그레이션" 신호를 선택했는지 확인하세요. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Arbitrum으로 이전한 후 내 서브그래프의 이더리움 메인넷 버전은 어떻게 되나요? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -귀하의 하위 그래프를 Arbitrum으로 이전한 후에는 Ethereum 메인넷 버전이 더 이상 사용되지 않습니다. 48시간 이내에 쿼리 URL을 업데이트하는 것이 좋습니다. 그러나 타사 dapp 지원이 업데이트될 수 있도록 메인넷 URL이 작동하도록 유지하는 유예 기간이 있습니다. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### 양도한 후에 Arbitrum에 다시 게시해야 합니까? @@ -78,21 +81,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### L2에서 Ethereum Ethereum 메인넷과 게시 및 버전 관리가 동일합니까? -Yes. 
Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### 내 하위 그래프의 큐레이션이 내 하위 그래프와 함께 이동하나요? +### Will my Subgraph's curation move with my Subgraph? -자동 마이그레이션 신호를 선택한 경우 자체 큐레이션의 100%가 하위 그래프와 함께 Arbitrum One으로 이동됩니다. 하위 그래프의 모든 큐레이션 신호는 전송 시 GRT로 변환되며, 큐레이션 신호에 해당하는 GRT는 L2 하위 그래프의 신호 생성에 사용됩니다. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -다른 큐레이터는 GRT 일부를 인출할지, 아니면 L2로 전송하여 동일한 하위 그래프의 신호를 생성할지 선택할 수 있습니다. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### 이전 후 구독을 이더리움 메인넷으로 다시 이동할 수 있나요? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -이전되면 이 하위 그래프의 Ethereum 메인넷 버전은 더 이상 사용되지 않습니다. 메인넷으로 다시 이동하려면 다시 메인넷에 재배포하고 게시해야 합니다. 그러나 인덱싱 보상은 결국 Arbitrum One에 전적으로 배포되므로 이더리움 메인넷으로 다시 이전하는 것은 권장되지 않습니다. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### 전송을 완료하려면 브리지된 ETH가 필요한 이유는 무엇입니까? @@ -204,19 +207,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \*필요한 경우 - 즉, 계약 주소를 사용하고 있습니다. -### 내가 큐레이트한 하위 그래프가 L2로 이동했는지 어떻게 알 수 있나요? 
+### How will I know if the Subgraph I curated has moved to L2? -하위 세부정보 페이지를 보면 해당 하위 하위가 이전되었음을 알리는 배너가 표시됩니다. 메시지에 따라 큐레이션을 전송할 수 있습니다. 이동한 하위 그래프의 하위 그래프 세부정보 페이지에서도 이 정보를 찾을 수 있습니다. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### 큐레이션을 L2로 옮기고 싶지 않으면 어떻게 되나요? -하위 그래프가 더 이상 사용되지 않으면 신호를 철회할 수 있는 옵션이 있습니다. 마찬가지로 하위 그래프가 L2로 이동한 경우 이더리움 메인넷에서 신호를 철회하거나 L2로 신호를 보낼 수 있습니다. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### 내 큐레이션이 성공적으로 전송되었는지 어떻게 알 수 있나요? L2 전송 도구가 시작된 후 약 20분 후에 Explorer를 통해 신호 세부 정보에 액세스할 수 있습니다. -### 한 번에 두 개 이상의 하위 그래프에 대한 내 큐레이션을 전송할 수 있나요? +### Can I transfer my curation on more than one Subgraph at a time? 현재는 대량 전송 옵션이 없습니다. @@ -264,7 +267,7 @@ L2 전송 도구가 지분 전송을 완료하는 데 약 20분이 소요됩니 ### 지분을 양도하기 전에 Arbitrum에서 색인을 생성해야 합니까? -인덱싱을 설정하기 전에 먼저 지분을 효과적으로 이전할 수 있지만, L2의 하위 그래프에 할당하고 이를 인덱싱하고 POI를 제시할 때까지는 L2에서 어떤 보상도 청구할 수 없습니다. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### 내가 인덱싱 지분을 이동하기 전에 위임자가 자신의 위임을 이동할 수 있나요? diff --git a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ko/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. 
For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. 
This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. 
If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. 
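+One way to sanity-check whether a candidate owner address is a smart contract (and therefore may not exist on Arbitrum) is the standard `eth_getCode` JSON-RPC call: an EOA has no deployed bytecode, while a contract wallet such as a Safe does. The sketch below is illustrative only; the RPC endpoint is a placeholder you would replace with your own provider.

```typescript
// Sketch: detect whether an address is a contract on a given chain.
// An EOA returns "0x" from eth_getCode; a smart contract wallet returns bytecode.
// The RPC URL is a placeholder -- substitute your own Arbitrum/mainnet provider.

function codeIndicatesContract(code: string): boolean {
  // eth_getCode returns "0x" (or "0x0" on some nodes) for accounts without code.
  return code !== "0x" && code !== "0x0";
}

async function isContract(rpcUrl: string, address: string): Promise<boolean> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getCode",
      params: [address, "latest"],
    }),
  });
  const { result } = (await res.json()) as { result: string };
  return codeIndicatesContract(result);
}
```

An address that this check reports as a contract on mainnet but not on Arbitrum is exactly the situation described above: a multisig that exists only on L1 and should not be chosen as the L2 owner.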
-**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved. -## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). 
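+The suggested threshold can also be checked programmatically. The sketch below is a minimal illustration that assumes you have already fetched the wallet's balance in wei (e.g. via `eth_getBalance` from any Arbitrum RPC provider); the 0.01 ETH figure simply mirrors the suggestion above and is not a protocol requirement.

```typescript
// Sketch: check whether a wallet's bridged ETH balance (in wei) meets the
// suggested ~0.01 ETH cushion for L2 gas. Amounts are kept as bigint to
// avoid floating-point rounding on 18-decimal values.

const WEI_PER_ETH = 10n ** 18n;
const SUGGESTED_MIN_WEI = WEI_PER_ETH / 100n; // 0.01 ETH, per the text above

function hasEnoughForL2Gas(balanceWei: bigint): boolean {
  return balanceWei >= SUGGESTED_MIN_WEI;
}

function formatEth(wei: bigint): string {
  // Render a wei amount as a decimal ETH string, trimming trailing zeros.
  const whole = wei / WEI_PER_ETH;
  const frac = (wei % WEI_PER_ETH).toString().padStart(18, "0");
  return `${whole}.${frac.replace(/0+$/, "") || "0"}`;
}
```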
-Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
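+Given how unforgiving a wrong receiving address is, it is worth sanity-checking its shape client-side before submitting. The sketch below only validates form (a `0x` prefix followed by 40 hex characters); it is no substitute for a full EIP-55 checksum verification with a library such as ethers, and it cannot tell you whether you actually control the address.

```typescript
// Sketch: basic shape check for an L2 receiving wallet address.
// Passing this check does NOT prove the address is correct or yours --
// a full EIP-55 checksum validation (e.g. via a library) is stronger.

function looksLikeAddress(addr: string): boolean {
  return /^0x[0-9a-fA-F]{40}$/.test(addr);
}
```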
-If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. 
In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. @@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. 
signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. 
There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. 
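+The proportional split described above (each Curator has a claim on the burned-signal GRT held by the GNS contract in proportion to their share of the Subgraph's curation shares) can be sketched as follows. The function name and the numbers in the test are purely illustrative, not part of the protocol contracts.

```typescript
// Sketch: a Curator's claim on the GRT held by GNS after a Subgraph is
// deprecated or transferred is proportional to their curation shares.
// Illustrative only -- actual amounts come from the curation bonding curve.

function curatorClaim(
  totalGrtFromBurn: bigint, // GRT held by GNS after all signal is burned
  curatorShares: bigint, // this Curator's shares on the Subgraph
  totalShares: bigint, // all outstanding shares
): bigint {
  if (totalShares === 0n) return 0n;
  return (totalGrtFromBurn * curatorShares) / totalShares;
}
```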
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ko/archived/sunrise.mdx b/website/src/pages/ko/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/ko/archived/sunrise.mdx +++ b/website/src/pages/ko/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
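The FAQ above notes that developers can query their Subgraphs almost immediately after publishing. To make that concrete, a published Subgraph is served as a plain GraphQL-over-HTTP endpoint: the request is a JSON body with a `query` field, POSTed to a gateway URL. A minimal sketch in Python; the gateway URL, API key, Subgraph ID, and entity names below are placeholders for illustration, not real values:

```python
import json

# Hypothetical gateway endpoint -- substitute a real API key and Subgraph ID.
GATEWAY_URL = "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>"

# Entity and field names are illustrative; they depend on the Subgraph's schema.
QUERY = """
{
  tokens(first: 5) {
    id
    symbol
  }
}
"""

def build_request(query: str) -> bytes:
    """Serialize a GraphQL query into the JSON body sent over HTTP."""
    return json.dumps({"query": query}).encode("utf-8")

body = build_request(QUERY)

# Round-trip the body to show its shape: a single "query" field.
decoded = json.loads(body)
print("query" in decoded)  # prints: True
```

With a real endpoint, `body` would be POSTed with a `Content-Type: application/json` header (e.g. via `requests.post`) and the entities read from the `data` field of the JSON response.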
diff --git a/website/src/pages/ko/global.json b/website/src/pages/ko/global.json index f0bd80d9715b..4364984ad90c 100644 --- a/website/src/pages/ko/global.json +++ b/website/src/pages/ko/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/ko/index.json b/website/src/pages/ko/index.json index 787097b1fbc4..95bf30d1752a 100644 --- a/website/src/pages/ko/index.json +++ b/website/src/pages/ko/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Supported Networks", + "details": "Network Details", + "services": "Services", + "type": "Type", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Docs", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Billing", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/ko/indexing/chain-integration-overview.mdx b/website/src/pages/ko/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/ko/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ko/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/ko/indexing/new-chain-integration.mdx b/website/src/pages/ko/indexing/new-chain-integration.mdx index e45c4b411010..c401fa57b348 100644 --- a/website/src/pages/ko/indexing/new-chain-integration.mdx +++ b/website/src/pages/ko/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ko/indexing/overview.mdx b/website/src/pages/ko/indexing/overview.mdx index 914b04e0bf47..0b9b31f5d22d 100644 --- a/website/src/pages/ko/indexing/overview.mdx +++ b/website/src/pages/ko/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g.
applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions, but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. 
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
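The threshold behavior described above can be sketched as follows. This is a hypothetical illustration of the "match any non-null threshold" logic, not the Indexer agent's actual implementation; the network value keys (`stake`, `signal`, `avgQueryFees`) are made up for the example, while the rule field names mirror the ones documented here:

```python
# Hypothetical sketch: a deployment is chosen for indexing when any non-null
# threshold on the rule is crossed by the corresponding network value.
def should_index(rule: dict, network: dict) -> bool:
    # (rule field, network value, comparison) - min* are lower bounds, max* upper bounds
    checks = [
        ("minStake", network.get("stake"), "above"),
        ("minSignal", network.get("signal"), "above"),
        ("maxSignal", network.get("signal"), "below"),
        ("minAverageQueryFees", network.get("avgQueryFees"), "above"),
    ]
    for field, value, direction in checks:
        threshold = rule.get(field)
        if threshold is None or value is None:
            continue  # null thresholds are skipped entirely
        if direction == "above" and value > threshold:
            return True
        if direction == "below" and value < threshold:
            return True
    return False

# The example from the text: a global rule with minStake of 5 (GRT).
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 8}))  # True: more than 5 GRT allocated
print(should_index(global_rule, {"stake": 3}))  # False: below the stake threshold
```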
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/ko/indexing/supported-network-requirements.mdx b/website/src/pages/ko/indexing/supported-network-requirements.mdx index df15ef48d762..ce9919503666 100644 --- a/website/src/pages/ko/indexing/supported-network-requirements.mdx +++ b/website/src/pages/ko/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)<br/>
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/ko/indexing/tap.mdx b/website/src/pages/ko/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/ko/indexing/tap.mdx +++ b/website/src/pages/ko/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/ko/indexing/tooling/graph-node.mdx b/website/src/pages/ko/indexing/tooling/graph-node.mdx index 0250f14a3d08..f5778789213d 100644 --- a/website/src/pages/ko/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ko/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
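One way to verify that an RPC endpoint has the capabilities described above is to send it the corresponding JSON-RPC requests directly. The sketch below only constructs the payloads (the addresses and block hash are placeholders, and actually sending them requires a live node): an `eth_call` pinned to a block hash per EIP-1898, and a `trace_filter` request from the OpenEthereum trace module.

```python
import json

def eth_call_eip1898(to: str, data: str, block_hash: str) -> dict:
    """An eth_call pinned to a specific block hash, as EIP-1898 allows.
    Nodes without archive data will reject calls pinned far behind chain head."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [
            {"to": to, "data": data},
            {"blockHash": block_hash, "requireCanonical": True},  # EIP-1898 block object
        ],
    }

def trace_filter(from_block: str, to_block: str, to_address: str) -> dict:
    """A trace_filter request, needed by Subgraphs with callHandlers or
    call-filtered blockHandlers."""
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "trace_filter",
        "params": [{"fromBlock": from_block, "toBlock": to_block, "toAddress": [to_address]}],
    }

# Placeholder values for illustration only.
payload = eth_call_eip1898("0x" + "11" * 20, "0x06fdde03", "0x" + "22" * 32)
print(json.dumps(payload))
```

A node that answers both requests without error is a reasonable candidate for Subgraphs that need archive-mode `eth_calls` and trace support.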
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be configured with replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process.
The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`. -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
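As noted above, `graphman drop` accepts three identifier forms: a subgraph name, an IPFS hash `Qm..`, or a database namespace `sgdNNN`. A small illustrative sketch, not part of graphman itself, showing how the three forms can be told apart mechanically:

```python
import re

def classify_deployment(identifier: str) -> str:
    """Guess which of the three identifier forms a string is:
    an IPFS (CIDv0) hash, a database namespace, or a subgraph name.
    Illustrative only -- graphman does its own parsing."""
    # CIDv0 hashes are "Qm" followed by 44 base58 characters.
    if re.fullmatch(r"Qm[1-9A-HJ-NP-Za-km-z]{44}", identifier):
        return "ipfs-hash"
    # Database namespaces look like sgd0, sgd1, sgd42, ...
    if re.fullmatch(r"sgd\d+", identifier):
        return "database-namespace"
    # Anything else is treated as a subgraph name.
    return "subgraph-name"

print(classify_deployment("sgd42"))  # database-namespace
```

Anything that is neither a well-formed `Qm..` hash nor an `sgdNNN` namespace falls through to being treated as a name.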
diff --git a/website/src/pages/ko/indexing/tooling/graphcast.mdx b/website/src/pages/ko/indexing/tooling/graphcast.mdx index 4072877a1257..d1795e9be577 100644 --- a/website/src/pages/ko/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ko/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/ko/resources/benefits.mdx b/website/src/pages/ko/resources/benefits.mdx index 06b1b5594b1f..1c264a6a72b9 100644 --- a/website/src/pages/ko/resources/benefits.mdx +++ b/website/src/pages/ko/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | 
Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +| :--------------------------: | :-----------------------------------------: | :-------------------------------------------------------------: | +| 
Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). 
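The per-query figure quoted in the comparison tables above is just the monthly query cost divided by monthly query volume; a quick sanity check using the published numbers:

```python
# Figures taken from the comparison tables above.
medium_monthly_cost = 120        # USD for the ~3M-query tier
medium_queries = 3_000_000
high_monthly_cost = 1_200        # USD for the ~30M-query tier
high_queries = 30_000_000

cost_per_query_medium = medium_monthly_cost / medium_queries
cost_per_query_high = high_monthly_cost / high_queries

# Both tiers work out to the $0.00004 per query shown in the tables.
```

Both tiers resolve to the same $0.00004 per query, which is why the tables list an identical "Cost per query" for the medium- and high-volume cases.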
## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ko/resources/glossary.mdx b/website/src/pages/ko/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/ko/resources/glossary.mdx +++ b/website/src/pages/ko/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. 
For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. 
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. 
**Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent and valid Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake.
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. 
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ko/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. 
If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ko/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/ko/resources/roles/curating.mdx b/website/src/pages/ko/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/ko/resources/roles/curating.mdx +++ b/website/src/pages/ko/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed.
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient, and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
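The curation taxes discussed in this section (1% on each signal, 0.5% on each auto-migration, both burned) can be illustrated with some back-of-the-envelope arithmetic. This is a simplified sketch that assumes the tax is deducted straight from the deposited GRT; real bonding-curve mechanics are more involved:

```typescript
// Illustrative arithmetic for the curation taxes described above.
// Assumption: the tax is simply deducted from the GRT amount and burned;
// this ignores bonding-curve pricing of curation shares.
const CURATION_TAX = 0.01;   // 1% on each signal
const MIGRATION_TAX = 0.005; // 0.5% on each auto-migration

function signalAfterTax(depositGrt: number): number {
  return depositGrt * (1 - CURATION_TAX);
}

function afterAutoMigration(signaledGrt: number): number {
  return signaledGrt * (1 - MIGRATION_TAX);
}

// Signal 1,000 GRT, then auto-migrate once when a new version is published:
const signaled = signalAfterTax(1_000);        // ~990 GRT of effective signal
const migrated = afterAutoMigration(signaled); // ~985.05 GRT after migration
```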
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/ko/resources/roles/delegating/undelegating.mdx b/website/src/pages/ko/resources/roles/delegating/undelegating.mdx index c3e31e653941..6a361c508450 100644 --- a/website/src/pages/ko/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/ko/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2.
Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/ko/resources/subgraph-studio-faq.mdx b/website/src/pages/ko/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/ko/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ko/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. 
You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made with the new API key are paid queries, just like any other query on the network.
diff --git a/website/src/pages/ko/resources/tokenomics.mdx b/website/src/pages/ko/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/ko/resources/tokenomics.mdx +++ b/website/src/pages/ko/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
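The delegation and curation figures quoted above (15k GRT delegated at a 10% effective rate, and the 1% curation tax on a suggested 3,000 GRT signal) reduce to simple arithmetic. A minimal sketch; the function names are illustrative and not part of any Graph tooling:

```typescript
// Illustrative arithmetic only; these helpers are not part of any Graph API.

// A Delegator delegating GRT to an Indexer offering a given effective annual rate.
function annualDelegationReward(delegatedGrt: number, ratePct: number): number {
  return delegatedGrt * (ratePct / 100);
}

// A Curator signaling GRT pays a 1% curation tax, which is burned.
function curationTaxBurned(signaledGrt: number, taxPct: number = 1): number {
  return signaledGrt * (taxPct / 100);
}

console.log(annualDelegationReward(15_000, 10)); // 1500, matching the ~1,500 GRT example
console.log(curationTaxBurned(3_000)); // 30 GRT burned on a 3,000 GRT signal
```

The same helpers apply to any delegation size or signal amount; actual returns depend on each Indexer's posted cut and network conditions.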
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ko/sps/introduction.mdx b/website/src/pages/ko/sps/introduction.mdx index b11c99dfb8e5..92d8618165dd 100644 --- a/website/src/pages/ko/sps/introduction.mdx +++ b/website/src/pages/ko/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
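Given the stated 3% issuance target and roughly 1% of supply burned annually, net supply growth works out to about 2% per year. A quick sanity check; note the ~2% net figure is inferred from the two stated rates, not quoted from the text:

```typescript
// Rough supply model from the stated rates. Integer-percent math keeps the
// result exact; the net-growth conclusion is an inference, not an official figure.
const INITIAL_SUPPLY_GRT = 10_000_000_000; // 10 billion GRT initial supply

function supplyAfterOneYear(supply: number, issuancePct = 3, burnPct = 1): number {
  return (supply * (100 + issuancePct - burnPct)) / 100;
}

console.log(supplyAfterOneYear(INITIAL_SUPPLY_GRT)); // 10_200_000_000, i.e. ~2% net growth
```

In practice both rates move with network activity, so the real trajectory will differ from this constant-rate sketch.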
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ko/sps/sps-faq.mdx b/website/src/pages/ko/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/ko/sps/sps-faq.mdx +++ b/website/src/pages/ko/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. 
-## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. 
A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/ko/sps/triggers.mdx b/website/src/pages/ko/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/ko/sps/triggers.mdx +++ b/website/src/pages/ko/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use of GraphQL.
## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/ko/sps/tutorial.mdx b/website/src/pages/ko/sps/tutorial.mdx index 55e563608bce..e20a22ba4b1c 100644 --- a/website/src/pages/ko/sps/tutorial.mdx +++ b/website/src/pages/ko/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
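The decode-filter-store flow the tutorial describes can be sketched in plain TypeScript, outside the generated AssemblyScript bindings. Everything below is illustrative: the `Transfer` shape, the entity fields, and the placeholder account id are assumptions for the sketch, not the tutorial's actual generated types:

```typescript
// Hypothetical shapes standing in for the Protobuf-generated types.
interface Transfer {
  source: string;
  destination: string;
  amount: bigint;
  derived: boolean; // derived transfers are skipped, as in the tutorial's filter
}

interface MyTransferEntity {
  id: string;
  amount: string;
  source: string;
  destination: string;
}

const TRACKED_ACCOUNT = 'orca-example-account'; // placeholder, not a real account id

// Keep only non-derived transfers touching the tracked account and shape them as entities.
function toEntities(transfers: Transfer[]): MyTransferEntity[] {
  return transfers
    .filter((t) => !t.derived && (t.source === TRACKED_ACCOUNT || t.destination === TRACKED_ACCOUNT))
    .map((t, i) => ({
      id: `transfer-${i}`,
      amount: t.amount.toString(),
      source: t.source,
      destination: t.destination,
    }));
}

const entities = toEntities([
  { source: 'orca-example-account', destination: 'x', amount: 5n, derived: false },
  { source: 'a', destination: 'b', amount: 1n, derived: false },
  { source: 'orca-example-account', destination: 'y', amount: 2n, derived: true },
]);
console.log(entities.length); // 1 — only the first transfer passes both filters
```

In the real handler the input arrives as Substreams bytes and is decoded with the generated `Protobuf` bindings before a loop like this runs.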
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/ko/subgraphs/_meta-titles.json b/website/src/pages/ko/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/ko/subgraphs/_meta-titles.json +++ b/website/src/pages/ko/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed.

 ### What Does an eth_call Look Like?

-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:

 ```yaml
 event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
 }
 ```

-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
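When the contract cannot be changed and the call is unavoidable, the declared `eth_calls` feature described above lets graph-node prefetch the result and serve it from its in-memory cache before the handler runs. A minimal manifest sketch of such a declaration — the `ERC20` ABI name, `getPoolInfo` view function, and `handleTransfer` handler are illustrative assumptions, not part of this guide's full example:

```yaml
# Hypothetical fragment of subgraph.yaml (declared eth_calls require specVersion >= 1.2.0).
eventHandlers:
  - event: Transfer(address indexed,address indexed,uint256)
    handler: handleTransfer
    calls:
      # Format: label: Contract[address].function(args). graph-node executes
      # the call before invoking the handler and caches the result in memory.
      ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to)
```

The handler then binds to the contract and makes the same call as before; graph-node answers it from the cache instead of issuing a fresh RPC request.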
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
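The forward and reverse lookups unlocked by `@derivedFrom` can be sketched as GraphQL queries against the example `Post`/`Comment` schema. The `title` and `body` field names and the IDs here are illustrative assumptions, not taken from the schema above:

```graphql
# Forward lookup: a post together with its derived comments.
query PostWithComments {
  post(id: "1") {
    title
    comments {
      body
    }
  }
}

# Reverse lookup: a comment and the post it was made on.
query CommentWithPost {
  comment(id: "42") {
    body
    post {
      title
    }
  }
}
```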
diff --git a/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
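To see why the `Bytes` form is lighter than string concatenation, here is a plain-TypeScript sketch — not graph-ts; the `concatI32` helper below only mimics the behavior of the real `Bytes.concatI32` method — comparing the size of the two ID styles for a 32-byte transaction hash:

```typescript
// Hypothetical helper mimicking graph-ts `Bytes.concatI32`: append a 32-bit
// big-endian integer to a byte array.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  // Big-endian encoding of the 32-bit integer.
  out[bytes.length] = (value >>> 24) & 0xff;
  out[bytes.length + 1] = (value >>> 16) & 0xff;
  out[bytes.length + 2] = (value >>> 8) & 0xff;
  out[bytes.length + 3] = value & 0xff;
  return out;
}

// A 32-byte transaction hash (zero-filled placeholder) and a log index.
const txHash = new Uint8Array(32);
const logIndex = 7;

// Bytes ID: 32 bytes of hash + 4 bytes of index = 36 bytes.
const bytesId = concatI32(txHash, logIndex);

// String ID: "0x" + 64 hex chars + "-" + decimal index, stored as text.
const hexHash = '0x' + Array.from(txHash, (b) => b.toString(16).padStart(2, '0')).join('');
const stringId = `${hexHash}-${logIndex}`;

console.log(bytesId.length, stringId.length); // prints "36 68"
```

In an actual mapping, the equivalent `Bytes` ID is produced with `event.transaction.hash.concatI32(event.logIndex.toI32())`, as described above.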
diff --git a/website/src/pages/ko/subgraphs/best-practices/pruning.mdx b/website/src/pages/ko/subgraphs/best-practices/pruning.mdx
index 1b51dde8894f..2d4f9ad803e0 100644
--- a/website/src/pages/ko/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/ko/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
 ---
 title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
 ---

 ## TLDR

-[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.

 ## How to Prune a Subgraph With `indexerHints`

@@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest.

 `indexerHints` has three `prune` options:

-- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
 - `prune: <Number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain.
 - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ko/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ko/subgraphs/billing.mdx b/website/src/pages/ko/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/ko/subgraphs/billing.mdx +++ b/website/src/pages/ko/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ko/subgraphs/cookbook/arweave.mdx b/website/src/pages/ko/subgraphs/cookbook/arweave.mdx index 2372025621d1..e59abffa383f 100644 --- a/website/src/pages/ko/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!

 In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.

@@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are

 To be able to build and deploy Arweave Subgraphs, you need two packages:

-1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
-2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.

 ## Subgraph's components

-There are three components of a subgraph:
+There are three components of a Subgraph:

 ### 1. Manifest - `subgraph.yaml`

@@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave

 Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.

-The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).

 ### 3.
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/ko/subgraphs/cookbook/enums.mdx b/website/src/pages/ko/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/ko/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
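To make the enum pattern above concrete, here is an illustrative TypeScript sketch (not generated `graph-ts` code — in a real Subgraph the entity classes come from `graph codegen`). The `Ownership` values mirror the ownerships named earlier; the `Token` entity and its field names are hypothetical:

```typescript
// Illustrative sketch only — in a real Subgraph these classes are generated
// by `graph codegen` from schema.graphql. "Token" and its fields are
// hypothetical names used for illustration.
type Ownership = "OriginalOwner" | "SecondOwner" | "ThirdOwner";

class Token {
  constructor(
    public id: string,
    public ownership: Ownership, // only the predefined enum values are accepted
  ) {}
}

// Assigning one of the enum's string representations works;
// a value like "FourthOwner" would be rejected by the type checker.
const token = new Token("0xabc-1", "SecondOwner");
```

The point of the sketch is the guarantee the enum gives you: only predefined values can ever be assigned to the field, so typos never reach the store.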
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/ko/subgraphs/cookbook/grafting.mdx b/website/src/pages/ko/subgraphs/cookbook/grafting.mdx index 57d5169830a7..d9abe0e70d2a 100644 --- a/website/src/pages/ko/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version.
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
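As a way to reason about the `base`/`block` semantics above, here is a small TypeScript toy model — an assumption for explanation only, not Graph Node's implementation. Data at heights up to and including the graft block is copied from the base Subgraph; everything after is indexed fresh:

```typescript
// Toy model of grafting semantics — not Graph Node code.
interface Row {
  block: number;
  data: string;
}

// Copy base rows up to AND including graftBlock, then index from graftBlock + 1.
function graft(
  base: Row[],
  graftBlock: number,
  indexFrom: (startBlock: number) => Row[],
): Row[] {
  const copied = base.filter((row) => row.block <= graftBlock);
  return copied.concat(indexFrom(graftBlock + 1));
}

const baseRows: Row[] = [
  { block: 5955999, data: "old-a" },
  { block: 5956000, data: "old-b" }, // included: block 5956000 is the graft point
];
const grafted = graft(baseRows, 5956000, (start) => [
  { block: start, data: "new-a" }, // fresh indexing resumes at 5956001
]);
```

The block numbers mirror the tutorial's `block: 5956000` graft point, which is why the base row at exactly that height is copied rather than re-indexed.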
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/ko/subgraphs/cookbook/near.mdx b/website/src/pages/ko/subgraphs/cookbook/near.mdx index 6060eb27e761..e78a69eb7fa2 100644 --- a/website/src/pages/ko/subgraphs/cookbook/near.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs? 
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. 
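To make the two NEAR handler kinds above concrete, here is a hedged TypeScript sketch of a receipt handler's shape. In a real Subgraph this is AssemblyScript and `ReceiptWithOutcome` comes from `graph-ts`'s NEAR support; the simplified fields and the in-memory store below are assumptions for illustration only:

```typescript
// Simplified stand-in for the near.ReceiptWithOutcome type (assumed shape —
// the real type in graph-ts has more fields).
interface ReceiptWithOutcome {
  receipt: { id: string; receiverId: string };
  block: { header: { height: number } };
}

// Toy in-memory map standing in for the Subgraph entity store.
const received = new Map<string, number>();

// A receipt handler runs every time a message is executed at the
// account the data source monitors.
function handleReceipt(r: ReceiptWithOutcome): void {
  received.set(r.receipt.id, r.block.header.height);
}

handleReceipt({
  receipt: { id: "rcpt-1", receiverId: "app.good-morning.near" },
  block: { header: { height: 10662200 } },
});
```

A block handler has the same shape but receives a `Block` and runs on every new block, which is why receipt handlers are the cheaper choice when you only care about one account.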
-There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. 
This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. 
We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/ko/subgraphs/cookbook/polymarket.mdx b/website/src/pages/ko/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/ko/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. 
+The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
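The axios flow above can also be sketched with the built-in `fetch` API. This is a hedged sketch rather than code from the cookbook: the gateway URL pattern follows the standard form shown elsewhere in these docs, the Subgraph ID is the one from the Explorer link above, and `_meta` is The Graph's built-in sync-status field.

```typescript
// Hedged sketch: build a POST request for the Polymarket Subgraph via the
// gateway. The URL shape and `_meta` field are standard Graph API conventions;
// supply your own API key (e.g. from process.env).
const POLYMARKET_SUBGRAPH_ID = 'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'

function buildGatewayRequest(apiKey: string, query: string) {
  return {
    url: `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${POLYMARKET_SUBGRAPH_ID}`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    },
  }
}

// `_meta` reports the latest block the Subgraph has indexed.
const syncStatusQuery = '{ _meta { block { number } } }'

// Usage (requires a real API key):
// const { url, init } = buildGatewayRequest(process.env.GRAPH_API_KEY ?? '', syncStatusQuery)
// const { data } = await (await fetch(url, init)).json()
```

Any query against the Polymarket schema linked above can be substituted for `syncStatusQuery`.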
diff --git a/website/src/pages/ko/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/ko/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fc7e0ff52eb4..e17e594408ff 100644 --- a/website/src/pages/ko/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
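As a minimal sketch of the server-side pattern described above — the helper names, the `GRAPH_API_KEY` env var, and the endpoint shape are illustrative assumptions, not the cookbook's actual code:

```typescript
// Sketch of a server-only helper a Next.js Server Component could call
// (e.g. from app/lib/graph.ts). Because this module only runs on the server,
// the API key read from process.env never reaches the browser.
const endpointFor = (apiKey: string, subgraphId: string): string =>
  `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`

async function querySubgraph(subgraphId: string, query: string): Promise<unknown> {
  const apiKey = process.env.GRAPH_API_KEY // stays server-side
  if (!apiKey) throw new Error('GRAPH_API_KEY is not set')
  const res = await fetch(endpointFor(apiKey, subgraphId), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const { data } = await res.json()
  return data // only query results cross the server/client boundary, never the key
}
```

A server component can then `await querySubgraph(...)` and pass plain data down to client components as props.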
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/ko/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/ko/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..de6fdd9fd9fb --- /dev/null +++ b/website/src/pages/ko/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Get Started + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
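To make "block statistics" concrete, here is a small consumer-side sketch of an aggregation the composed Subgraph enables. The entity shape is an assumption based on the three source entities described above (`timestamp`, cost, and `size`); adapt it to your actual schema.

```typescript
// Hedged sketch: aggregate data queried from the composed block-stats
// Subgraph. The BlockStats shape is illustrative, not the actual schema.
interface BlockStats {
  number: number
  timestamp: number // seconds, from the block-time source Subgraph
  size: number // bytes, from the block-size source Subgraph
}

// Average seconds between consecutive blocks, given a list sorted by number.
function averageBlockTime(blocks: BlockStats[]): number {
  if (blocks.length < 2) return 0
  const span = blocks[blocks.length - 1].timestamp - blocks[0].timestamp
  return span / (blocks.length - 1)
}
```

Feeding it the block entities returned by a query such as `{ blocks(first: 100, orderBy: number) { number timestamp size } }` (field names assumed) yields the chain's recent average block time.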
diff --git a/website/src/pages/ko/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ko/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..17b105edac59 --- /dev/null +++ b/website/src/pages/ko/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories.
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Get Started + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance. + +## Additional Resources + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/ko/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ko/subgraphs/cookbook/subgraph-debug-forking.mdx index 6610f19da66d..91aa7484d2ec 100644 --- a/website/src/pages/ko/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! ## Ok, what is it? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_. ## What?! How? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Please, show me some code! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. The usual way to attempt a fix is: 1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Wait for it to sync-up. 4. If it breaks again go back to 1, otherwise: Hooray!
It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1.
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes, I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after!
(no potatoes tho) diff --git a/website/src/pages/ko/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ko/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/ko/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification. - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph Uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ko/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ko/subgraphs/cookbook/transfer-to-the-graph.mdx index 194deb018404..9a4b037cafbc 100644 --- a/website/src/pages/ko/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/ko/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Example -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
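To illustrate the query URL described above, the following is a minimal GraphQL query that should work against any Subgraph endpoint: it uses only the built-in `_meta` field and assumes no schema-specific field names.

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

Sending this to the query URL (with a valid API key in place) returns the latest indexed block number, which is a quick way to confirm the endpoint is live before writing schema-specific queries.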
### Additional Resources -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
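As an illustrative sketch (the addresses and handler name here are placeholders, not taken from the original docs), the manifest entry for a directed filter like Example 1 could look like:

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer # hypothetical handler name
    topic1: ['0xAddressA'] # only match events where the sender is 0xAddressA
    topic2: ['0xAddressB'] # only match events where the receiver is 0xAddressB
```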
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir <OUTPUT_DIR>] [<CONFIG_FILE>] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..2e256ae18190 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br/>
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.

A common pattern is to access the contract from which an event originates. This is achieved with the following code:

@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {

As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.

-Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.

#### Handling Reverted Calls

@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false

import { log } from '@graphprotocol/graph-ts'
```

-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.

The `log` API includes the following functions:

@@ -590,7 +590,7 @@ The `log` API includes the following functions:

- `log.info(fmt: string, args: Array): void` - logs an informational message.
- `log.warning(fmt: string, args: Array): void` - logs a warning.
- `log.error(fmt: string, args: Array): void` - logs an error message. 
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
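Reviewer note: the `{}` placeholder substitution documented earlier for the `log` API can be sketched in plain TypeScript. This is an illustration of the documented format, not the actual `graph-ts` implementation; leaving unmatched placeholders intact is an assumption of this sketch.

```typescript
// Replaces each `{}` in the format string with the next value from `args`,
// mirroring the placeholder behavior documented for the `log` API:
// the first `{}` takes the first value, the second `{}` the second, and so on.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

console.log(formatLog('Transfer from {} to {} of {} tokens', ['0xabc', '0xdef', '42']))
// Transfer from 0xabc to 0xdef of 42 tokens
```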
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.

Here is a YAML example illustrating the usage of various types in the `context` section:

@@ -887,4 +887,4 @@ dataSources:

- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Must be quoted due to its large size.

-This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
diff --git a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx
index f8d0c9c004c2..65e8e3d4a8a3 100644
--- a/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/ko/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@ title: Common AssemblyScript Issues
---

-There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that developers commonly run into during Subgraph development. They vary in how difficult they are to debug; however, being aware of them may help. The following is a non-exhaustive list of these issues:

- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. 
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx index 674cc5bc22d2..c9d6966ef5fe 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Create a Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph

-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:

```sh
graph init --from-example=example-subgraph
```

-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.

-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.

### Add New `dataSources` to an Existing Subgraph

-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.

-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:

```sh
graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..2eb805320753 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. 
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
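Reviewer note: since the note above only mentions declaring `fullTextSearch` under `features`, a schema-side sketch may help. The directive shape below follows the pattern used elsewhere in The Graph's schema docs, but the `bandSearch` name and the `Band` entity with its fields are illustrative assumptions — check the fulltext section of the schema reference for the exact syntax.

```graphql
# Illustrative sketch: a fulltext search field over an assumed Band entity.
type _Schema_
  @fulltext(
    name: "bandSearch"
    language: en
    algorithm: rank
    include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }] }]
  )

type Band @entity {
  id: Bytes!
  name: String!
  description: String!
}
```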
## Languages supported diff --git a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..4931e6b1fd34 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..085eaf2fb533 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. 
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. 
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.

 ### Defining a Call Handler
@@ -169,7 +169,7 @@ dataSources:
       abi: Gravity
     mapping:
       kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       entities:
         - Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han

 ### Mapping Function

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:

 ```typescript
 import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a

 ## Block Handlers

-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
 ### Supported Filters
@@ -218,7 +218,7 @@ filter:

 _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.

 The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.

@@ -232,7 +232,7 @@ dataSources:
       abi: Gravity
     mapping:
       kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       entities:
         - Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
       every: 10
 ```

-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.

 #### Once Filter

@@ -276,7 +276,7 @@ blockHandlers:
       kind: once
 ```

-The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
+The defined handler with the once filter will be called only once before all other handlers run.
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
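As a concrete example of the first capability, a time travel query pins its results to a past block, which only works if history at that block has not been pruned. A sketch (the `gravatars` entity and the block number are borrowed from this guide's Gravatar example; adjust to your own schema):

```graphql
{
  gravatars(first: 5, block: { number: 6627917 }) {
    id
    displayName
  }
}
```

With `prune: auto`, blocks older than the retained window cannot be targeted this way: Graph Node returns an error instead of the historical entity states.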
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ko/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
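As a taste of those assertion features, a minimal Matchstick test might look like the following (a sketch based on the Gravatar entity used throughout this guide; the generated `Gravatar` binding and its fields come from your own schema and codegen output):

```typescript
import { assert, test, clearStore } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema'

test('Gravatar entity is stored correctly', () => {
  // Create and save an entity, as a mapping handler would.
  const gravatar = new Gravatar('0x1')
  gravatar.displayName = 'First Gravatar'
  gravatar.save()

  // Assert directly against the mocked store state.
  assert.fieldEquals('Gravatar', '0x1', 'displayName', 'First Gravatar')
  clearStore()
})
```

Running `graph test` picks this file up from the `tests/` folder and executes it against Matchstick's in-memory store.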
## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

 `.test.ts` file:

@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
 import { ipfs } from '@graphprotocol/graph-ts'

-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
 export { processGravatar } from './utils'

 test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
     network: mainnet
     mapping:
       kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       file: ./src/token-lock-wallet.ts
       handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {
 ## Test Coverage

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.

 The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked.
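The coverage check described above boils down to set membership: which handlers declared in `subgraph.yaml` also appear in the compiled test binary. A plain-TypeScript sketch of that bookkeeping (illustrative only — `coverageReport` and the rounding choice are hypothetical, not Matchstick's actual implementation):

```typescript
// Illustrative bookkeeping behind a coverage report: compare the handlers a
// manifest declares against the handler names found in the compiled binary.
// (Hypothetical helper — not Matchstick's actual implementation.)
function coverageReport(declared: string[], called: Set<string>): { covered: string[]; percent: number } {
  const covered = declared.filter((handler) => called.has(handler))
  const percent = declared.length === 0 ? 100 : Math.round((covered.length / declared.length) * 100)
  return { covered, percent }
}

// Handler names taken from the Gravatar example used throughout this guide.
const declared = ['handleNewGravatar', 'handleUpdatedGravatar', 'handleCreateGravatar']
const called = new Set(['handleNewGravatar', 'handleCreateGravatar'])

const report = coverageReport(declared, called)
console.log(report.covered) // handlers exercised by at least one test
console.log(`${report.percent}% coverage`) // → "67% coverage" (2 of 3 handlers)
```

The real tool derives the `called` set by inspecting the `wat` text for handler names, but the reported number is the same declared-versus-exercised ratio.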
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ko/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. 
The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph.
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
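The health-check interpretation described above can be sketched in a few lines. This is an illustrative helper, not part of the docs: it assumes a response already parsed from the index-node endpoint with the `chains`, `chainHeadBlock`, `latestBlock`, `synced`, and `health` fields shown in the example query, and the sample payload is made up.

```python
# Sketch: interpret an indexing-status result from Graph Node's index-node
# endpoint (port 8030/graphql by default). The field names follow the
# status query shown above; the sample payload is illustrative only.

def sync_lag(status: dict) -> int:
    """Blocks between the chain head and the Subgraph's latest indexed block."""
    chain = status["chains"][0]
    return int(chain["chainHeadBlock"]["number"]) - int(chain["latestBlock"]["number"])

def summarize(status: dict) -> str:
    """One-line health summary in the terms used by the docs."""
    if status["health"] == "failed":
        # On failure, the docs say to inspect `fatalError` for details.
        return "failed: check the fatalError field"
    state = "synced" if status["synced"] else "catching up"
    return f"healthy ({state}), {sync_lag(status)} blocks behind"

sample = {
    "synced": True,
    "health": "healthy",
    "chains": [{"chainHeadBlock": {"number": "21000000"},
                "latestBlock": {"number": "20999950"}}],
}
print(summarize(sample))  # healthy (synced), 50 blocks behind
```

A persistently growing `sync_lag` is the "running behind" condition the docs describe; a `failed` health value means indexing has halted.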
diff --git a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx index 634c2700ba68..77d10212c770 100644 --- a/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ko/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ko/subgraphs/developing/developer-faq.mdx b/website/src/pages/ko/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/ko/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ko/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at the `Access to smart contract state` section of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts?
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under which it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ko/subgraphs/developing/introduction.mdx b/website/src/pages/ko/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/ko/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ko/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1.
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
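To make the "query existing Subgraphs via GraphQL" step above concrete, here is a minimal sketch of assembling the request body a client would POST to a Subgraph's query endpoint. The endpoint URL placeholders and the `tokens` entity are assumptions for illustration, not from this page.

```python
import json

# Placeholder endpoint; a real one comes from Subgraph Studio or the gateway.
ENDPOINT = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

# A hypothetical query against a Subgraph that defines a `tokens` entity.
query = """
{
  tokens(first: 5) {
    id
    symbol
  }
}
"""

# GraphQL over HTTP: the query travels as a JSON object under the "query" key.
payload = json.dumps({"query": query})
print(json.loads(payload)["query"].strip().startswith("{"))  # True
```

Any HTTP client can then POST `payload` to the endpoint with `Content-Type: application/json`; results come back as JSON under a top-level `data` key.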
diff --git a/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2.
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ko/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
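The guidance in the note above can be sketched as a tiny decision helper. This is a toy model only: the 3,000 GRT figure is the documentation's recommendation, the rule that signal on an ineligible Subgraph attracts no additional Indexers comes from the note, and the function itself is illustrative, not part of any Graph API.

```typescript
// Toy model of the curation guidance above; not part of any Graph API.
const RECOMMENDED_SELF_SIGNAL_GRT = 3000;

function likelyToAttractIndexers(eligibleForRewards: boolean, signalGrt: number): boolean {
  // Signal on a Subgraph that is not eligible for rewards does not
  // attract additional Indexers, regardless of the amount.
  if (!eligibleForRewards) return false;
  return signalGrt >= RECOMMENDED_SELF_SIGNAL_GRT;
}

console.log(likelyToAttractIndexers(true, 3000)); // → true
```

In practice, eligibility depends on the feature-support matrix and supported networks linked above.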
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ko/subgraphs/developing/subgraphs.mdx b/website/src/pages/ko/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/ko/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ko/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
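To make the "anyone can query it" step concrete, here is a minimal TypeScript sketch of posting a GraphQL document to a Subgraph endpoint. The gateway URL shape and the `tokens` entity are assumptions for illustration; your Subgraph's schema defines what is actually queryable.

```typescript
// Placeholder endpoint — substitute a real API key and Subgraph ID.
const SUBGRAPH_URL = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

// Build the JSON body for a GraphQL POST request.
// `tokens` is a hypothetical entity defined by the Subgraph's schema.
function buildQueryBody(first: number): string {
  const query = `{
  tokens(first: ${first}) {
    id
    owner
  }
}`;
  return JSON.stringify({ query });
}

// Send the query; a Subgraph endpoint accepts {"query": "..."} via POST.
async function querySubgraph(first: number): Promise<unknown> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildQueryBody(first),
  });
  return res.json();
}

console.log(buildQueryBody(3));
```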
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ko/subgraphs/explorer.mdx b/website/src/pages/ko/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/ko/subgraphs/explorer.mdx +++ b/website/src/pages/ko/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics:

@@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th

### Curating Tab

-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.

Within this tab, you’ll find an overview of:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/ko/subgraphs/guides/arweave.mdx b/website/src/pages/ko/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..e59abffa383f
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages. 
For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not indexing the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. 
AssemblyScript Mappings - `mapping.ts` + +This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet + +Arweave data sources support two types of handlers: + +- `blockHandlers` - Run on every new Arweave block. 
No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently, an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`.
+
+> The source.owner can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+The schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. 
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). + +## Deploying an Arweave Subgraph in Subgraph Studio + +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. + +```bash +graph deploy --access-token +``` + +## Querying an Arweave Subgraph + +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can I index the stored files on Arweave? + +Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). + +### Can I identify Bundlr bundles in my Subgraph? + +This is not currently supported. + +### How can I filter transactions to a specific account? + +The source.owner can be the user's public key or account address. + +### What is the current encryption format? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..ab5076c5ebf4 --- /dev/null +++ 
b/website/src/pages/ko/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@ +--- +title: Smart Contract Analysis with Cana CLI +--- + +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. + +## Overview + +**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of chains added run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. 
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +or + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ko/subgraphs/guides/enums.mdx b/website/src/pages/ko/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..9f55ae07c54b --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. 
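A GraphQL enum behaves much like an enum in a general-purpose language. As a rough, hypothetical TypeScript sketch (not code from this guide), restricting a value to a fixed set of allowed names looks like this:

```typescript
// Hypothetical illustration: a string enum with exactly three allowed values,
// mirroring what a GraphQL enum enforces at the schema level.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

// Only the three values above are valid; anything else is rejected.
function isValidStatus(value: string): boolean {
  return Object.values(TokenStatus).includes(value as TokenStatus)
}
```

The same guarantee that `isValidStatus` checks by hand here is what the GraphQL type system gives you for free when a field is declared with an enum type.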
+
+### Example of Enums in Your Schema
+
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+
+You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
+
+Here's what an enum definition might look like in your schema, based on the example above:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.
+
+To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).
+
+## Benefits of Using Enums
+
+- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
+- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
+- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.
+
+### Without Enums
+
+If you choose to define the type as a string instead of using an Enum, your code might look like this:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+  owner: Bytes! # Owner of the token
+  tokenStatus: String! # String field to track token status
+  timestamp: BigInt!
+}
+```
+
+In this schema, `TokenStatus` is a simple string with no specific, allowed values.
+
+#### Why is this a problem?
+
+- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
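As a simplified, self-contained sketch of that idea (the `Sale` shape, `MARKETPLACES` list, and `recordSale` helper are hypothetical stand-ins, not the guide's actual graph-ts mapping code), a handler can refuse any marketplace value outside the enum before storing it:

```typescript
// Hypothetical sketch: recording a sale with an enum-backed marketplace field.
// `Sale` stands in for a Subgraph entity; real mappings use graph-ts classes.
type Sale = {
  id: string
  marketplace: string // must be one of the Marketplace enum values
}

const MARKETPLACES = ['OpenSeaV1', 'OpenSeaV2', 'SeaPort', 'LooksRare'] as const

function recordSale(id: string, marketplace: string): Sale {
  // Guard against values outside the enum instead of storing free-form strings
  if (!MARKETPLACES.includes(marketplace as (typeof MARKETPLACES)[number])) {
    throw new Error(`Unknown marketplace: ${marketplace}`)
  }
  return { id, marketplace }
}
```

In a real Subgraph, the schema-level enum performs this validation for you; the point of the sketch is only to show why free-form strings need the guard at all.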
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  } else {
+    return 'Unknown' // Fallback so that every code path returns a value
+  }
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/ko/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..d9abe0e70d2a --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, we will graft a new Subgraph tracking the new contract onto the existing "base" Subgraph.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
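For reference, the full `graft-replacement` manifest can be sketched by combining the base manifest shown earlier with the `features` and `graft` items. The contract address, Deployment ID, and start block below are placeholders to be replaced with your own values:

```yaml
specVersion: 1.3.0
features:
  - grafting
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: Lock
    network: sepolia
    source:
      address: '0xNewContractAddress' # placeholder: the redeployed contract
      abi: Lock
      startBlock: 5956000 # placeholder: where the new contract should start indexing
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Withdrawal
      abis:
        - name: Lock
          file: ./abis/Lock.json
      eventHandlers:
        - event: Withdrawal(uint256,uint256)
          handler: handleWithdrawal
      file: ./src/lock.ts
graft:
  base: Qm... # placeholder: Deployment ID of the base Subgraph
  block: 5956000 # placeholder: block of the last event you care about from the old contract
```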
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/ko/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..e78a69eb7fa2 --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
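To make the mapping side concrete, here is a rough, self-contained sketch of what a receipt handler does: it walks the receipt's logs, parses the stringified JSON many NEAR contracts emit, and keeps the fields it wants to store. The types and `JSON.parse` call are simplified stand-ins, not the actual graph-ts API (real mappings use the NEAR-specific types and `json.fromString` described later in this guide):

```typescript
// Simplified sketch of a NEAR receipt handler (hypothetical stand-ins for the
// graph-ts types; real handlers receive a ReceiptWithOutcome and save entities).
type Receipt = { receiverId: string; signerId: string; logs: string[] }

// NEAR contracts often emit stringified JSON in their logs; a handler can
// parse each log and pick out the event names it wants to index.
function handleReceipt(receipt: Receipt): string[] {
  const events: string[] = []
  for (const log of receipt.logs) {
    const parsed = JSON.parse(log) as { event?: string }
    if (parsed.event) events.push(parsed.event)
  }
  return events
}
```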
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+NEAR data sources support two types of handlers:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Schema Definition
+
+The schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts. For example, the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+ +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## References + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ko/subgraphs/guides/polymarket.mdx b/website/src/pages/ko/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. + +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. 
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example will return the exact same output as above.
+
+### Sample Code from node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+    payout
+    redeemer
+    id
+    timestamp
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). 
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ko/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ko/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..e17e594408ff
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
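As a rough illustration of the problem (the key value and the `<subgraph-id>` placeholder below are made up for this sketch), anything a client component interpolates into a request URL ships to the browser inside the JS bundle, where it can be read straight out of the source or the network tab:

```javascript
// Client-side sketch: the bundler inlines the env var's value at build time,
// so the key is baked into the JavaScript shipped to every visitor.
const apiKey = 'my-secret-api-key' // stand-in for an inlined env variable

// The key ends up embedded in the URL of every query the browser sends:
const queryUrl = `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/<subgraph-id>`

// Anyone inspecting the bundle or the network tab can recover it:
console.log(queryUrl.includes('my-secret-api-key')) // true
```

A server component avoids this by keeping both the interpolation and the request itself on the server.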
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api-key>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..09f1939c1fde
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the release notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: all Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: 
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can’t use regular event, call, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block. 
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from the 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance. 
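To make the flow above concrete, here is a plain JavaScript sketch of what the composed Subgraph conceptually does: join the per-block entities produced by the three source Subgraphs into a single stats record keyed by block number. The entity shapes below are invented for illustration, not the actual schemas from the example repository.

```javascript
// Hypothetical per-block entities, as each source Subgraph might store them.
const blockTimes = [
  { number: 1, timestamp: 1000 },
  { number: 2, timestamp: 1012 },
]
const blockCosts = [
  { number: 1, gasUsed: 21000 },
  { number: 2, gasUsed: 42000 },
]
const blockSizes = [
  { number: 1, size: 500 },
  { number: 2, size: 750 },
]

// The composed Subgraph is triggered by the source entities and merges them
// into one consolidated record per block.
function buildBlockStats(times, costs, sizes) {
  const byNumber = new Map()
  for (const list of [times, costs, sizes]) {
    for (const entity of list) {
      const stats = byNumber.get(entity.number) ?? { number: entity.number }
      Object.assign(stats, entity)
      byNumber.set(entity.number, stats)
    }
  }
  return [...byNumber.values()]
}

const blockStats = buildBlockStats(blockTimes, blockCosts, blockSizes)
console.log(blockStats[1]) // { number: 2, timestamp: 1012, gasUsed: 42000, size: 750 }
```

In the real feature, this merging happens in the composed Subgraph's mapping handlers, with the source Subgraphs' entities acting as the triggers.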
+ +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ko/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ko/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..91aa7484d2ec --- /dev/null +++ b/website/src/pages/ko/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Quick and Easy Subgraph Debugging Using Forks +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! + +## Ok, what is it? + +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. + +## What?! How? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3. 
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. 
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After making the changes, I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/ko/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ko/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..a08e2a7ad8c9
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warnings are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-api>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..9a4b037cafbc
--- /dev/null
+++ b/website/src/pages/ko/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). 
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph. + +In The Graph CLI, run the following command: + +```sh +graph deploy --ipfs-hash +``` + +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). + +## 3. Publish Your Subgraph to The Graph Network + +![publish button](/img/publish-sub-transfer.png) + +### Query Your Subgraph + +> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. + +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. + +#### Example + +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: + +![Query URL](/img/cryptopunks-screenshot-transfer.png) + +The query URL for this Subgraph is: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/[your-own-api-key]/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK +``` + +Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. + +### Getting your own API Key + +You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: + +![API keys](/img/Api-keys-screenshot.png) + +### Monitor Subgraph Status + +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). + +### Additional Resources + +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). 
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ko/subgraphs/querying/best-practices.mdx b/website/src/pages/ko/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/ko/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ko/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ko/subgraphs/querying/from-an-application.mdx b/website/src/pages/ko/subgraphs/querying/from-an-application.mdx index 681f6e6ba8d5..44677d78dcdf 100644 --- a/website/src/pages/ko/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ko/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. 
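Both endpoints above accept standard GraphQL-over-HTTP POST requests. As a minimal sketch of what a raw request to the network gateway looks like — the API key, Subgraph ID, and query below are placeholder values, not real ones:

```python
import json
from urllib.request import Request

def build_graphql_request(api_key: str, subgraph_id: str, query: str) -> Request:
    """Assemble a POST request for a Graph Network gateway endpoint."""
    url = f"https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"
    body = json.dumps({"query": query}).encode("utf-8")
    return Request(url, data=body, headers={"Content-Type": "application/json"})

# Placeholder values for illustration only.
req = build_graphql_request("my-api-key", "my-subgraph-id", "{ _meta { block { number } } }")
# urllib.request.urlopen(req) would send it; the indexed data comes back under "data".
```

In practice, a dedicated client library handles this plumbing (plus retries, pagination, and typing) for you, which is why the sections below focus on popular GraphQL clients.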
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/ko/subgraphs/querying/graph-client/README.md b/website/src/pages/ko/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ko/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ko/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most up-to-date response. This is useful if you want the most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx index b3003ece651a..b82afcfa252c 100644 --- a/website/src/pages/ko/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ko/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| `&#124;` | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix matches (2 characters required). | #### Examples @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. 
The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
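One common client-side use of this metadata is a freshness/health gate before trusting query results. Here is a minimal sketch, assuming a decoded response shaped like the `_meta` query above (the block number and deployment hash are illustrative values):

```python
# Hypothetical decoded `_meta` query result, shaped as described above.
meta_response = {
    "_meta": {
        "block": {"number": 21000000},
        "deployment": "QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED",
        "hasIndexingErrors": False,
    }
}

def is_safe_to_use(payload: dict, min_block: int) -> bool:
    """Trust the data only if indexing is error-free and has reached min_block."""
    info = payload["_meta"]
    return not info["hasIndexingErrors"] and info["block"]["number"] >= min_block
```

A consumer might call `is_safe_to_use(meta_response, expected_block)` after each poll and fall back or retry when it returns `False`.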
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ko/subgraphs/querying/introduction.mdx b/website/src/pages/ko/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/ko/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ko/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ko/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ko/subgraphs/querying/python.mdx b/website/src/pages/ko/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ko/subgraphs/querying/python.mdx +++ b/website/src/pages/ko/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ko/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this means the query code must be updated manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using the Subgraph ID may mean queries are answered by an older version of the Subgraph, because the new version needs time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ko/subgraphs/quick-start.mdx b/website/src/pages/ko/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/ko/subgraphs/quick-start.mdx +++ b/website/src/pages/ko/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
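The answers to these prompts end up in the generated scaffold, most visibly in the manifest's data source definition. As a rough illustration — the values below are hypothetical placeholders, and the exact layout varies by protocol and CLI version — the information collected maps roughly to these `subgraph.yaml` fields:

```python
# Hypothetical values showing where the `graph init` answers land in the
# manifest's data source definition (simplified; not a complete manifest).
data_source = {
    "kind": "ethereum",
    "network": "mainnet",  # Ethereum network prompt
    "name": "MyContract",  # Contract Name prompt
    "source": {
        "address": "0x0000000000000000000000000000000000000000",  # Contract address prompt
        "abi": "MyContract",      # ABI prompt
        "startBlock": 12345678,   # Start Block prompt
    },
}

def validate_data_source(ds: dict) -> bool:
    """Sanity-check the fields collected by `graph init`."""
    addr = ds["source"]["address"]
    return (
        addr.startswith("0x")
        and len(addr) == 42          # 20-byte hex address
        and ds["source"]["startBlock"] >= 0
    )
```

Checks like these mirror what the CLI enforces interactively: a well-formed contract address and a sensible start block keep indexing from scanning blocks that predate the contract.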
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. 
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ko/substreams/developing/dev-container.mdx b/website/src/pages/ko/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ko/substreams/developing/dev-container.mdx +++ b/website/src/pages/ko/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ko/substreams/developing/sinks.mdx b/website/src/pages/ko/substreams/developing/sinks.mdx index 5f6f9de21326..45e5471f0d09 100644 --- a/website/src/pages/ko/substreams/developing/sinks.mdx +++ b/website/src/pages/ko/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/ko/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/ko/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/ko/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ko/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. 
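The per-block semantics described above (the latest update per account wins, vote-program noise is dropped, and deletions arrive as `deleted == True` payloads) can be sketched as a small consumer-side reducer. This is illustrative only: the field names `account`, `owner`, `slot`, and `deleted` are assumptions for demonstration, not the actual Protobuf schema.

```javascript
// Illustrative sketch of the per-block account-change semantics described
// above. Field names are assumed; consult the Protobuf reference for the
// real schema.
const VOTE_PROGRAM = 'Vote111111111111111111111111111111111111111';

function latestAccountUpdates(updates) {
  const latest = new Map();
  for (const update of updates) {
    // Low-importance vote-program changes are omitted upstream.
    if (update.owner === VOTE_PROGRAM) continue;
    const previous = latest.get(update.account);
    // Only the most recent update per account is kept.
    if (!previous || update.slot >= previous.slot) {
      latest.set(update.account, update);
    }
  }
  return [...latest.values()];
}

const sample = [
  { account: 'TokenAccA', owner: 'SomeProgram', slot: 1, deleted: false },
  { account: 'TokenAccA', owner: 'SomeProgram', slot: 2, deleted: true }, // deletion payload
  { account: 'VoteAcc', owner: VOTE_PROGRAM, slot: 2, deleted: false }, // filtered out
];
console.log(latestAccountUpdates(sample));
// -> [{ account: 'TokenAccA', owner: 'SomeProgram', slot: 2, deleted: true }]
```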
diff --git a/website/src/pages/ko/substreams/developing/solana/transactions.mdx b/website/src/pages/ko/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/ko/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ko/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ko/substreams/introduction.mdx b/website/src/pages/ko/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/ko/substreams/introduction.mdx +++ b/website/src/pages/ko/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ko/substreams/publishing.mdx b/website/src/pages/ko/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/ko/substreams/publishing.mdx +++ b/website/src/pages/ko/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/ko/supported-networks.mdx b/website/src/pages/ko/supported-networks.mdx index 02e45c66ca42..ef2c28393033 100644 --- a/website/src/pages/ko/supported-networks.mdx +++ b/website/src/pages/ko/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Supported Networks hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ko/token-api/_meta-titles.json b/website/src/pages/ko/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/ko/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/ko/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
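Assuming the endpoint shape above, a request from Node 18+ (which ships `fetch` natively) might be sketched as follows. The base URL and the optional `network_id` query parameter appear elsewhere in the Token API docs; the exact query-string and header handling here is an illustrative assumption, not the official client.

```javascript
// Sketch of a Token API balances request. The access token is the JWT
// generated on The Graph Market (not the raw API key).
const BASE_URL = 'https://token-api.thegraph.com';

function balancesRequest(address, accessToken, networkId) {
  const url = new URL(`${BASE_URL}/balances/evm/${address}`);
  if (networkId) url.searchParams.set('network_id', networkId); // defaults to mainnet if omitted
  return {
    url: url.toString(),
    options: {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        Accept: 'application/json',
      },
    },
  };
}

// Usage (uncomment to actually call the API):
// const { url, options } = balancesRequest('0x...wallet', process.env.ACCESS_TOKEN, 'mainnet');
// const { data } = await (await fetch(url, options)).json();
```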
diff --git a/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/ko/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. diff --git a/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/ko/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/ko/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. 
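Since supplies and balances routinely exceed JavaScript's safe integer range, a raw amount is best combined with the `decimals` field using `BigInt`. A minimal sketch of that conversion follows; the response field names are assumptions for illustration, not the API contract.

```javascript
// Convert a raw integer amount string plus a `decimals` count into a
// human-readable decimal string, without floating-point precision loss.
function formatUnits(raw, decimals) {
  const value = BigInt(raw);
  const base = 10n ** BigInt(decimals);
  const whole = value / base;
  const fraction = (value % base)
    .toString()
    .padStart(decimals, '0')
    .replace(/0+$/, ''); // trim trailing zeros
  return fraction ? `${whole}.${fraction}` : whole.toString();
}

console.log(formatUnits('123450000000000000000', 18)); // -> "123.45"
console.log(formatUnits('1000000', 6)); // -> "1"
```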
diff --git a/website/src/pages/ko/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ko/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/ko/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/ko/token-api/faq.mdx b/website/src/pages/ko/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ko/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
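A tiny guard helper can catch the mistakes listed above before a request ever leaves your client. This is an illustrative sketch, not part of any official SDK.

```javascript
// Build the Authorization header for the Token API, rejecting the most
// common mistakes: an empty token and a pasted-in "Bearer " prefix.
function authHeader(accessToken) {
  if (!accessToken) {
    throw new Error('Access token is empty; generate a JWT on The Graph Market');
  }
  if (accessToken.startsWith('Bearer ')) {
    throw new Error('Pass the raw JWT; the "Bearer " prefix is added for you');
  }
  return { Authorization: `Bearer ${accessToken}` };
}

console.log(authHeader('eyJ-example-jwt'));
// -> { Authorization: 'Bearer eyJ-example-jwt' }
```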
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? 
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/ko/token-api/mcp/claude.mdx b/website/src/pages/ko/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..12a036b6fc24 --- /dev/null +++ b/website/src/pages/ko/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file. 
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try using the full path to the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; if it looks correct, verify in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/ko/token-api/mcp/cline.mdx b/website/src/pages/ko/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..ef98e45939fe --- /dev/null +++ b/website/src/pages/ko/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try using the full path to the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; if it looks correct, verify in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/ko/token-api/mcp/cursor.mdx b/website/src/pages/ko/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..658108d1337b --- /dev/null +++ b/website/src/pages/ko/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). 
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try using the full path to the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; if it looks correct, verify in your browser that `https://token-api.thegraph.com/sse` is reachable. 
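The Claude Desktop, Cline, and Cursor guides above all use the same `mcpServers` config shape, and the two most common failures ("ENOENT" and "Server disconnected") usually trace back to a bad `command` path, missing `--sse-url` argument, or an unset `ACCESS_TOKEN`. Below is a minimal sketch of a local sanity check for that shape; the `validateMcpConfig` helper is hypothetical and not part of `@pinax/mcp`:

```javascript
// Hypothetical sanity check for an MCP server config of the shape used above.
// Not part of @pinax/mcp - just a quick local lint before launching the client.
function validateMcpConfig(config) {
  const problems = []
  const servers = config.mcpServers || {}
  if (Object.keys(servers).length === 0) {
    problems.push('no "mcpServers" entries found')
  }
  for (const [name, server] of Object.entries(servers)) {
    if (!server.command) {
      problems.push(`${name}: missing "command" (npx, bunx, or a full path)`)
    }
    if (!Array.isArray(server.args) || !server.args.includes('--sse-url')) {
      problems.push(`${name}: "args" should include --sse-url and the SSE endpoint`)
    }
    if (!server.env || !server.env.ACCESS_TOKEN) {
      problems.push(`${name}: ACCESS_TOKEN is empty - expect "Server disconnected"`)
    }
  }
  return problems
}

// Example: the template config with an empty ACCESS_TOKEN is flagged.
const config = {
  mcpServers: {
    'token-api': {
      command: 'npx',
      args: ['@pinax/mcp', '--sse-url', 'https://token-api.thegraph.com/sse'],
      env: { ACCESS_TOKEN: '' },
    },
  },
}
console.log(validateMcpConfig(config))
```

Run it with `node` after pasting in your own config object; any non-empty output points at the likely cause before you restart the client.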
diff --git a/website/src/pages/ko/token-api/monitoring/get-health.mdx b/website/src/pages/ko/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/ko/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/ko/token-api/monitoring/get-networks.mdx b/website/src/pages/ko/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/ko/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/ko/token-api/monitoring/get-version.mdx b/website/src/pages/ko/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/ko/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/ko/token-api/quick-start.mdx b/website/src/pages/ko/token-api/quick-start.mdx new file mode 100644 index 000000000000..4653c3d41ac6 --- /dev/null +++ b/website/src/pages/ko/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Quick Start +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/mr/about.mdx b/website/src/pages/mr/about.mdx index 6ec630cd8e4e..9597ecb03bb2 100644 --- a/website/src/pages/mr/about.mdx +++ b/website/src/pages/mr/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![ग्राफिक डेटा ग्राहकांना प्रश्न देण्यासाठी ग्राफ नोड कसा वापरतो हे स्पष्ट करणारे ग्राफिक](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte 1. A dapp स्मार्ट करारावरील व्यवहाराद्वारे इथरियममध्ये डेटा जोडते. 2. व्यवहारावर प्रक्रिया करताना स्मार्ट करार एक किंवा अधिक इव्हेंट सोडतो. -3. ग्राफ नोड सतत नवीन ब्लॉक्ससाठी इथरियम स्कॅन करतो आणि तुमच्या सबग्राफचा डेटा त्यात असू शकतो. -4. ग्राफ नोड या ब्लॉक्समध्ये तुमच्या सबग्राफसाठी इथरियम इव्हेंट शोधतो आणि तुम्ही प्रदान केलेले मॅपिंग हँडलर चालवतो. मॅपिंग हे WASM मॉड्यूल आहे जे इथरियम इव्हेंट्सच्या प्रतिसादात ग्राफ नोड संचयित केलेल्या डेटा घटक तयार करते किंवा अद्यतनित करते. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. नोडचा [GraphQL एंडपॉइंट](https://graphql.org/learn/) वापरून ब्लॉकचेन वरून अनुक्रमित केलेल्या डेटासाठी dapp ग्राफ नोडची क्वेरी करते. ग्राफ नोड यामधून, स्टोअरच्या इंडेक्सिंग क्षमतांचा वापर करून, हा डेटा मिळविण्यासाठी त्याच्या अंतर्निहित डेटा स्टोअरच्या क्वेरींमध्ये GraphQL क्वेरीचे भाषांतर करतो. dapp हा डेटा अंतिम वापरकर्त्यांसाठी समृद्ध UI मध्ये प्रदर्शित करते, जो ते Ethereum वर नवीन व्यवहार जारी करण्यासाठी वापरतात. चक्राची पुनरावृत्ती होते. ## पुढील पायऱ्या -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx index 562824e64e95..d121f5a2d0f3 100644 --- a/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/mr/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. 
@@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx index b6ee08a5bbed..696f3c69a4fc 100644 --- a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 ट्रांस्फर टूल्स आपल्याला L1 वरून L2ला संदेश पाठविण्याच्या अर्बिट्रमच्या स्वभाविक विधानाचा वापर करतात. 
हा विधान "पुनः प्रयासयोग्य पर्याय" म्हणून ओळखला जातो आणि हा सर्व स्थानिक टोकन ब्रिजेस, अर्बिट्रम GRT ब्रिज यासह सहाय्यक आहे. आपण पुनः प्रयासयोग्य पर्यायांबद्दल अधिक माहिती [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) वाचू शकता. -आपल्याला आपल्या संपत्तींच्या (सबग्राफ, स्टेक, प्रतिनिधित्व किंवा पुरवणी) L2ला स्थानांतरित केल्यास, एक संदेश अर्बिट्रम GRT ब्रिजमध्ये पाठविला जातो ज्याने L2वर पुनः प्रयासयोग्य पर्याय तयार करतो. स्थानांतरण उपकरणात्रूटील वैल्यूत्या किंवा संचलनसाठी काही ईटीएच वॅल्यू आहे, ज्यामुळे 1) पर्याय तयार करण्यासाठी पैसे देणे आणि 2) L2मध्ये पर्याय संचालित करण्यासाठी गॅस देणे ह्याचा वापर केला जातो. परंतु, पर्याय संचालनाच्या काळात गॅसची किंमते वेळेत बदलू शकतात, ज्यामुळे ही स्वयंप्रयत्न किंवा संचालन प्रयत्न अपयशी होऊ शकतात. जेव्हा ती प्रक्रिया अपयशी होते, तेव्हा अर्बिट्रम ब्रिज किंवा 7 दिवसापर्यंत पुन्हा प्रयत्न करण्याची क्षमता आहे, आणि कोणत्याही व्यक्ती त्या "पुनर्मिलन" पर्यायाचा प्रयत्न करू शकतो (त्यासाठी अर्बिट्रमवर काही ईटीएच स्थानांतरित केलेले असणे आवश्यक आहे). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -ही आपल्याला सगळ्या स्थानांतरण उपकरणांमध्ये "पुष्टीकरण" चरण म्हणून ओळखता - आपल्याला अधिकांशपेक्षा अधिक आपल्याला स्वयंप्रयत्न सध्याच्या वेळेत स्वयंप्रयत्न सध्याच्या वेळेत स्वतः संचालित होईल, परंतु आपल्याला येते कि ते दिले आहे ह्याची तपासणी करणे महत्वपूर्ण आहे. 
आपल्याला किंवा 7 दिवसात कोणत्याही सफल पुनर्मिलनाचे प्रयत्न केले त्यामुळे प्रयत्नशील नसत्या आणि त्या 7 दिवसांत कोणताही प्रयत्न नसत्याने, अर्बिट्रम ब्रिजने पुनर्मिलन पर्यायाचा त्याग केला आहे, आणि आपली संपत्ती (सबग्राफ, स्टेक, प्रतिनिधित्व किंवा पुरवणी) वेळेत विचली जाईल आणि पुनर्प्राप्त केली जाऊ शकणार नाही. ग्राफचे मुख्य डेव्हलपर्सन्सने या परिस्थितियांच्या जाणीवपणे प्राणीसमूह ठरविले आहे आणि त्याच्या अगोदर पुनर्मिलन केले जाईल, परंतु याच्यातून, आपल्याला आपल्या स्थानांतरणाची पूर्ण करण्याची जबाबदारी आहे. आपल्याला आपल्या व्यवहाराची पुष्टी करण्यात किंवा संचालनाची समस्या आहे का, कृपया [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) वापरून संपूर्ण डेव्हलपर्सन्सची मदत करण्याची क्षमता आहे. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## सबग्राफ हस्तांतरण -### मी माझा सबग्राफ कसा हस्तांतरित करू? 
+### How do I transfer my Subgraph? -तुमचा सबग्राफ हस्तांतरित करण्यासाठी, तुम्हाला खालील चरण पूर्ण करावे लागतील: +To transfer your Subgraph, you will need to complete the following steps: 1. Ethereum mainnet वर हस्तांतरण सुरू करा 2. पुष्टीकरणासाठी 20 मिनिटे प्रतीक्षा करा -3. आर्बिट्रमवर सबग्राफ हस्तांतरणाची पुष्टी करा\* +3. Confirm Subgraph transfer on Arbitrum\* -4. आर्बिट्रम वर सबग्राफ प्रकाशित करणे समाप्त करा +4. Finish publishing Subgraph on Arbitrum 5. क्वेरी URL अपडेट करा (शिफारस केलेले) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### मी माझे हस्तांतरण कोठून सुरू करावे? -आपल्याला स्थानांतरण सुरू करण्याची क्षमता आहे Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) किंवा कोणत्याही सबग्राफ तपशील पृष्ठापासून सुरू करू शकता. सबग्राफ तपशील पृष्ठावर "सबग्राफ स्थानांतरित करा" बटणवर क्लिक करा आणि स्थानांतरण सुरू करा. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. 
-### माझा सबग्राफ हस्तांतरित होईपर्यंत मला किती वेळ प्रतीक्षा करावी लागेल +### How long do I need to wait until my Subgraph is transferred स्थानांतरणासाठी किंमतीतून प्रायः 20 मिनिटे लागतात. आर्बिट्रम ब्रिज आपल्याला स्वत: स्थानांतरण स्वयंप्रयत्नातून पूर्ण करण्यासाठी पारंपारिकपणे काम करत आहे. कितीतरी प्रकारांत स्थानांतरण केल्यास, गॅस किंमती वाढू शकतात आणि आपल्याला परिपुष्टीकरण पुन्हा करण्याची आवश्यकता लागू शकते. -### मी L2 मध्ये हस्तांतरित केल्यानंतर माझा सबग्राफ अजूनही शोधण्यायोग्य असेल का? +### Will my Subgraph still be discoverable after I transfer it to L2? -आपला सबग्राफ केवळ त्या नेटवर्कवर शोधन्यायला येतो, ज्यावर तो प्रकाशित केला जातो. उदाहरणार्थ, आपला सबग्राफ आर्बिट्रम वनवर आहे तर आपल्याला तो केवळ आर्बिट्रम वनवरच्या एक्सप्लोररमध्ये शोधू शकता आणि आपल्याला इथे एथेरियमवर शोधायला सक्षम नसेल. कृपया पृष्ठाच्या वरील नेटवर्क स्विचरमध्ये आर्बिट्रम वन निवडल्याची आपल्याला कसे सुनिश्चित करण्याची आवश्यकता आहे. स्थानांतरणानंतर, L1 सबग्राफ विकलप म्हणून दिसणारा. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### माझा सबग्राफ हस्तांतरित करण्यासाठी प्रकाशित करणे आवश्यक आहे का? +### Does my Subgraph need to be published to transfer it? -सबग्राफ स्थानांतरण उपकरणाचा लाभ घेण्यासाठी, आपल्याला आपल्या सबग्राफला आधीच प्रकाशित केलेला पाहिजे आणि त्याच्या सबग्राफच्या मालक वॉलेटमध्ये काही परिपुष्टी संकेत असणे आवश्यक आहे. आपला सबग्राफ प्रकाशित नसल्यास, आपल्याला साधारणपणे आर्बिट्रम वनवर सीधे प्रकाशित करण्यात योग्य आहे - संबंधित गॅस फीस खूपच किमान असतील. आपल्याला प्रकाशित सबग्राफ स्थानांतरित करू इच्छित असल्यास, परंतु मालक खाते त्यावर कोणतीही प्रतिसाद संकेत दिली नाही, तर आपण त्या खाते पासून थोडीसी परिपुष्टी (उदा. 
1 GRT) संकेतिक करू शकता; कृपया "स्वत: स्थानांतरित होणारी" संकेत निवडायला नक्की करा. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### माझ्या सबग्राफच्या इथेरियम मुख्य नेटवर्कचा संस्करण हस्तांतरित करताना Arbitrum वर काय होतं? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -आर्बिट्रमकडे आपल्या सबग्राफ स्थानांतरित करण्यानंतर, एथेरियम मुख्यनेट आवृत्ती विकलप म्हणून दिली जाईल. आपल्याला आपल्या क्वेरी URL वरील बदल करण्याची सल्ला आहे की त्याच्या 48 तासांत दिला जाईल. हेरंब विलंबप्रदान केलेले आहे ज्यामुळे आपली मुख्यनेट URL सक्रिय ठेवली जाईल आणि कोणत्याही तृतीय पक्षाच्या dapp समर्थनाच्या आधी अद्यतनित केल्या जाऊ शकतात. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### मी हस्तांतरित केल्यानंतर, मला आर्बिट्रमवर पुन्हा प्रकाशित करण्याची देखील आवश्यकता आहे का? @@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### एल2 व Ethereum मुख्य नेटवर्कवर प्रकाशन आणि संस्करणदेखील सारखं आहे का? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### पुन्हा प्रकाशित करताना माझ्या एंडपॉईंटला डाउन-टाइम असेल का? +### Will my Subgraph's curation move with my Subgraph? -आपण "स्वत: स्थानांतरित होणारी" संकेत निवडल्यास, आपल्या आपल्या स्वत: स्थानांतरित करणार्या सबग्राफसह 100% आपल्या पुरवणीने निवडलेल्या स्थानांतरण होईल. सबग्राफच्या सर्व स्थानांतरण संकेताच्या स्थानांतरणाच्या क्षणी जीआरटीत रूपांतरित केली जाईल, आणि आपल्या पुरवणीसंकेताशी संबंधित जीआरटी आपल्याला L2 सबग्राफवर संकेत वितरित करण्यासाठी वापरली जाईल. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -इतर क्युरेटर्सनी त्याच्या भागाची GRT वापरून घेण्याची किंवा त्याच्या सबग्राफवर सिग्नल मिंट करण्यासाठी त्याची GRT L2वर हस्तांतरित करण्याची परवानगी आहे. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### तुम्ही आपले सबग्राफ L2 वर हस्तांतरित केल्यानंतर पुन्हा Ethereum मुख्य नेटवर्कवर परत करू शकता का? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? 
-स्थानांतरित केल्यानंतर, आपल्या आर्बिट्रम वनवरच्या सबग्राफची एथेरियम मुख्यनेट आवृत्ती विकलप म्हणून दिली जाईल. आपल्याला मुख्यनेटवर परत जाण्याची इच्छा आहे किंवा, आपल्याला मुख्यनेटवर परत जाण्याची इच्छा आहे तर आपल्याला पुन्हा डिप्लॉय आणि प्रकाशित करण्याची आवश्यकता आहे. परंतु आर्बिट्रम वनवर परत गेल्याच्या बदलाच्या दिल्लाला मुख्यनेटवरील सूचना पूर्णपणे त्यात दिलेली आहे. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### माझे हस्तांतरण पूर्ण करण्यासाठी मला ब्रिज्ड ETH का आवश्यक आहे? @@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans \*आवश्यक असल्यास - उदा. तुम्ही एक कॉन्ट्रॅक्ट पत्ता वापरत आहात. -### मी क्युरेट केलेला सबग्राफ L2 वर गेला असल्यास मला कसे कळेल? +### How will I know if the Subgraph I curated has moved to L2? -सबग्राफ तपशील पृष्ठाची पाहणी केल्यास, एक बॅनर आपल्याला सूचित करेल की हा सबग्राफ स्थानांतरित केलेला आहे. आपल्याला सुचवल्यास, आपल्या पुरवणीचे स्थानांतरण करण्यासाठी प्रॉम्प्ट अनुसरण करू शकता. आपल्याला ह्या माहितीला सापडण्याची किंवा स्थानांतरित केलेल्या कोणत्याही सबग्राफच्या तपशील पृष्ठावर मिळवू शकता. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### मी माझे क्युरेशन L2 वर हलवू इच्छित नसल्यास काय करावे? -कोणत्याही सबग्राफला प्राकृतिक रितीने प्रतिसादित केल्यानंतर, आपल्याला आपल्या सिग्नलला वापरून घेण्याची पर्वाह आहे. तसेच, आपल्याला जर सबग्राफ L2 वर हस्तांतरित केलेला असेल तर, आपल्याला आपल्या सिग्नलला ईथेरियम मेननेटवरून वापरून घेण्याची किंवा L2 वर सिग्नल पाठवण्याची पर्वाह आहे. 
+When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### माझे क्युरेशन यशस्वीरित्या हस्तांतरित झाले हे मला कसे कळेल? L2 हस्तांतरण साधन सुरू केल्यानंतर, संकेत तपशील २० मिनिटांनंतर Explorer मध्ये पहिल्या दिशेने प्रवेशक्षम होईल. -### किंवा तुम्ही एकापेक्षा अधिक सबग्राफवर एकावेळी आपल्या कुरेशनची हस्तांतरण करू शकता का? +### Can I transfer my curation on more than one Subgraph at a time? यावेळी मोठ्या प्रमाणात हस्तांतरण पर्याय नाही. @@ -266,7 +266,7 @@ L2 स्थानांतरण उपकरणाने आपल्याच ### माझ्या शेअर्स हस्तांतरित करण्यापूर्वी मला Arbitrum वर सूचीबद्ध करण्याची आवश्यकता आहे का? -आपल्याला स्वारूपण ठरविण्यापूर्वीच आपले स्टेक प्रभावीपणे स्थानांतरित करू शकता, परंतु L2 वर कोणत्या उत्पादनाची मागणी करण्याची अनुमती नसेल तोंद, ते लागू करण्यास आपल्याला L2 वरील सबग्राफ्सला आवंटन देण्याची, त्यांची सूचीबद्धीकरण करण्याची आणि POIs प्रस्तुत करण्याची आवश्यकता आहे, ते तुम्ही L2 वर कोणत्याही प्रामोड पावण्याच्या पर्यायी नसेल. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### मी माझा इंडेक्सिंग स्टेक हलवण्यापूर्वी प्रतिनिधी त्यांचे प्रतिनिधी हलवू शकतात का? diff --git a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx index cb0215fe9cd0..32e1b7fc75f3 100644 --- a/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/mr/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## तुमचा सबग्राफ आर्बिट्रम (L2) वर कसा हस्तांतरित करायचा +## How to transfer your Subgraph to Arbitrum (L2) -## तुमचे सबग्राफ हस्तांतरित करण्याचे फायदे +## Benefits of transferring your Subgraphs मागील वर्षापासून, The Graph चे समुदाय आणि मुख्य डेव्हलपर [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)करीत होते त्याच्या गोष्टीसाठी आर्बिट्रमवर जाण्याची. आर्बिट्रम, एक श्रेणी 2 किंवा "L2" ब्लॉकचेन, ईथेरियमकिडून सुरक्षा अनुभवतो परंतु काही लोअर गॅस फी प्रदान करतो. -जेव्हा तुम्ही आपल्या सबग्राफला The Graph Network वर प्रकाशित किंवा अपग्रेड करता तेव्हा, तुम्ही प्रोटोकॉलवरच्या स्मार्ट कॉन्ट्रॅक्ट्ससोबत संवाद साधता आहात आणि हे ईथ वापरून गॅससाठी पैसे देता येतात. आर्बिट्रमवर तुमच्या सबग्राफला हल्लीक अपडेट्सची आवश्यकता असल्यामुळे आपल्याला खूप कमी गॅस फी परतण्यात आलेली आहे. या कमी फीस, आणि लोअर करण्याची बंद पट आर्बिट्रमवर असल्याचे, तुमच्या सबग्राफवर इतर क्युरेटरसाठी सुविधा असताना तुमच्या सबग्राफवर कुणासही क्युरेशन करणे सोपे होते, आणि तुमच्या सबग्राफवर इंडेक्सरसाठी पुरस्कारांची वाढ होतील. या किमतीसवर्गीय वातावरणात इंडेक्सरसाठी सबग्राफला सूचीबद्ध करणे आणि सेव करणे सोपे होते. आर्बिट्रमवर इंडेक्सिंग पुरस्कारे आणि ईथेरियम मेननेटवर किमतीची वाढ होणारी आहेत, आणि यामुळे अगदी अधिक इंडेक्सरस त्याची स्थानिकता हस्तांतरित करत आहेत आणि त्यांचे ऑपरेशन्स L2 वर स्थापित करत आहेत.". +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## सिग्नल, तुमचा L1 सबग्राफ आणि क्वेरी URL सह काय होते हे समजून घेणे +## Understanding what happens with signal, your L1 Subgraph and query URLs -सबग्राफला आर्बिट्रमवर हस्तांतरित करण्यासाठी, आर्बिट्रम GRT सेतूक वापरला जातो, ज्याच्या परत आर्बिट्रमच्या मूळ सेतूकाचा वापर केला जातो, सबग्राफला L2 वर पाठवण्यासाठी. "हस्तांतरण" मुख्यनेटवर सबग्राफची वैल्यू कमी करणारा आहे आणि सेतूकाच्या ब्रिजच्या माध्यमातून लॉकल 2 वर सबग्राफ पुन्हा तयार करण्याची माहिती पाठवण्यात आली आहे. त्यामुळे हा "हस्तांतरण" मुख्यनेटवरील सबग्राफला अस्तित्वातून टाकेल आणि त्याची माहिती ब्रिजवार L2 वर पुन्हा तयार करण्यात आली आहे. हस्तांतरणात सबग्राफ मालकाची संकेतित GRT समाविष्ट केली आहे, ज्याची उपसंकेतित GRT मूळ सेतूकाच्या ब्रिजकडून हस्तांतरित करण्यासाठी जास्तीत जास्त शून्यापेक्षा असणे आवश्यक आहे. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -जेव्हा तुम्ही सबग्राफला हस्तांतरित करण्याची निवड करता, हे सबग्राफचे सर्व क्युरेशन सिग्नल GRT मध्ये रूपांतरित होईल. ह्याचे मुख्यनेटवर "अप्रामाणिक" घेण्याच्या अर्थाने आहे. तुमच्या क्युरेशनसह संबंधित GRT सबग्राफसह पाठवली जाईल, त्यामुळे त्यांचा L2 वर पाठवला जाईल, त्यातून त्यांचा नमूद कुंडला तयार केला जाईल. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. 
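The transfer mechanics described above — all curation signal converted to GRT at transfer time, with the bridge rejecting the transfer unless the owner's signaled GRT is more than zero — can be sketched as a toy model. This is illustrative only; the function and field names here are hypothetical and do not correspond to the actual GNS or bridge contract interfaces:

```python
def start_subgraph_transfer(owner_signal_grt: float, total_signal_grt: float) -> dict:
    """Toy model of what transferring a Subgraph to L2 does with curation signal.

    The bridge only accepts the transfer if the owner has signaled a nonzero
    amount of GRT; at that moment, all curation signal is converted to GRT.
    """
    if owner_signal_grt <= 0:
        # Mirrors the documented precondition: the owner's signaled GRT
        # "must be more than zero for the bridge to accept the transfer".
        raise ValueError("bridge rejects transfer: owner has no signal on the Subgraph")
    return {
        # The owner's GRT travels to L2 with the Subgraph and mints signal there.
        "grt_sent_to_l2": owner_signal_grt,
        # Other Curators' GRT remains claimable: they can withdraw it
        # or transfer it to L2 themselves.
        "grt_claimable_by_other_curators": total_signal_grt - owner_signal_grt,
    }
```

For example, an owner who signaled 1 GRT on a Subgraph with 10 GRT of total signal would send 1 GRT to L2, leaving 9 GRT claimable by the remaining Curators.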
-इतर क्युरेटरस स्वत: त्यांच्या भागाचा GRT परत घेण्याची किंवा त्याच्या एकल सबग्राफवर त्यांच्या सिग्नल तयार करण्यासाठी हस्तांतरित करण्याची पर्वानगी देऊ शकतात. जर सबग्राफ मालक त्याच्या सबग्राफला L2 वर हस्तांतरित करत नसता आणि त्याच्या कॉन्ट्रॅक्ट कॉलद्वारे मौना करतो, तर क्युरेटरसला सूचना दिली जाईल आणि त्यांना आपल्याच्या क्युरेशनची परवानगी वापरून परत घेतली जाईल. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -सबग्राफ हस्तांतरित केल्यानंतर, क्युरेशन सर्व GRT मध्ये रूपांतरित केल्यामुळे इंडेक्सरसला सबग्राफच्या इंडेक्सिंगसाठी पुरस्कार मिळवत नाही. परंतु, 24 तासांसाठी हस्तांतरित केलेल्या सबग्राफवर सेवा देणारे इंडेक्सर असतील आणि 2) L2 वर सबग्राफची इंडेक्सिंग प्रारंभ करतील. ह्या इंडेक्सरसांच्या पासून आधीपासूनच सबग्राफची इंडेक्सिंग आहे, म्हणून सबग्राफ सिंक होण्याची वाटचाल नसल्याची आवश्यकता नसून, आणि L2 सबग्राफची क्वेरी करण्यासाठी त्याच्यासाठी वाटचाल नसेल. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -L2 सबग्राफला क्वेरीसाठी वेगवेगळे URL वापरण्याची आवश्यकता आहे ('arbitrum-gateway.thegraph.com' वरील), परंतु L1 URL किमान 48 तासांसाठी काम करणार आहे. त्यानंतर, L1 गेटवे वेगवेगळ्या क्वेरीला L2 गेटवेला पुर्वानुमान देईल (काही कालावधीसाठी), परंतु त्यामुळे द्रुतिकरण वाढतो, म्हणजे तुमच्या क्वेरीस सर्व किंवा नवीन URL वर स्विच करणे शक्य आहे. 
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## तुमचे L2 वॉलेट निवडत आहे -तुम्ही तुमच्या सबग्राफची मेननेटवर प्रकाशित केल्यास, तुम्ही सबग्राफ तयार करण्यासाठी एक संयुक्त केलेल्या वॉलेटचा वापर केला होता, आणि हा वॉलेट हा सबग्राफ प्रतिनिधित्व करणारा NFT मिळवतो, आणि तुम्हाला अपडेट प्रकाशित करण्याची परवानगी देतो. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -सबग्राफ आर्बिट्रममध्ये हस्तांतरित करताना, तुम्ही वेगळे वॉलेट निवडू शकता जे L2 वर या सबग्राफ NFT चे मालक असेल. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. आपल्याला "सामान्य" वॉलेट वापरत आहे किंवा MetaMask (एक बाह्यिकपणे मालकीत खाता किंवा EOA, अर्थात स्मार्ट कॉन्ट्रॅक्ट नसलेला वॉलेट), तर ह्या निवडनीय आहे आणि L1 मध्ये असलेल्या समान मालकीचे पत्ते ठेवणे शिफारसले जाते. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. 
If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**तुम्हाला एक वॉलेट पत्ता वापरण्याची महत्त्वाची आहे ज्याच्या तुम्ही नियंत्रण असता आणि त्याने Arbitrum वर व्यवहार करू शकतो. अन्यथा, सबग्राफ गमावला जाईल आणि त्याची पुनर्प्राप्ती केली जाऊ शकणार नाही.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## हस्तांतरणाची तयारी: काही ETH ब्रिजिंग -सबग्राफला हस्तांतरित करण्यासाठी एक ट्रॅन्झॅक्शन सेंड करण्यात आल्यामुळे ब्रिजद्वारे एक ट्रॅन्झॅक्शन आणि नंतर आर्बिट्रमवर दुसर्या ट्रॅन्झॅक्शन चालवावा लागतो. पहिल्या ट्रॅन्झॅक्शनमध्ये मुख्यनेटवर ETH वापरले जाते, आणि L2 वर संदेश प्राप्त होण्यात आल्यावर गॅस देण्यासाठी काही ETH समाविष्ट केले जाते. हेच गॅस कमी असल्यास, तर तुम्ही ट्रॅन्झॅक्शन पुन्हा प्रयत्न करून लॅटन्सीसाठी त्याच्यावर थेट पैसे द्यायला हवे, त्याच्यामुळे हे "चरण 3: हस्तांतरणाची पुष्टी करणे" असते (खालीलपैकी). ह्या कदाचित्का **तुम्ही हस्तांतरण सुरू केल्याच्या 7 दिवसांच्या आत** हे प्रक्रिया पुर्ण करणे आवश्यक आहे. इतरत्र, दुसऱ्या ट्रॅन्झॅक्शन ("चरण 4: L2 वर हस्तांतरण समाप्त करणे") ही आपल्याला खासगी आर्बिट्रमवर आणण्यात आली आहे. ह्या कारणांसाठी, तुम्हाला किमानपर्यंत काही ETH आवश्यक आहे, एक मल्टीसिग किंवा स्मार्ट कॉन्ट्रॅक्ट खात्याच्या आवश्यक आहे, ETH रोजच्या (EOA) वॉलेटमध्ये असणे आवश्यक आहे, ज्याचा तुम्ही ट्रॅन्झॅक्शन चालवण्यासाठी वापरता, मल्टीसिग वॉलेट स्वत: नसतो. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. तुम्ही किमानतरी एक्सचेंजेसवर ETH खरेदी करू शकता आणि त्याच्यामध्ये सीधे Arbitrum वर विद्यमान ठेवू शकता, किंवा तुम्ही Arbitrum ब्रिजवापरून ETH मुख्यनेटवरील एक वॉलेटपासून L2 वर पाठवू शकता: bridge.arbitrum.io. आर्बिट्रमवर गॅस फीस खूप कमी आहेत, म्हणजे तुम्हाला फक्त थोडेसे फक्त आवश्यक आहे. तुमच्या ट्रॅन्झॅक्शनसाठी मंजूरी मिळविण्यासाठी तुम्हाला किमान अंतरावर (उदा. 0.01 ETH) सुरुवात करणे शिफारसले जाते. -## सबग्राफ ट्रान्सफर टूल शोधत आहे +## Finding the Subgraph Transfer Tool -तुम्ही सबग्राफ स्टुडिओवर तुमच्या सबग्राफचे पेज पाहता तेव्हा तुम्हाला L2 ट्रान्सफर टूल सापडेल: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -हे तयार आहे Explorer वर, आपल्याला जर तुमच्याकडून एक सबग्राफच्या मालकीची वॉलेट असेल आणि Explorer सह कनेक्ट केले तर, आणि त्या सबग्राफच्या पृष्ठावर Explorer वरून मिळवू शकता: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 1: हस्तांतरण सुरू करत आहे -हस्तांतरण सुरू करण्यापूर्वी, तुम्ही L2 वर सबग्राफच्या मालकपत्रक्षयक्षमतेचे निर्णय करावे लागेल (वरील "तुमच्या L2 वॉलेटची निवड" पहा), आणि आपल्याला आर्बिट्रमवर पुर्न ठेवण्यासाठी आधीपासून काही ETH असणे अत्यंत शिफारसले जाते (वरील "हस्तांतरण साठी प्राप्ती करणे: काही ETH हस्तांतरित करणे" पहा). 
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-कृपया लक्षात घ्या की सबग्राफ हस्तांतरित करण्यासाठी सबग्राफवर आपल्याला त्याच्या मालकपत्रक्षयक्षमतेसह अगदीच सिग्नल असावे; जर तुम्हाला सबग्राफवर सिग्नल केलेलं नसलं तर तुम्हाला थोडीसी क्युरेशन वाढवावी (एक थोडीसी असांतर किंवा 1 GRT आढवंच काही आहे).
+Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-हस्तांतरण साधन उघडण्यात आल्यावर, तुम्ही "प्राप्ति वॉलेट पत्ता" क्षेत्रात L2 वॉलेट पत्ता भरू शकता - **तुम्ही येथे योग्य पत्ता नोंदवला आहे हे खात्री करा**. सबग्राफ हस्तांतरित करण्याच्या वर्तमानीत तुम्ही आपल्या वॉलेटवर ट्रॅन्झॅक्शन सुरू करण्याच्या आवश्यकता आहे (लक्षात घ्या की L2 गॅससाठी काही ETH मूळ आहे); हे हस्तांतरणाच्या प्रक्रियेचे सुरूवात करेल आणि आपल्या L1 सबग्राफला कमी करेल (अद्यतनसाठी "सिग्न.
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-जर तुम्ही हे कदम पूर्ण करता आहात, नुकसान होऊ नये हे सुनिश्चित करा की 7 दिवसांपेक्षा कमी वेळेत पुन्हा आपल्या क्रियान्वयनाचा तपास करा, किंवा सबग्राफ आणि तुमच्या सिग्नल GRT नष्ट होईल.
हे त्याच्या कारणे आहे की आर्बिट्रमवर L1-L2 संदेशाचा कसा काम करतो: ब्रिजद्वारे पाठवलेले संदेश "पुन्हा प्रयत्नीय पर्यायपत्रे" आहेत ज्याचा क्रियान्वयन 7 दिवसांच्या आत अंदाजपत्री केला पाहिजे, आणि सुरुवातीचा क्रियान्वयन, आर्बिट्रमवर गॅस दरात वाढ असल्यास, पुन्हा प्रयत्न करण्याची आवश्यकता असेल. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## पायरी 2: सबग्राफ L2 वर येण्याची वाट पाहत आहे +## Step 2: Waiting for the Subgraph to get to L2 -तुम्ही हस्तांतरण सुरू केल्यानंतर, तुमच्या L1 सबग्राफला L2 वर हस्तांतरित करण्याचे संदेश Arbitrum ब्रिजद्वारे प्रसारित होणे आवश्यक आहे. हे किंवा. 20 मिनिटे लागतात (ब्रिज त्या व्यक्तिमत्वीकृत आहे की L1 मेननेट ब्लॉक जो लेनदार चेन reorgs साठी "सुरक्षित" आहे, त्यातील संदेश किंवा लेनदार चेन reorgs साठी "सुरक्षित" आहे, त्यातील संदेश होऊन जातो). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). ही प्रतीक्षा वेळ संपल्यानंतर, आर्बिट्रम L2 करारांवर हस्तांतरण स्वयं-अंमलबजावणी करण्याचा प्रयत्न करेल. @@ -80,7 +80,7 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 3: हस्तांतरणाची पुष्टी करणे -अधिकांश प्रकरणात, आपल्याला प्राथमिकपणे संघटित ल2 गॅस असेल, ज्यामुळे सबग्राफला आर्बिट्रम कॉन्ट्रॅक्टवर प्राप्त करण्याच्या ट्रॅन्झॅक्शनची स्वत: क्रियारत झाली पाहिजे. कितीतरी प्रकरणात, आर्बिट्रमवर गॅस दरात वाढ असल्यामुळे ह्या स्वत: क्रियान्वितीत अयशस्वीता आपल्याला काहीतरी किंवा काहीतरी संभावना आहे. 
ह्या प्रकारे, आपल्या सबग्राफला L2 वर पाठवण्याच्या "पर्यायपत्रास" क्रियारत बसण्यासाठी अपूर्ण ठरेल आणि 7 दिवसांच्या आत पुन्हा प्रयत्न करण्याची आवश्यकता आहे. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. असे असल्यास, तुम्हाला आर्बिट्रमवर काही ETH असलेले L2 वॉलेट वापरून कनेक्ट करावे लागेल, तुमचे वॉलेट नेटवर्क आर्बिट्रमवर स्विच करा आणि व्यवहाराचा पुन्हा प्रयत्न करण्यासाठी "हस्तांतरण पुष्टी करा" वर क्लिक करा. @@ -88,33 +88,33 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## पायरी 4: L2 वर हस्तांतरण पूर्ण करणे -आता, आपला सबग्राफ आणि GRT आर्बिट्रमवर प्राप्त झालेले आहेत, परंतु सबग्राफ अद्याप प्रकाशित झालेला नाही. आपल्याला प्राप्ति वॉलेटसाठी निवडलेल्या L2 वॉलेटशी कनेक्ट करण्याची आवश्यकता आहे, आपला वॉलेट नेटवर्क आर्बिट्रमवर स्विच करण्याची आणि "पब्लिश सबग्राफ" वर क्लिक करण्याची आवश्यकता आहे +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -हे सबग्राफ प्रकाशित करेल आहे, त्यामुळे त्याचे सेवन करणारे इंडेक्सर्स आर्बिट्रमवर संचालित आहेत, आणि त्यामुळे ला ट्रान्सफर केलेल्या GRT वापरून संवाद सिग्नल क्युरेशन निर्माणित केले जाईल. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1.

## पायरी 5: क्वेरी URL अपडेट करत आहे

-तुमचा सबग्राफ आर्बिट्रममध्ये यशस्वीरित्या हस्तांतरित केला गेला आहे! सबग्राफची क्वेरी करण्यासाठी, नवीन URL असेल:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-लक्षात घ्या की आर्बिट्रमवर सबग्राफचे ID मुख्यनेटवर आपल्याला आहे आणि त्याच्या परिपर्यंत आपल्याला आर्बिट्रमवर आहे, परंतु आपल्याला वेगवेगळा सबग्राफ ID असेल, परंतु तुम्ही सदैव तो Explorer किंवा Studio वर शोधू शकता. उपरोक्त (वरील "सिग्नलसह, आपल्या L1 सबग्राफसह आणि क्वेरी URLसह काय करता येईल" पहा) म्हणजे पुराणे L1 URL थोडेसे वेळाने समर्थित राहील, परंतु आपल्याला सबग्राफ L2 वर सिंक केल्यानंतर आपल्या क्वेरीजला त्वरित नवीन पत्ता देणे शिफारसले जाते.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## तुमचे क्युरेशन आर्बिट्रम (L2) वर कसे हस्तांतरित करावे

## L2 मध्ये सबग्राफ ट्रान्सफरवरील क्युरेशनचे काय होते हे समजून घेणे

-सबग्राफच्या मालकाने सबग्राफला आर्बिट्रमवर हस्तांतरित केल्यास, सर्व सबग्राफच्या सिग्नलला एकाच वेळी GRT मध्ये रूपांतरित केला जातो. ही "ऑटो-माइग्रेटेड" सिग्नलसाठी लागू होते, अर्थात सबग्राफाच्या कोणत्याही संस्करण किंवा डिप्लॉयमेंटसाठी नसलेली सिग्नल किंवा नवीन संस्करणाच्या आधीच्या सबग्राफच्या आवृत्तीस पुरावीत केली जाते.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-सिग्नलपासून GRTमध्ये असे रूपांतरण होण्याचे त्याचे आपल्याला उदाहरण दिले आहे ज्याच्यासाठी जर सबग्राफमालक सबग्राफला L1मध्ये पुरावा दिला तर. सबग्राफ विकल्प किंवा हस्तांतरित केला जाता तेव्हा सर्व सिग्नलला समयानुसार "दहन" केला जातो (क्युरेशन बोंडिंग कर्वच्या वापराने) आणि निकाललेल्या GRTने GNS स्मार्ट कॉन्ट्रॅक्टने (जो सबग्राफ अपग्रेड्स आणि ऑटो-माइग्रेटेड सिग्नलच्या व्यवस्थापनासाठी जबाबदार आहे) साठवलेले आहे. प्रत्येक क्युरेटरने त्या सबग्राफसाठी कितीशेअर्स आहेत त्या प्रमाणे त्याच्याकडे गणना असते, आणि त्यामुळे त्याच्या शेअर्सचा GRTचा दावा असतो.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-सबग्राफ मालकाशी संबंधित या GRT चा एक अंश सबग्राफसह L2 ला पाठविला जातो.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-आत्ताच, संशोधित GRTमध्ये कोणतीही अधिक क्वेरी फीस घटना आहे नसून, क्युरेटर्सला आपली GRT वापरण्याची किंवा त्याची L2वर त्याच्या आपल्या वर्णनासाठी हस्तांतरित करण्याची पर्वानगी आहे, ज्याच्या माध्यमातून नवीन क्युरेशन सिग्नल तयार केला जाऊ शकतो. हे करण्यासाठी त्वरित किंवा अनिश्चित काळासाठी कोणतीही जरूरत नाही कारण GRT अनश्वास पाहिजे आणि प्रत्येकाला त्याच्या शेअर्सच्या प्रमाणानुसार एक निश्चित वस्तु मिळणार आहे, कोणत्या वेळीही.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## तुमचे L2 वॉलेट निवडत आहे

@@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho

हस्तांतरण सुरू करण्यापूर्वी, तुम्ही त्याच्या L2 वर क्युरेशनचा मालक होणारा पत्ता निवडणे आवश्यक आहे (वरील "तुमच्या L2 वॉलेटची निवड" पाहा), आणि आर्बिट्रमवर संदेशाच्या क्रियान्वयनाचा पुन्हा प्रयत्न केल्यास लागणारे गॅससाठी काही ETH आधीच्या पुलाकीत सांडलेले असले पर्याय सुरुवातीच्या वेळी किंवा पुन्हा प्रयत्नीय पर्यायसाठी. आपल्याला काही एक्सचेंजवरून ETH खरेदी करून त्याची तुमच्या आर्बिट्रमवर स्थानांतरित करून सुरू आहे, किंवा आपल्याला मुख्यनेटवरून L2 वर ETH पाठवण्याच्या आर्बिट्रम ब्रिजचा वापर करून किंवा ETH खरेदी करून L2 वर पाठवण्याच्या कामाकरीत करण्याची शक्यता आहे: [bridge.arbitrum.io](http://bridge.arbitrum.io)- आर्बिट्रमवर गॅस दरात तोंड असल्यामुळे, तुम्हाला केवळ किंवा 0.01 ETH ची किंमत दरम्यानची आवश्यकता असेल.

-आपल्याला संवादित केलेल्या सबग्राफ्टला L2 वर हस्तांतरित केले आहे तर, आपल्याला एक संदेश दिलेला जाईल ज्याच्या माध्यमातून Explorer वरून आपल्याला सांगण्यात येईल की आपण हस्तांतरित सबग्राफ्टच्या संवादनी आहात.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.

-सबग्राफ्ट पेज पाहताना, आपण संवादनाची पुनर्प्राप्ती किंवा हस्तांतरित करण्याचा निवड करू शकता. "Transfer Signal to Arbitrum" वर क्लिक केल्यास, हस्तांतरण साधने उघडतील.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
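The proportional claim described above — each Curator can redeem a slice of the GNS-held GRT matching the curation shares they held, whenever they choose — is simple arithmetic. A minimal sketch, with a hypothetical function name (the real accounting lives in the GNS smart contract):

```python
def curator_grt_claim(curator_shares: float, total_shares: float,
                      grt_held_by_gns: float) -> float:
    """GRT claimable by one Curator after a Subgraph is deprecated or transferred.

    All signal is burned to GRT at once; each Curator's claim is proportional
    to the shares they held, regardless of when they redeem it.
    """
    if total_shares <= 0:
        raise ValueError("no curation shares outstanding")
    return grt_held_by_gns * curator_shares / total_shares
```

So a Curator holding 25 of 100 shares on a Subgraph whose burned signal yielded 400 GRT could claim 100 GRT, whether they withdraw on L1 or transfer it to mint signal on L2.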
![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho ## L1 वर तुमचे क्युरेशन मागे घेत आहे -जर आपल्याला आपल्या GRT ला L2 वर पाठवायचं आवडत नसलं तर किंवा आपल्याला GRT ला मॅन्युअली ब्रिज करण्याची प्राथमिकता आहे, तर आपल्याला L1 वरील आपल्या क्युरेटेड GRT ला काढून घ्यायला दिले आहे. सबग्राफच्या पृष्ठाच्या बॅनरवरून "Withdraw Signal" निवडा आणि व्यवस्थापन प्रक्रियेची पुष्टी करा; GRT आपल्या क्युरेटर पत्त्याला पाठविला जाईल. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/mr/archived/sunrise.mdx b/website/src/pages/mr/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/mr/archived/sunrise.mdx +++ b/website/src/pages/mr/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. 
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. 
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. 
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? 
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. 
The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/mr/global.json b/website/src/pages/mr/global.json index b57692ddb6cf..9f39ea376ca1 100644 --- a/website/src/pages/mr/global.json +++ b/website/src/pages/mr/global.json @@ -6,6 +6,7 @@ "subgraphs": "सबग्राफ", "substreams": "उपप्रवाह", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "वर्णन", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "वर्णन", + "liveResponse": "Live Response", + "example": "उदाहरण" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/mr/index.json b/website/src/pages/mr/index.json index 3bd097a42eef..add2f95c68b0 100644 --- a/website/src/pages/mr/index.json +++ b/website/src/pages/mr/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "सबग्राफ", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Supported Networks", + "details": "Network Details", + "services": "Services", + "type": "प्रकार", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Docs", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "सबग्राफ", + "substreams": "उपप्रवाह", + "firehose": "फायरहोस", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "उपप्रवाह", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Billing", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/mr/indexing/chain-integration-overview.mdx b/website/src/pages/mr/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/mr/indexing/chain-integration-overview.mdx +++ b/website/src/pages/mr/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/mr/indexing/new-chain-integration.mdx b/website/src/pages/mr/indexing/new-chain-integration.mdx index e45c4b411010..c401fa57b348 100644 --- a/website/src/pages/mr/indexing/new-chain-integration.mdx +++ b/website/src/pages/mr/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.

diff --git a/website/src/pages/mr/indexing/overview.mdx b/website/src/pages/mr/indexing/overview.mdx
index 0113721170dd..ce44b1b7a1d4 100644
--- a/website/src/pages/mr/indexing/overview.mdx
+++ b/website/src/pages/mr/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i

प्रोटोकॉलमध्ये स्टॅक केलेले GRT वितळण्याच्या कालावधीच्या अधीन आहे आणि जर इंडेक्सर्स दुर्भावनापूर्ण असतील आणि ऍप्लिकेशन्सना चुकीचा डेटा देत असतील किंवा ते चुकीच्या पद्धतीने इंडेक्स करत असतील तर ते कमी केले जाऊ शकतात. इंडेक्सर्स नेटवर्कमध्ये योगदान देण्यासाठी डेलिगेटर्सकडून डेलिगेटेड स्टेकसाठी बक्षिसे देखील मिळवतात.

-इंडेक्सर्स सबग्राफच्या क्युरेशन सिग्नलच्या आधारे इंडेक्समध्ये सबग्राफ निवडतात, जिथे क्यूरेटर्स जीआरटी घेतात जेणेकरून कोणते सबग्राफ उच्च-गुणवत्तेचे आहेत आणि त्यांना प्राधान्य दिले पाहिजे. ग्राहक (उदा. ऍप्लिकेशन्स) मापदंड देखील सेट करू शकतात ज्यासाठी इंडेक्सर्स त्यांच्या सबग्राफसाठी क्वेरी प्रक्रिया करतात आणि क्वेरी शुल्क किंमतीसाठी प्राधान्ये सेट करतात.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. 
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. 
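The setup tiers above can be read as a rough capacity ladder. As an illustrative sketch only — the thresholds below are loose extrapolations from the tier descriptions (e.g. "Medium" at ~100 Subgraphs and 200-500 requests per second), not official guidance:

```python
# Illustrative only: maps an expected load to one of the setup tiers described above.
# Thresholds are assumptions extrapolated from the tier descriptions, not official guidance.
def suggest_setup_tier(subgraph_count: int, requests_per_second: int) -> str:
    if subgraph_count <= 10 and requests_per_second <= 50:
        return "Small"     # enough to get started, will likely need expansion
    if subgraph_count <= 50 and requests_per_second <= 200:
        return "Standard"  # the default setup used in the example k8s/terraform manifests
    if subgraph_count <= 100 and requests_per_second <= 500:
        return "Medium"    # production Indexer scale
    return "Large"         # all currently used Subgraphs and the related traffic
```

In practice, sizing depends heavily on which Subgraphs are indexed, so any mapping like this is only a starting point to revise against real monitoring data.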
-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small    | 4                    | 8                             | 1                           | 4               | 16                       |
+| Standard | 8                    | 30                            | 1                           | 12              | 48                       |
+| Medium   | 16                   | 64                            | 2                           | 32              | 64                       |
+| Large    | 72                   | 468                           | 3.5                         | 48              | 184                      |

### What are some basic security precautions an Indexer should take?

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.

- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.

@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer

#### आलेख नोड

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose                                         | Routes                                         | CLI Argument       | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port       | -                    |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions)    | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port         | -                    |
+| 8020 | JSON-RPC<br />(for managing deployments)        | /                                              | \--admin-port      | -                    |
+| 8030 | Subgraph indexing status API                    | /graphql                                       | \--index-node-port | -                    |
+| 8040 | Prometheus metrics                              | /metrics                                       | \--metrics-port    | -                    |

#### Indexer Service

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose                                              | Routes                                                      | CLI Argument    | Environment Variable   |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port         | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics                                   | /metrics                                                    | \--metrics-port | -                      |
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes Subgraph queries on to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.

- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

@@ -525,7 +525,7 @@ graph indexer status

#### Indexer management using Indexer CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] <deployment-id>` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] <deployment-id>` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] <deployment-id>` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
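The threshold comparison described above can be sketched in a few lines. This is illustrative only, not indexer-agent code, and the network field names (`stake`, `signal`, `avgQueryFees`) are hypothetical stand-ins for the values the agent fetches from the network:

```python
# Illustrative sketch of `decisionBasis: rules` evaluation (not indexer-agent
# code): compare each non-null threshold on a rule against network values for
# the corresponding deployment. Field names are hypothetical.
def matches_rule(rule, network):
    checks = {
        "minStake": lambda limit, n: n["stake"] >= limit,
        "minSignal": lambda limit, n: n["signal"] >= limit,
        "maxSignal": lambda limit, n: n["signal"] <= limit,
        "minAverageQueryFees": lambda limit, n: n["avgQueryFees"] >= limit,
    }
    # A deployment is chosen for indexing when any configured threshold is met.
    return any(
        check(rule[name], network)
        for name, check in checks.items()
        if rule.get(name) is not None
    )

# Global rule with a minStake of 5 (GRT): a deployment with 10 GRT of stake
# allocated to it is chosen.
rule = {"minStake": 5}
print(matches_rule(rule, {"stake": 10, "signal": 0, "avgQueryFees": 0}))  # prints True
```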
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically.

diff --git a/website/src/pages/mr/indexing/supported-network-requirements.mdx b/website/src/pages/mr/indexing/supported-network-requirements.mdx
index a1a9e0338649..950cc0ef64ee 100644
--- a/website/src/pages/mr/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/mr/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@
title: Supported Network Requirements
---

-| Network | Guides | System Requirements | Indexing Rewards |
-| --- | --- | --- | :-: |
-| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br />
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| हिमस्खलन | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| इथरियम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| फॅन्टम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| आशावाद | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| बहुभुज | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| हिमस्खलन | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| इथरियम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| फॅन्टम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| आशावाद | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| बहुभुज | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ |

diff --git a/website/src/pages/mr/indexing/tap.mdx b/website/src/pages/mr/indexing/tap.mdx
index f6248123d886..dd5401d6e9d5 100644
--- a/website/src/pages/mr/indexing/tap.mdx
+++ b/website/src/pages/mr/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---

-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.

## सविश्लेषण

-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:

- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.

-## Specifics
+### Specifics

-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments, **Receipts**, to a receiver and aggregates them into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/mr/indexing/tooling/graph-node.mdx b/website/src/pages/mr/indexing/tooling/graph-node.mdx index 30595816e62c..a85367e1c773 100644 --- a/website/src/pages/mr/indexing/tooling/graph-node.mdx +++ b/website/src/pages/mr/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: आलेख नोड --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## आलेख नोड -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).

### PostgreSQL डेटाबेस

-The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache.
+The main store for Graph Node: this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache and the eth_call cache.

### नेटवर्क क्लायंट

In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.

-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
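To check whether an endpoint offers these capabilities, an operator can send hand-built JSON-RPC probes (for example with curl). The snippet below is a hypothetical sketch that only constructs the request bodies; `rpc_probe` is not part of graph-node:

```python
import json

# Hypothetical helper (not part of graph-node): build the JSON-RPC request
# bodies an operator might POST to an endpoint to check client capabilities.
def rpc_probe(method, params):
    return json.dumps({"jsonrpc": "2.0", "method": method, "params": params, "id": 1})

# EIP-1898 archive check: an eth_call pinned to a historical block by number.
archive_probe = rpc_probe(
    "eth_call", [{"to": "0x" + "00" * 20}, {"blockNumber": "0x1"}]
)

# Trace support check, needed for callHandlers / call-filtered blockHandlers.
trace_probe = rpc_probe("trace_filter", [{"fromBlock": "0x1", "toBlock": "0x1"}])
print(trace_probe)
```

A JSON `result` in the response suggests the capability is present; a "method not found" error for `trace_filter` means the client cannot serve Subgraphs that need trace support.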
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### आयपीएफएस नोड्स -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### प्रोमिथियस मेट्रिक्स सर्व्हर @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## प्रगत ग्राफ नोड कॉन्फिगरेशन -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### एकाधिक ग्राफ नोड्स -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### डिप्लॉयमेंट नियम -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
उपयोजन नियम कॉन्फिगरेशनचे उदाहरण: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### एकाधिक नेटवर्क समर्थन -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process.
The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - एकाधिक नेटवर्क - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### ग्राफ नोडचे व्यवस्थापन -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### लॉगिंग -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
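The multiple-network provider setup described above can be sketched in `config.toml` roughly as follows. This is an illustrative sketch only: the labels and URLs are placeholders, and the exact schema should be checked against the Graph Node configuration docs.

```toml
# Illustrative sketch only; provider labels and URLs are placeholders.
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
provider = [
  # an archive node for calls that need historical state...
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive"] },
  # ...and a cheaper full node that Graph Node can prefer when a workload allows
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```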
See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### सबग्राफसह कार्य करणे +### Working with Subgraphs #### अनुक्रमणिका स्थिती API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
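As an illustration, a status query against port 8030 might look like the following sketch. Field names are taken from the published index-node schema and may differ between Graph Node versions, so verify against the schema linked below:

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError { message }
    chains {
      network
      latestBlock { number }
      chainHeadBlock { number }
    }
  }
}
```

Comparing `latestBlock` to `chainHeadBlock` is a quick way to see how far behind the chain head a deployment is.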
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### अयशस्वी सबग्राफ +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. 
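The retry-with-backoff behaviour described for non-deterministic failures can be illustrated with a small sketch. This is not Graph Node's actual code; `TransientError` and the delay values are hypothetical:

```python
import time

class TransientError(Exception):
    """Stand-in for a non-deterministic (retryable) failure, e.g. a provider hiccup."""

def retry_with_backoff(handler, base_delay=1.0, max_delay=64.0, sleep=time.sleep):
    """Retry `handler` on transient errors, doubling the delay each attempt."""
    delay = base_delay
    while True:
        try:
            return handler()
        except TransientError:
            sleep(delay)  # back off before the next attempt
            delay = min(delay * 2, max_delay)
```

A deterministic failure would surface as a different error type, escape the loop, and be treated as final, mirroring the distinction drawn above.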
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### ब्लॉक आणि कॉल कॅशे -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### समस्या आणि त्रुटींची चौकशी करणे -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### प्रश्नांचे विश्लेषण करत आहे -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### सबग्राफ काढून टाकत आहे +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
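As a sketch, a removal might look like the following invocation. The deployment hash and config path are placeholders; check the linked `graphman` documentation for the exact flags in your Graph Node version:

```bash
# Illustrative only: drop a deployment and its indexed data.
# The Qm... hash and config path below are placeholders, not real values.
graphman --config /etc/graph-node/config.toml drop QmPlaceholderDeploymentHash
```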
diff --git a/website/src/pages/mr/indexing/tooling/graphcast.mdx b/website/src/pages/mr/indexing/tooling/graphcast.mdx index 46e7c77e864d..966849766b7a 100644 --- a/website/src/pages/mr/indexing/tooling/graphcast.mdx +++ b/website/src/pages/mr/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Is there something you'd like to learn from or share with your fellow Indexers i ग्राफकास्ट SDK (सॉफ्टवेअर डेव्हलपमेंट किट) विकसकांना रेडिओ तयार करण्यास अनुमती देते, जे गॉसिप-शक्तीवर चालणारे अनुप्रयोग आहेत जे निर्देशांक दिलेल्या उद्देशासाठी चालवू शकतात. खालील वापराच्या प्रकरणांसाठी काही रेडिओ तयार करण्याचा आमचा मानस आहे (किंवा रेडिओ तयार करू इच्छिणाऱ्या इतर विकासकांना/संघांना समर्थन पुरवणे): -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### अधिक जाणून घ्या diff --git a/website/src/pages/mr/resources/benefits.mdx b/website/src/pages/mr/resources/benefits.mdx index 4ffee4b07761..128743e2c9ff 100644 --- a/website/src/pages/mr/resources/benefits.mdx +++ b/website/src/pages/mr/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | -| क्वेरी खर्च | $0+ | $0 per month | -| अभियांत्रिकी वेळ | दरमहा $400 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | 100,000 (Free Plan) | -| प्रति क्वेरी खर्च | $0 | $0 | -| Infrastructure | केंद्रीकृत | विकेंद्रित | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड $750+ | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $750+ | $0 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +| :--------------------------: | :-------------------------------------: | :----------------------------------------------------------------------: | +| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | +| क्वेरी खर्च | $0+ | $0 per month | +| अभियांत्रिकी वेळ | दरमहा $400 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | 100,000 (Free Plan) | +| प्रति क्वेरी खर्च | $0 | $0 | +| Infrastructure | केंद्रीकृत | विकेंद्रित | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड $750+ | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | -| क्वेरी खर्च | दरमहा $500 | $120 per month | -| अभियांत्रिकी वेळ | दरमहा $800 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | 
इन्फ्रा क्षमतांपुरती मर्यादित | ~3,000,000 | -| प्रति क्वेरी खर्च | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेंद्रित | -| अभियांत्रिकी खर्च | $200 प्रति तास | समाविष्ट | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $1,650+ | $120 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +| :--------------------------: | :----------------------------------------: | :----------------------------------------------------------------------: | +| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | +| क्वेरी खर्च | दरमहा $500 | $120 per month | +| अभियांत्रिकी वेळ | दरमहा $800 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~3,000,000 | +| प्रति क्वेरी खर्च | $0 | $0.00004 | +| Infrastructure | केंद्रीकृत | विकेंद्रित | +| अभियांत्रिकी खर्च | $200 प्रति तास | समाविष्ट | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | प्रति नोड, प्रति महिना $1100 | $0 | -| क्वेरी खर्च | $4000 | $1,200 per month | -| आवश्यक नोड्सची संख्या | 10 | लागू नाही | -| अभियांत्रिकी वेळ | दरमहा $6,000 किंवा अधिक | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~30,000,000 | -| प्रति क्वेरी खर्च | $0 | $0.00004 | -| Infrastructure | केंद्रीकृत | विकेंद्रित | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $11,000+ | $1,200 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +| :--------------------------: | :-----------------------------------------: | :----------------------------------------------------------------------: | +| मासिक सर्व्हर 
खर्च\* | प्रति नोड, प्रति महिना $1100 | $0 | +| क्वेरी खर्च | $4000 | $1,200 per month | +| आवश्यक नोड्सची संख्या | 10 | लागू नाही | +| अभियांत्रिकी वेळ | दरमहा $6,000 किंवा अधिक | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~30,000,000 | +| प्रति क्वेरी खर्च | $0 | $0.00004 | +| Infrastructure | केंद्रीकृत | विकेंद्रित | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $11,000+ | $1,200 | \*बॅकअपच्या खर्चासह: $50-$100 प्रति महिना @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -सबग्राफवर क्युरेटिंग सिग्नल हा पर्यायी एक-वेळचा, निव्वळ-शून्य खर्च आहे (उदा., $1k सिग्नल सबग्राफवर क्युरेट केला जाऊ शकतो आणि नंतर मागे घेतला जाऊ शकतो—प्रक्रियेत परतावा मिळविण्याच्या संभाव्यतेसह). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). 
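As a quick sanity check on the per-query figure quoted in the tables above, using only the listed monthly fees and query volumes:

```python
# Per-query cost implied by the quoted Graph Network pricing tiers.
medium = 120 / 3_000_000     # ~3M queries for $120/month
high = 1_200 / 30_000_000    # ~30M queries for $1,200/month
assert medium == high == 0.00004  # matches the $0.00004 shown in both tables
print(medium)
```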
## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/mr/resources/glossary.mdx b/website/src/pages/mr/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/mr/resources/glossary.mdx +++ b/website/src/pages/mr/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. 
For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. 
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. 
**Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. 
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. 
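The protocol parameters scattered through the glossary above — delegation capacity at 16x an Indexer's Self-Stake, and slashing at 2.5% of self-stake split evenly between the Fisherman and burning — reduce to simple arithmetic. A minimal sketch, with illustrative helper names that are not part of any Graph API:

```typescript
// Illustrative arithmetic only; helper names are not real Graph APIs.
// Delegation capacity is 16x an Indexer's Self-Stake (see glossary above).
function delegationCapacity(selfStakeGrt: number): number {
  return selfStakeGrt * 16;
}

// Slashing removes 2.5% of self-stake: half goes to the disputing
// Fisherman as a bounty, half is burned.
function slashingSplit(selfStakeGrt: number): { fisherman: number; burned: number } {
  const slashed = (selfStakeGrt * 25) / 1000; // 2.5%
  return { fisherman: slashed / 2, burned: slashed / 2 };
}

console.log(delegationCapacity(1_000_000)); // 16000000 — the 1M -> 16M example above
console.log(slashingSplit(1_000_000)); // { fisherman: 12500, burned: 12500 }
```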
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx index b989c4de4c11..3983adc51b62 100644 --- a/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/mr/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -हे सबग्राफ विकसकांना AS भाषा आणि मानक लायब्ररीची नवीन वैशिष्ट्ये वापरण्यास सक्षम करेल. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. 
If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -तुम्हाला कोणती निवड करायची याची खात्री नसल्यास, आम्ही नेहमी सुरक्षित आवृत्ती वापरण्याची शिफारस करतो. जर मूल्य अस्तित्वात नसेल तर तुम्ही तुमच्या सबग्राफ हँडलरमध्ये रिटर्नसह फक्त लवकर इफ स्टेटमेंट करू इच्छित असाल. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -आम्ही यासाठी असेंबलीस्क्रिप्ट कंपायलरवर एक समस्या उघडली आहे, परंतु आत्ता तुम्ही तुमच्या सबग्राफ मॅपिंगमध्ये अशा प्रकारचे ऑपरेशन करत असल्यास, तुम्ही त्यापूर्वी शून्य तपासणी करण्यासाठी ते बदलले पाहिजेत. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx index b0910e65fc1b..efe189247930 100644 --- a/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/mr/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. तुमच्या GraphQL ऑपरेशन्समधील समस्या शोधण्यासाठी आणि त्यांचे निराकरण करण्यासाठी तुम्ही CLI माइग्रेशन टूल वापरू शकता. वैकल्पिकरित्या तुम्ही `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` एंडपॉइंट वापरण्यासाठी तुमच्या GraphQL क्लायंटचा एंडपॉइंट अपडेट करू शकता. या एंडपॉइंटवर तुमच्या क्वेरींची चाचणी केल्याने तुम्हाला तुमच्या क्वेरींमधील समस्या शोधण्यात मदत होईल.
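The safe nullable pattern the AssemblyScript migration guide above recommends (early return instead of forcing with `!`) can be sketched in plain TypeScript, which AssemblyScript closely mirrors. The `load` helper here is a hypothetical stand-in for any lookup that may return null, not a real graph-ts API:

```typescript
// Hypothetical stand-in for an entity lookup that may return null.
function load(): { aMethod(): string } | null {
  return null;
}

function handleEvent(): string {
  const maybeValue = load(); // safe version: the type is T | null
  if (maybeValue == null) {
    // Early return, as the guide suggests, instead of `load()!`,
    // which would break at runtime when the value is null.
    return "skipped";
  }
  return maybeValue.aMethod();
}

console.log(handleEvent()); // "skipped" — no runtime crash despite the null
```

The same shape works inside a Subgraph handler: bail out early when the entity is absent, and the compiler narrows the type to non-null afterwards.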
-> तुम्ही [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) किंवा [GraphQL कोड जनरेटर](https://the-guild.dev) वापरत असल्यास, सर्व उपग्राफ स्थलांतरित करण्याची गरज नाही /graphql/codegen), ते तुमच्या क्वेरी वैध असल्याची खात्री करतात. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), these tools already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/mr/resources/roles/curating.mdx b/website/src/pages/mr/resources/roles/curating.mdx index 2d504102644e..4c73d5b33d31 100644 --- a/website/src/pages/mr/resources/roles/curating.mdx +++ b/website/src/pages/mr/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: क्युरेटिंग --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed.
This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## सिग्नल कसे करावे -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -क्युरेटर विशिष्ट सबग्राफ आवृत्तीवर सिग्नल करणे निवडू शकतो किंवा ते त्यांचे सिग्नल त्या सबग्राफच्या नवीनतम उत्पादन बिल्डमध्ये स्वयंचलितपणे स्थलांतरित करणे निवडू शकतात. दोन्ही वैध धोरणे आहेत आणि त्यांच्या स्वतःच्या साधक आणि बाधकांसह येतात. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. 
Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. तुमचा सिग्नल नवीनतम प्रोडक्शन बिल्डवर आपोआप स्थलांतरित होणे हे तुम्ही क्वेरी फी जमा करत असल्याचे सुनिश्चित करण्यासाठी मौल्यवान असू शकते. प्रत्येक वेळी तुम्ही क्युरेट करता तेव्हा 1% क्युरेशन कर लागतो. तुम्ही प्रत्येक स्थलांतरावर 0.5% क्युरेशन कर देखील द्याल. सबग्राफ विकसकांना वारंवार नवीन आवृत्त्या प्रकाशित करण्यापासून परावृत्त केले जाते - त्यांना सर्व स्वयं-स्थलांतरित क्युरेशन शेअर्सवर 0.5% क्युरेशन कर भरावा लागतो. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. 
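The curation-tax figures above (a 1% standard tax on initial curation, plus 0.5% on each auto-migration) compound across version bumps. A rough sketch of that arithmetic — illustrative helper names, and deliberately ignoring bonding-curve effects on share pricing:

```typescript
// Illustrative only: 1% curation tax is burned on an initial signal,
// and each auto-migration to a new version costs a further 0.5%.
function afterInitialSignal(grt: number): number {
  return (grt * 99) / 100; // 1% tax burned
}

function afterAutoMigrations(grt: number, migrations: number): number {
  let remaining = grt;
  for (let i = 0; i < migrations; i++) {
    remaining = (remaining * 995) / 1000; // 0.5% per migration
  }
  return remaining;
}

console.log(afterInitialSignal(1000)); // 990
console.log(afterAutoMigrations(990, 2)); // roughly 980.1 after two version bumps
```

This is why the text above discourages Subgraph developers from publishing new versions too frequently: every bump taxes all auto-migrating Curators again.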
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## जोखीम 1. द ग्राफमध्ये क्वेरी मार्केट मूळतः तरुण आहे आणि नवीन मार्केट डायनॅमिक्समुळे तुमचा %APY तुमच्या अपेक्षेपेक्षा कमी असण्याचा धोका आहे. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. बगमुळे सबग्राफ अयशस्वी होऊ शकतो. अयशस्वी सबग्राफ क्वेरी शुल्क जमा करत नाही. परिणामी, विकसक बगचे निराकरण करेपर्यंत आणि नवीन आवृत्ती तैनात करेपर्यंत तुम्हाला प्रतीक्षा करावी लागेल. - - तुम्ही सबग्राफच्या नवीनतम आवृत्तीचे सदस्यत्व घेतले असल्यास, तुमचे शेअर्स त्या नवीन आवृत्तीमध्ये स्वयंचलितपणे स्थलांतरित होतील. यावर 0.5% क्युरेशन कर लागेल. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## क्युरेशन FAQs ### 1. क्युरेटर्स किती % क्वेरी फी मिळवतात? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. कोणते सबग्राफ उच्च दर्जाचे आहेत हे मी कसे ठरवू? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. मी माझे क्युरेशन शेअर्स विकू शकतो का? diff --git a/website/src/pages/mr/resources/roles/delegating/undelegating.mdx b/website/src/pages/mr/resources/roles/delegating/undelegating.mdx index 0f7eee794703..7e73cae007df 100644 --- a/website/src/pages/mr/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/mr/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2.
Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## अतिरिक्त संसाधने diff --git a/website/src/pages/mr/resources/subgraph-studio-faq.mdx b/website/src/pages/mr/resources/subgraph-studio-faq.mdx index f5729fb6cfa8..e50ecf505404 100644 --- a/website/src/pages/mr/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/mr/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: सबग्राफ स्टुडिओ FAQ ## 1. सबग्राफ स्टुडिओ म्हणजे काय? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. मी API की कशी तयार करू? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th API की तयार केल्यानंतर, सिक्युरिटी विभागात, तुम्ही डोमेन परिभाषित करू शकता जे विशिष्ट क्वेरी करू शकतात API. -## 5. मी माझा सबग्राफ दुसर्‍या मालकाकडे हस्तांतरित करू शकतो का? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. 
You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -लक्षात ठेवा की एकदा स्‍टुडिओमध्‍ये सबग्राफ स्‍थानांतरित केल्‍यानंतर तुम्‍ही तो पाहू किंवा संपादित करू शकणार नाही. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. मला वापरायचा असलेल्या सबग्राफचा मी विकसक नसल्यास सबग्राफसाठी क्वेरी URL कसे शोधू? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -लक्षात ठेवा की तुम्ही API की तयार करू शकता आणि नेटवर्कवर प्रकाशित केलेल्या कोणत्याही सबग्राफची क्वेरी करू शकता, जरी तुम्ही स्वतः सबग्राफ तयार केला असला तरीही. नवीन API की द्वारे या क्वेरी, नेटवर्कवरील इतर कोणत्याही सशुल्क क्वेरी आहेत. +Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. These queries via the new API key are paid queries, like any other on the network.
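The placeholder substitution the FAQ above describes — taking the query URL from Graph Explorer and dropping in your own API key — is plain string replacement. A sketch under assumptions: the URL shape and the `[api-key]` token here are hypothetical examples, not the exact format Graph Explorer shows:

```typescript
// Hypothetical query URL containing an api-key placeholder.
const template = "https://gateway.example.com/api/[api-key]/subgraphs/id/QmExampleHash";

// Substitute the placeholder with a real API key from Subgraph Studio.
function withApiKey(urlTemplate: string, apiKey: string): string {
  return urlTemplate.replace("[api-key]", apiKey);
}

console.log(withApiKey(template, "MY_API_KEY"));
// -> https://gateway.example.com/api/MY_API_KEY/subgraphs/id/QmExampleHash
```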
diff --git a/website/src/pages/mr/resources/tokenomics.mdx b/website/src/pages/mr/resources/tokenomics.mdx index 0fe45e9d9969..168cbea5509b 100644 --- a/website/src/pages/mr/resources/tokenomics.mdx +++ b/website/src/pages/mr/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## सविश्लेषण -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. क्युरेटर - इंडेक्सर्ससाठी सर्वोत्तम सबग्राफ शोधा +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. इंडेक्सर्स - ब्लॉकचेन डेटाचा कणा @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. 
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. 
This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### सबग्राफ तयार करणे +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### विद्यमान सबग्राफची चौकशी करत आहे +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
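To make the querying step concrete, a minimal GraphQL request against a published Subgraph might look like the sketch below; the `tokens` entity and its fields are hypothetical and depend entirely on the schema of the Subgraph being queried:

```graphql
# Sent as the body of a POST request to the Subgraph's query URL
{
  tokens(first: 5, orderBy: id) {
    id
    symbol
  }
}
```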
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. 
This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/mr/sps/introduction.mdx b/website/src/pages/mr/sps/introduction.mdx index 69be7173e0cf..d22d998dee0d 100644 --- a/website/src/pages/mr/sps/introduction.mdx +++ b/website/src/pages/mr/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. 
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## सविश्लेषण -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. 
However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### अतिरिक्त संसाधने @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/mr/sps/sps-faq.mdx b/website/src/pages/mr/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/mr/sps/sps-faq.mdx +++ b/website/src/pages/mr/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs?
+## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. 
-## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. 
A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/mr/sps/triggers.mdx b/website/src/pages/mr/sps/triggers.mdx index f5f05b02f759..df877d792fad 100644 --- a/website/src/pages/mr/sps/triggers.mdx +++ b/website/src/pages/mr/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## सविश्लेषण -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### अतिरिक्त संसाधने diff --git a/website/src/pages/mr/sps/tutorial.mdx b/website/src/pages/mr/sps/tutorial.mdx index 7f038fe09059..3e89f8c8804d 100644 --- a/website/src/pages/mr/sps/tutorial.mdx +++ b/website/src/pages/mr/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## सुरु करूया @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial diff --git a/website/src/pages/mr/subgraphs/_meta-titles.json b/website/src/pages/mr/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/mr/subgraphs/_meta-titles.json +++ b/website/src/pages/mr/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. 
By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing.
## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
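With the `Post`/`Comment` schema above, the derived relationship can then be queried from either side (a sketch; field names follow the example schema):

```graphql
{
  posts(first: 5) {
    id
    # `comments` is the virtual @derivedFrom field: nothing is stored on Post
    comments {
      id
    }
  }
  comments(first: 5) {
    id
    # reverse lookup back to the parent Post
    post {
      id
    }
  }
}
```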
diff --git a/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx index 0989034a01a3..63b5d9bbe017 100644 --- a/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### सविश्लेषण -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## अतिरिक्त संसाधने - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
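The byte-level effect of `concatI32()` can be illustrated in plain TypeScript (a sketch, not the graph-ts implementation; it assumes the i32 is appended little-endian, as with graph-ts's `ByteArray.fromI32`):

```typescript
// Append a 4-byte little-endian i32 to a byte array, mirroring the shape of
// the Bytes ID produced by `event.transaction.hash.concatI32(event.logIndex.toI32())`.
function concatI32(bytes: Uint8Array, n: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  out[bytes.length] = n & 0xff;
  out[bytes.length + 1] = (n >>> 8) & 0xff;
  out[bytes.length + 2] = (n >>> 16) & 0xff;
  out[bytes.length + 3] = (n >>> 24) & 0xff;
  return out;
}

// A 32-byte transaction hash plus a 4-byte log index yields a fixed 36-byte ID,
// which compares and indexes far faster than a hex-and-dash string.
const txHash = new Uint8Array(32).fill(0xab); // hypothetical hash
const id = concatI32(txHash, 5);
```

Unlike the string form `hash.toHex() + "-" + logIndex.toString()`, the result stays a compact fixed-width byte value.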
diff --git a/website/src/pages/mr/subgraphs/best-practices/pruning.mdx b/website/src/pages/mr/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/mr/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. 
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx b/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx index 239c7e0158db..c690981afd7c 100644 --- a/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/mr/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## सविश्लेषण @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ A timeseries entity represents raw data points collected over time. It is define type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ An aggregation entity computes aggregated values from a timeseries source. It is type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. 
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/mr/subgraphs/billing.mdx b/website/src/pages/mr/subgraphs/billing.mdx index 7126ce22520f..3199bdea1317 100644 --- a/website/src/pages/mr/subgraphs/billing.mdx +++ b/website/src/pages/mr/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). 
+

## Query Payments with credit card

diff --git a/website/src/pages/mr/subgraphs/cookbook/arweave.mdx b/website/src/pages/mr/subgraphs/cookbook/arweave.mdx
index 2b43324539b9..be076ab8f655 100644
--- a/website/src/pages/mr/subgraphs/cookbook/arweave.mdx
+++ b/website/src/pages/mr/subgraphs/cookbook/arweave.mdx
@@ -2,7 +2,7 @@
 title: Arweave वर सबग्राफ तयार करणे
 ---

-> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!

 या मार्गदर्शकामध्ये, तुम्ही Arweave ब्लॉकचेन इंडेक्स करण्यासाठी सबग्राफ कसे तयार करावे आणि कसे तैनात करावे ते शिकाल.

@@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are

 Arweave Subgraphs तयार आणि तैनात करण्यात सक्षम होण्यासाठी, तुम्हाला दोन पॅकेजेसची आवश्यकता आहे:

-1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
-2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.

 ## सबग्राफचे घटक

-सबग्राफचे तीन घटक आहेत:
+There are three components of a Subgraph:

 ### 1. 
Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Arweave Subgraphs तयार आणि तैनात करण्यात GraphQL वापरून तुमचा सबग्राफ इंडेक्स केल्यानंतर तुम्ही कोणता डेटा क्वेरी करू इच्छिता ते येथे तुम्ही परिभाषित करता. हे प्रत्यक्षात API च्या मॉडेलसारखेच आहे, जेथे मॉडेल विनंती मुख्य भागाची रचना परिभाषित करते. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` जेव्हा तुम्ही ऐकत असलेल्या डेटा स्रोतांशी कोणीतरी संवाद साधते तेव्हा डेटा कसा पुनर्प्राप्त आणि संग्रहित केला जावा हे हे तर्कशास्त्र आहे. डेटा अनुवादित केला जातो आणि तुम्ही सूचीबद्ध केलेल्या स्कीमावर आधारित संग्रहित केला जातो. -सबग्राफ विकासादरम्यान दोन प्रमुख आज्ञा आहेत: +During Subgraph development there are two key commands: ``` -$ graph codegen # मॅनिफेस्टमध्ये ओळखल्या गेलेल्या स्कीमा फाइलमधून प्रकार व्युत्पन्न करते -$ graph build # असेंबलीस्क्रिप्ट फायलींमधून वेब असेंब्ली तयार करते आणि /बिल्ड फोल्डरमध्ये सर्व सबग्राफ फाइल्स तयार करते +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## सबग्राफ मॅनिफेस्ट व्याख्या -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave डेटा स्रोत पर्यायी source.owner फील्ड सादर करतात, जी Arweave वॉलेटची सार्वजनिक की आहे @@ -99,7 +99,7 @@ Arweave डेटा स्रोत दोन प्रकारच्या ## स्कीमा व्याख्या -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## असेंबलीस्क्रिप्ट मॅपिंग @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## प्रश्न करत आहे Arweave सबग्राफ -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## उदाहरणे सबग्राफ -संदर्भासाठी येथे एक उदाहरण उपग्राफ आहे: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### सबग्राफ इंडेक्स Arweave आणि इतर साखळी करू शकता? +### Can a Subgraph index Arweave and other chains? -नाही, सबग्राफ केवळ एका साखळी/नेटवर्कमधील डेटा स्रोतांना समर्थन देऊ शकतो. +No, a Subgraph can only support data sources from one chain/network. ### मी Arweave वर संग्रहित फाइल्स अनुक्रमित करू शकतो? सध्या, ग्राफ फक्त ब्लॉकचेन (त्याचे ब्लॉक्स आणि व्यवहार) म्हणून Arweave अनुक्रमित करत आहे. -### Currently, The Graph फक्त blockchain (त्याचे blocks आणि transactions) म्हणून Arweave अनुक्रमित करत आहे? +### Can I identify Bundlr bundles in my Subgraph? हे सध्या समर्थित नाही. @@ -188,7 +188,7 @@ source.owner वापरकर्त्याची सार्वजनिक ### सध्याचे एन्क्रिप्शन स्वरूप काय आहे? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).

The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:

diff --git a/website/src/pages/mr/subgraphs/cookbook/enums.mdx b/website/src/pages/mr/subgraphs/cookbook/enums.mdx
index 081add904f9a..c2f2a41791f3 100644
--- a/website/src/pages/mr/subgraphs/cookbook/enums.mdx
+++ b/website/src/pages/mr/subgraphs/cookbook/enums.mdx
@@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define

### Example of Enums in Your Schema

-If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.

You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
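The guarantee described above, that only predefined values can be assigned, can be sketched in plain JavaScript (the cookbook's mappings are AssemblyScript, and the `Marketplace` values below are illustrative placeholders, not the full schema):

```javascript
// Enum values are stored as plain strings on the entity, so the safety
// comes from only ever assigning one of the schema-declared values.
// NOTE: these Marketplace values are illustrative, not the full schema.
const MARKETPLACE = Object.freeze({
  OPENSEA_V1: "OpenSeaV1",
  OPENSEA_V2: "OpenSeaV2",
  SEAPORT: "SeaPort",
});

// Reject anything that is not a declared enum value (a typo guard).
function setMarketplace(entity, value) {
  if (!Object.values(MARKETPLACE).includes(value)) {
    throw new Error(`"${value}" is not a declared Marketplace value`);
  }
  entity.marketplace = value;
  return entity;
}

const sale = setMarketplace({ id: "0x1-42" }, MARKETPLACE.SEAPORT);
```

In the actual mapping, the generated AssemblyScript entity classes perform the equivalent string assignment.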
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/mr/subgraphs/cookbook/grafting.mdx b/website/src/pages/mr/subgraphs/cookbook/grafting.mdx index 3ceb7d2c7901..1fd0c6d49932 100644 --- a/website/src/pages/mr/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: करार बदला आणि त्याचा इतिहास ग्राफ्टिंगसह ठेवा --- -या मार्गदर्शकामध्ये, तुम्ही विद्यमान सबग्राफ्सचे ग्राफ्टिंग करून नवीन सबग्राफ कसे तयार करावे आणि कसे तैनात करावे ते शिकाल. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## ग्राफ्टिंग म्हणजे काय? -ग्राफ्टिंग विद्यमान सबग्राफमधील डेटा पुन्हा वापरते आणि नंतरच्या ब्लॉकमध्ये अनुक्रमित करणे सुरू करते. मॅपिंगमध्ये भूतकाळातील साध्या चुका लवकर मिळवण्यासाठी किंवा विद्यमान सबग्राफ अयशस्वी झाल्यानंतर तात्पुरते काम करण्यासाठी हे विकासादरम्यान उपयुक्त आहे. तसेच, स्क्रॅचपासून इंडेक्स होण्यास बराच वेळ घेणार्‍या सबग्राफमध्ये वैशिष्ट्य जोडताना ते वापरले जाऊ शकते. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch.

-ग्राफ्टेड सबग्राफ GraphQL स्कीमा वापरू शकतो जो बेस सबग्राफपैकी एकाशी एकसारखा नसतो, परंतु त्याच्याशी फक्त सुसंगत असतो. ती स्वतःच्या अधिकारात वैध सबग्राफ स्कीमा असणे आवश्यक आहे, परंतु खालील प्रकारे बेस सबग्राफच्या स्कीमापासून विचलित होऊ शकते:
+The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:

- हे घटक प्रकार जोडते किंवा काढून टाकते
- हे घटक प्रकारातील गुणधर्म काढून टाकते

@@ -22,38 +22,38 @@ title: करार बदला आणि त्याचा इतिहास

- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)

-In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract.
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.

## Important Note on Grafting When Upgrading to the Network

-> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network

### Why Is This Important?

-Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version.
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## विद्यमान सबग्राफ तयार करणे -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## सबग्राफ मॅनिफेस्ट व्याख्या -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting मॅनिफेस्ट व्याख्या -ग्राफ्टिंगसाठी मूळ सबग्राफ मॅनिफेस्टमध्ये दोन नवीन आयटम जोडणे आवश्यक आहे: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## बेस सबग्राफ तैनात करणे -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. एकदा पूर्ण झाल्यावर, सबग्राफ योग्यरित्या अनुक्रमित होत असल्याचे सत्यापित करा. जर तुम्ही ग्राफ प्लेग्राउंडमध्ये खालील आदेश चालवलात +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ The `base` and `block` values can be found by deploying two subgraphs: one for t } ``` -एकदा तुम्ही सबग्राफ व्यवस्थित इंडेक्स करत असल्याची पडताळणी केल्यानंतर, तुम्ही ग्राफ्टिंगसह सबग्राफ त्वरीत अपडेट करू शकता. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## ग्राफ्टिंग सबग्राफ तैनात करणे कलम बदली subgraph.yaml मध्ये नवीन करार पत्ता असेल. जेव्हा तुम्ही तुमचा dapp अपडेट करता, करार पुन्हा लागू करता तेव्हा असे होऊ शकते. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. 
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio.
-3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
-4. एकदा पूर्ण झाल्यावर, सबग्राफ योग्यरित्या अनुक्रमित होत असल्याचे सत्यापित करा. जर तुम्ही ग्राफ प्लेग्राउंडमध्ये खालील आदेश चालवलात
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground

```graphql
{
@@ -185,9 +185,9 @@ It should return the following:
}
```

-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452).
The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.

-Congrats! You have successfully grafted a subgraph onto another subgraph.
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.

## अतिरिक्त संसाधने

diff --git a/website/src/pages/mr/subgraphs/cookbook/near.mdx b/website/src/pages/mr/subgraphs/cookbook/near.mdx
index 6e790fdcb0cf..4a183fca2e16 100644
--- a/website/src/pages/mr/subgraphs/cookbook/near.mdx
+++ b/website/src/pages/mr/subgraphs/cookbook/near.mdx
@@ -2,17 +2,17 @@
title: NEAR वर सबग्राफ तयार करणे
---

-This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).

## जवळ म्हणजे काय?

[NEAR](https://near.org/) is a smart contract platform for building decentralized applications.
Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## NEAR subgraphs म्हणजे काय? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - ब्लॉक हँडलर: हे प्रत्येक नवीन ब्लॉकवर चालवले जातात - पावती हँडलर्स: निर्दिष्ट खात्यावर संदेश कार्यान्वित झाल्यावर प्रत्येक वेळी चालवा @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## एक NEAR सबग्राफतयार करणे -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. 
-> NEAR सबग्राफ तयार करणे, याची प्रक्रिया इथेरियमवरील सबग्राफ तयार करण्याशी खूप सामान्यतेने सादर करते. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -सबग्राफ व्याख्येचे तीन पैलू आहेत: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
-सबग्राफ विकासादरम्यान दोन प्रमुख आज्ञा आहेत: +During Subgraph development there are two key commands: ```bash -$ graph codegen # मॅनिफेस्टमध्ये ओळखल्या गेलेल्या स्कीमा फाइलमधून प्रकार व्युत्पन्न करते -$ graph build # असेंबलीस्क्रिप्ट फायलींमधून वेब असेंब्ली तयार करते आणि /बिल्ड फोल्डरमध्ये सर्व सबग्राफ फाइल्स तयार करते +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### सबग्राफ मॅनिफेस्ट व्याख्या -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. 
On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.

@@ -92,7 +92,7 @@ accounts:

### स्कीमा व्याख्या

-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).

### असेंबलीस्क्रिप्ट मॅपिंग

@@ -165,31 +165,31 @@ These types are passed to block & receipt handlers:

- Block handlers will receive a `Block`
- Receipt handlers will receive a `ReceiptWithOutcome`

-Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution.
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.

This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs.
A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## NEAR सबग्राफ डिप्लॉय करण्यासाठी -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". 
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -नोड कॉन्फिगरेशन सबग्राफ कोठे तैनात केले जात आहे यावर अवलंबून असेल. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -एकदा तुमचा सबग्राफ तैनात केला गेला की, तो ग्राफ नोडद्वारे अनुक्रमित केला जाईल. तुम्ही सबग्राफवरच क्वेरी करून त्याची प्रगती तपासू शकता: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ NEAR ची अनुक्रमणिका देणारा आलेख ## NEAR सबग्राफची क्वेरी करणे -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
## उदाहरणे सबग्राफ -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### बीटा कसे कार्य करते? -NEAR सपोर्ट बीटामध्ये आहे, याचा अर्थ असा की API मध्ये बदल होऊ शकतात कारण आम्ही एकत्रीकरण सुधारण्यासाठी काम करत आहोत. कृपया near@thegraph.com वर ईमेल करा जेणेकरुन आम्‍ही तुम्‍हाला जवळचे सबग्राफ तयार करण्‍यात मदत करू शकू आणि तुम्‍हाला नवीनतम घडामोडींबद्दल अद्ययावत ठेवू शकू! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### सबग्राफ इंडेक्स NEAR आणि EVM दोन्ही चेन करू शकतो का? +### Can a Subgraph index both NEAR and EVM chains? -नाही, सबग्राफ केवळ एका साखळी/नेटवर्कमधील डेटा स्रोतांना समर्थन देऊ शकतो. +No, a Subgraph can only support data sources from one chain/network. -### सबग्राफ अधिक विशिष्ट ट्रिगरवर प्रतिक्रिया देऊ शकतात? +### Can Subgraphs react to more specific triggers? सध्या, फक्त ब्लॉक आणि पावती ट्रिगर समर्थित आहेत. आम्ही एका निर्दिष्ट खात्यावर फंक्शन कॉलसाठी ट्रिगर तपासत आहोत. आम्‍हाला इव्‍हेंट ट्रिगरला सपोर्ट करण्‍यात देखील रस आहे, एकदा NEAR ला नेटिव्ह इव्‍हेंट सपोर्ट असेल. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### मॅपिंग दरम्यान NEAR subgraphs NEAR खात्यांना व्ह्यू कॉल करू शकतात? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? हे समर्थित नाही. अनुक्रमणिकेसाठी ही कार्यक्षमता आवश्यक आहे का याचे आम्ही मूल्यमापन करत आहोत. -### NEARのサブグラフでデータソーステンプレートを使用できますか? +### Can I use data source templates in my NEAR Subgraph? हे सध्या समर्थित नाही. अनुक्रमणिकेसाठी ही कार्यक्षमता आवश्यक आहे का याचे आम्ही मूल्यमापन करत आहोत. 
-### イーサリアムのサブグラフでは、「pending」および「current」のバージョンがサポートされていますが、NEARのサブグラフの「pending」バージョンをどのようにデプロイできるでしょうか?
+### Ethereum Subgraphs support "pending" and "current" versions. How can I deploy a "pending" version of a NEAR Subgraph?

-NEARサブグラフの「保留中」機能はまだサポートされていません。その間、異なる「名前付き」サブグラフに新しいバージョンをデプロイし、それがチェーンヘッドと同期された後、主要な「名前付き」サブグラフに再デプロイすることができます。この場合、同じ基礎となるデプロイメントIDを使用するため、メインのサブグラフは即座に同期されます.
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then, once that is synced with the chain head, redeploy to your primary "named" Subgraph. Because it uses the same underlying deployment ID, the main Subgraph will be synced instantly.

-### माझा प्रश्न उत्तर दिला नाही, NEAR सबग्राफ तयार करण्यासाठी अधिक मदत कुठे मिळेल?
+### My question hasn't been answered. Where can I get more help building NEAR Subgraphs?

-If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise, please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
## संदर्भ diff --git a/website/src/pages/mr/subgraphs/cookbook/polymarket.mdx b/website/src/pages/mr/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/mr/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on

## Polymarket's GraphQL Schema

-The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).

### Polymarket Subgraph Endpoint

@@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra

1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
2. Go to https://thegraph.com/studio/apikeys/ to create an API key

-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.

100k queries per month are free, which is perfect for your side project!

@@ -143,6 +143,6 @@ axios(graphQLRequest)

### Additional resources

-For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).

-To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
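As a rough sketch of the gateway request the page's axios snippet sends (the API key is a placeholder read from the environment, and `_meta` is used here as a generic indexing-status query rather than a Polymarket-specific field):

```javascript
// Build the request object for the Polymarket Subgraph on the gateway.
// GRAPH_API_KEY is a placeholder; substitute your own key from Subgraph Studio.
const SUBGRAPH_ID = "Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp";
const apiKey = process.env.GRAPH_API_KEY || "YOUR_API_KEY";

const graphQLRequest = {
  method: "post",
  url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${SUBGRAPH_ID}`,
  data: {
    // _meta is served by every Subgraph and reports indexing progress.
    query: "{ _meta { block { number } } }",
  },
};
```

Passing `graphQLRequest` to `axios(...)` as in the snippet above returns the query result under `response.data`.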
diff --git a/website/src/pages/mr/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/mr/subgraphs/cookbook/secure-api-keys-nextjs.mdx index d5ff1b146dfd..b6b043fa29f1 100644 --- a/website/src/pages/mr/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## सविश्लेषण -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
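The server-side pattern described above can be sketched without the Next.js plumbing: the key lives only in server code, and the payload serialized for the client never contains it. The env-variable name, subgraph ID, and function names below are illustrative assumptions, not the actual Next.js wiring.

```typescript
// Sketch of the server-component idea, framework plumbing omitted.
// API_KEY exists only in server scope; the object serialized for the
// client carries query results, never the key.
const API_KEY = process.env.GRAPH_API_KEY || "demo-server-key"; // placeholder env var
const SUBGRAPH_ID = "SUBGRAPH_DEPLOYMENT_ID"; // placeholder

// In a real server component this would `await fetch(url, ...)`;
// the network call is elided so the sketch stays self-contained.
function queryFromServer(query: string): { data: { query: string } } {
  const url = `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`;
  void url; // used only on the server
  return { data: { query } };
}

// Only this serialized payload crosses the server/client boundary,
// so the API key never appears in client code or headers.
const payload = JSON.stringify(queryFromServer("{ tokens { id } }"));
console.log(payload);
```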
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/mr/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/mr/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..c3c2e4af7f1f --- /dev/null +++ b/website/src/pages/mr/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## सविश्लेषण + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## सुरु करूया + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance. + +## अतिरिक्त संसाधने + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
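Under the steps above, the composed Subgraph's manifest could list the three source Subgraphs as `subgraph`-kind data sources, mirroring the single-source manifest form used elsewhere in these docs. This is a hedged sketch: the data-source names, network, deployment IDs, and start blocks below are placeholders you would replace with the values printed by your own local deployments.

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      address: 'QmBlockTimeDeploymentIdPlaceholder' # deployment ID of source Subgraph 1
      startBlock: 0
  - kind: subgraph
    name: BlockCost
    network: mainnet
    source:
      address: 'QmBlockCostDeploymentIdPlaceholder' # deployment ID of source Subgraph 2
      startBlock: 0
  - kind: subgraph
    name: BlockSize
    network: mainnet
    source:
      address: 'QmBlockSizeDeploymentIdPlaceholder' # deployment ID of source Subgraph 3
      startBlock: 0
```

Remember that each redeployment of a source Subgraph produces a new deployment ID, so these `address` values must be updated whenever a source changes.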
diff --git a/website/src/pages/mr/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/mr/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..5aef79286296 --- /dev/null +++ b/website/src/pages/mr/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Improve your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. 2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories.
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## सुरु करूया + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger<Initialize>): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger<Initialize>`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. 2. `type`: Indicates the entity type. 3.
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance. + +## अतिरिक्त संसाधने + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/mr/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/mr/subgraphs/cookbook/subgraph-debug-forking.mdx index 3c7f2ec051e3..7007c6021580 100644 --- a/website/src/pages/mr/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: फॉर्क्स वापरून जलद आणि सुलभ सबग्राफ डीबगिंग --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## ठीक आहे, ते काय आहे? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## काय?! कसे? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## कृपया, मला काही कोड दाखवा! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. निराकरण करण्याचा प्रयत्न करण्याचा नेहमीचा मार्ग आहे: 1. मॅपिंग स्त्रोतामध्ये बदल करा, जो तुम्हाला विश्वास आहे की समस्या सोडवेल (जेव्हा मला माहित आहे की ते होणार नाही). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. ते समक्रमित होण्याची प्रतीक्षा करा. 4. 
तो पुन्हा खंडित झाल्यास 1 वर परत जा, अन्यथा: हुर्रे! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. मॅपिंग स्त्रोतामध्ये बदल करा, जो तुम्हाला विश्वास आहे की समस्या सोडवेल. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. तो पुन्हा खंडित झाल्यास, 1 वर परत जा, अन्यथा: हुर्रे! आता, तुमच्याकडे 2 प्रश्न असू शकतात: @@ -69,18 +69,18 @@ Using **subgraph forking** we can essentially eliminate this step. Here is how i आणि मी उत्तर देतो: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store. 2. काटा काढणे सोपे आहे, घाम गाळण्याची गरज नाही: ```bash $ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! तर, मी काय करतो ते येथे आहे: -1.
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/mr/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/mr/subgraphs/cookbook/subgraph-uncrashable.mdx index 9a7e3d9f008e..55cf87cd0af1 100644 --- a/website/src/pages/mr/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: सुरक्षित सबग्राफ कोड जनरेटर --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Subgraph Uncrashable सह समाकलित का? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. - फ्रेमवर्कमध्ये एंटिटी व्हेरिएबल्सच्या गटांसाठी सानुकूल, परंतु सुरक्षित, सेटर फंक्शन्स तयार करण्याचा मार्ग (कॉन्फिग फाइलद्वारे) देखील समाविष्ट आहे. अशा प्रकारे वापरकर्त्याला जुना आलेख घटक लोड करणे/वापरणे अशक्य आहे आणि फंक्शनसाठी आवश्यक असलेले व्हेरिएबल सेव्ह करणे किंवा सेट करणे विसरणे देखील अशक्य आहे. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. ग्राफ CLI codegen कमांड वापरून Subgraph Uncrashable हा पर्यायी ध्वज म्हणून चालवला जाऊ शकतो. 
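To illustrate the kind of helper such a generator emits, here is a hand-written sketch — not Subgraph Uncrashable's actual output and not the graph-ts API: loading an entity never yields `null`, because a missing entity is initialized with configured defaults and the incident is logged for later patching.

```typescript
// Simplified stand-ins for an entity and its store; real generated code
// targets AssemblyScript entities backed by the Graph Node store.
type Gravatar = { id: string; displayName: string; imageUrl: string };
const store = new Map<string, Gravatar>();

// A "safe" loader: never returns null, applies defaults (which the real
// tool reads from its config file), and records a warning so the breach
// of Subgraph logic is visible in logs.
function getOrInitializeGravatar(id: string): Gravatar {
  let entity = store.get(id);
  if (entity === undefined) {
    entity = { id, displayName: "", imageUrl: "" }; // configured defaults
    store.set(id, entity);
    console.warn(`Gravatar ${id} not found; initialized with defaults`);
  }
  return entity;
}

// A handler written against this helper cannot crash with "Gravatar not found!".
const g = getOrInitializeGravatar("0x1");
g.displayName = "alice";
store.set(g.id, g);
```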
@@ -26,4 +26,4 @@ title: सुरक्षित सबग्राफ कोड जनरेट graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/mr/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/mr/subgraphs/cookbook/transfer-to-the-graph.mdx index d31f9d8864b5..d687370b93e6 100644 --- a/website/src/pages/mr/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/mr/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### उदाहरण -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
### अतिरिक्त संसाधने

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx b/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx
index c24f72030078..e83051efd7a9 100644
--- a/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/mr/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## सविश्लेषण

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## गैर-घातक त्रुटी -आधीच समक्रमित केलेल्या सबग्राफ्सवर अनुक्रमणिका त्रुटी, डीफॉल्टनुसार, सबग्राफ अयशस्वी होण्यास आणि समक्रमण थांबवण्यास कारणीभूत ठरतील. सबग्राफ वैकल्पिकरित्या त्रुटींच्या उपस्थितीत समक्रमण सुरू ठेवण्यासाठी कॉन्फिगर केले जाऊ शकतात, हँडलरने केलेल्या बदलांकडे दुर्लक्ष करून, ज्यामुळे त्रुटी उद्भवली. हे सबग्राफ लेखकांना त्यांचे सबग्राफ दुरुस्त करण्यासाठी वेळ देते जेव्हा की नवीनतम ब्लॉकच्या विरूद्ध क्वेरी चालू ठेवल्या जातात, जरी त्रुटीमुळे परिणाम विसंगत असू शकतात. लक्षात घ्या की काही त्रुटी अजूनही नेहमीच घातक असतात. गैर-घातक होण्यासाठी, त्रुटी निश्चितपणे ज्ञात असणे आवश्यक आहे. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.

-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:

```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
  - nonFatalErrors
...
```

-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:

```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```

-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:

```graphql
"data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a

## IPFS/Arweave File Data Sources

-File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
> हे ऑफ-चेन डेटाच्या निर्धारवादी अनुक्रमणिकेसाठी तसेच अनियंत्रित HTTP-स्रोत डेटाच्या संभाव्य परिचयासाठी देखील पाया घालते. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### तुमचे सबग्राफ उपयोजित करत आहे +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -फाइल डेटा स्रोत हँडलर आणि संस्था इतर सबग्राफ संस्थांपासून वेगळ्या केल्या जातात, ते कार्यान्वित केल्यावर ते निर्धारवादी आहेत याची खात्री करून आणि साखळी-आधारित डेटा स्रोतांचे दूषित होणार नाही याची खात्री करतात. 
विशिष्ट असणे: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> बहुतेक वापर-प्रकरणांसाठी ही मर्यादा समस्याप्रधान नसावी, परंतु काहींसाठी ते जटिलता आणू शकते. सबग्राफमध्‍ये तुमच्‍या फाईल-आधारित डेटाचे मॉडेल बनवण्‍यात तुम्‍हाला समस्या येत असल्‍यास कृपया डिस्‍कॉर्ड द्वारे संपर्क साधा! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! याव्यतिरिक्त, फाइल डेटा स्रोतावरून डेटा स्रोत तयार करणे शक्य नाही, मग ते ऑनचेन डेटा स्रोत असो किंवा अन्य फाइल डेटा स्रोत. भविष्यात हे निर्बंध उठवले जाऊ शकतात. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. 
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
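Putting the pieces of Example 1 together, the handler entry might be sketched as follows (the handler name is illustrative, and the addresses are the placeholders from the text):

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    topic1: ['0xAddressA'] # the sender must match
    topic2: ['0xAddressB'] # the receiver must match
```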
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses

@@ -452,17 +452,17 @@ In this configuration:

- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.

## Declared eth_call

> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.

-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.

This feature does the following:

-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
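Assembled from the details above, a declared call using `event.address` might be sketched as follows (the `Swap` event signature and handler name are illustrative):

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```

The label on the left (`global0X128`) names the call result so the handler can retrieve it, and the right-hand side follows the `Contract[address].function(arguments)` form described above.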
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`

```yaml
calls:
@@ -535,22 +535,22 @@ calls:

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.

-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:

```yaml
description: ...
graft:
-  base: Qm... # Subgraph ID of base subgraph
+  base: Qm... # Subgraph ID of base Subgraph
  block: 7345624 # Block number
```

-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.

-बेस डेटा इंडेक्स करण्याऐवजी कॉपीचे ग्राफ्टिंग केल्यामुळे, सुरवातीपासून इंडेक्स करण्यापेक्षा इच्छित ब्लॉकमध्ये सबग्राफ मिळवणे खूप जलद आहे, जरी सुरुवातीच्या डेटा कॉपीला खूप मोठ्या सबग्राफसाठी बरेच तास लागू शकतात. ग्रॅफ्टेड सबग्राफ सुरू होत असताना, ग्राफ नोड आधीपासून कॉपी केलेल्या घटक प्रकारांबद्दल माहिती लॉग करेल.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.

-ग्राफ्टेड सबग्राफ GraphQL स्कीमा वापरू शकतो जो बेस सबग्राफपैकी एकाशी एकसारखा नसतो, परंतु त्याच्याशी फक्त सुसंगत असतो.
ती स्वतःच्या अधिकारात वैध सबग्राफ स्कीमा असणे आवश्यक आहे, परंतु खालील प्रकारे बेस सबग्राफच्या स्कीमापासून विचलित होऊ शकते: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - हे घटक प्रकार जोडते किंवा काढून टाकते - हे घटक प्रकारातील गुणधर्म काढून टाकते @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - हे इंटरफेस जोडते किंवा काढून टाकते - कोणत्या घटकासाठी इंटरफेस लागू केला जातो ते बदलते -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx index 682aec0ae2a5..e531b0f3d7c9 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## कोड जनरेशन -स्मार्ट कॉन्ट्रॅक्ट्स, इव्हेंट्स आणि संस्थांसोबत काम करणे सोपे आणि टाइप-सुरक्षित करण्यासाठी, ग्राफ CLI सबग्राफच्या GraphQL स्कीमा आणि डेटा स्रोतांमध्ये समाविष्ट केलेल्या कॉन्ट्रॅक्ट ABIs मधून असेंबलीस्क्रिप्ट प्रकार व्युत्पन्न करू शकतो. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. यासह केले जाते @@ -80,7 +80,7 @@ If no value is set for a field in the new entity with the same ID, the field wil graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:

```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```

-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with:

```javascript
import { Gravatar } from '../generated/schema'
```

-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.

-Code generation does not check your mapping code in `src/mapping.ts`.
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx index a807b884e30c..c4c2e4f17471 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: असेंबलीस्क्रिप्ट API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).

-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:

- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`

You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).

@@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs:

### आवृत्त्या

-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.

-| आवृत्ती | रिलीझ नोट्स |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| आवृत्ती | रिलीझ नोट्स |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |

### अंगभूत प्रकार

@@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API:

The `store` API allows to load, save and remove entities from and to the Graph Node store.

-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.

#### अंदाज निर्मिती करणे

@@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco

The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.

-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### इथरियम प्रकारांसाठी समर्थन -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -पुढील उदाहरण हे स्पष्ट करते. सारखी सबग्राफ स्कीमा दिली +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### स्मार्ट कॉन्ट्रॅक्ट स्टेटमध्ये प्रवेश -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. कॉन्ट्रॅक्टमध्ये प्रवेश करणे हा एक सामान्य पॅटर्न आहे ज्यातून इव्हेंटची उत्पत्ती होते. हे खालील कोडसह साध्य केले आहे: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -सबग्राफचा भाग असलेला इतर कोणताही करार व्युत्पन्न केलेल्या कोडमधून आयात केला जाऊ शकतो आणि वैध पत्त्यावर बांधला जाऊ शकतो. +Any other contract that is part of the Subgraph can be imported from the generated code and bound to a valid address. #### रिव्हर्ट केलेले कॉल हाताळणे @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false '@graphprotocol/graph-ts' वरून { log } आयात करा ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### क्रिप्टो API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx index d291033f3ff0..868eab208423 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: सामान्य असेंब्लीस्क्रिप्ट समस्या --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are commonly encountered during Subgraph development. They vary in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx index 51dfb940edcb..c6892188ddfa 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## सविश्लेषण -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that streamlines common developer commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## प्रारंभ करणे @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## सबग्राफ तयार करा ### विद्यमान करारातून -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
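As a hedged illustration of the `graph init` flow described above — the slug, network, and contract address here are hypothetical placeholders, not values taken from this guide — a filled-in invocation might look like:

```sh
# Bootstrap a Subgraph from an already-deployed contract (all values illustrative)
graph init \
  --product subgraph-studio \
  --from-contract 0x2E645469f354BB4F5c8a05B3b30A929361cf77eC \
  --network mainnet \
  my-user/my-subgraph
```

Any omitted optional arguments are collected by the interactive form, so in practice you can also run a bare `graph init` and answer the prompts.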
### सबग्राफच्या उदाहरणावरून -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| आवृत्ती | रिलीझ नोट्स | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI; otherwise, running your Subgraph will fail. diff --git a/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx index 0e96ef80d066..6af6f1fe497d 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## सविश्लेषण -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema.
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| प्रकार | वर्णन | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| प्रकार | वर्णन | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### एनम्स @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -एक-ते-अनेक संबंधांसाठी, संबंध नेहमी 'एका' बाजूला साठवले पाहिजेत आणि 'अनेक' बाजू नेहमी काढल्या पाहिजेत. 
'अनेक' बाजूंवर संस्थांचा अ‍ॅरे संचयित करण्याऐवजी अशा प्रकारे नातेसंबंध संचयित केल्याने, अनुक्रमणिका आणि सबग्राफ क्वेरी या दोन्हीसाठी नाटकीयरित्या चांगले कार्यप्रदर्शन होईल. सर्वसाधारणपणे, घटकांचे अ‍ॅरे संग्रहित करणे जितके व्यावहारिक आहे तितके टाळले पाहिजे. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### उदाहरण @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -अनेक-ते-अनेक संबंध संचयित करण्याच्या या अधिक विस्तृत मार्गामुळे सबग्राफसाठी कमी डेटा संग्रहित केला जाईल आणि म्हणूनच अनुक्रमणिका आणि क्वेरीसाठी नाटकीयरित्या वेगवान असलेल्या सबग्राफमध्ये. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and query. ### स्कीमामध्ये टिप्पण्या जोडत आहे @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
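To make that feature-management note concrete, here is a minimal sketch of the relevant manifest fragment — not a complete `subgraph.yaml`, just the `features` declaration:

```yaml
# Declaring fullTextSearch under `features` (required from specVersion 0.0.4 onwards)
specVersion: 1.3.0
features:
  - fullTextSearch
```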
## भाषा समर्थित @@ -295,30 +295,30 @@ query { समर्थित भाषा शब्दकोश: -| Code | शब्दकोश | -| ---- | --------- | -| सोपे | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | पोर्तुगीज | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Code | शब्दकोश | +| ------ | ---------- | +| सोपे | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | पोर्तुगीज | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | ### रँकिंग अल्गोरिदम परिणाम ऑर्डर करण्यासाठी समर्थित अल्गोरिदम: -| Algorithm | Description | -| ------------- | ---------------------------------------------------------------------- | -| rank | निकाल ऑर्डर करण्यासाठी फुलटेक्स्ट क्वेरीची जुळणी गुणवत्ता (0-1) वापरा. | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | निकाल ऑर्डर करण्यासाठी फुलटेक्स्ट क्वेरीची जुळणी गुणवत्ता (0-1) वापरा. | +| proximityRank | Similar to rank but also includes the proximity of the matches. 
| diff --git a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx index 946093ef308b..8b40bdfde4fc 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## सविश्लेषण -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| आवृत्ती | रिलीझ नोट्स | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx index a09668000af7..d44504acb27b 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## सविश्लेषण -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). मॅनिफेस्टसाठी अद्यतनित करण्याच्या महत्त्वाच्या नोंदी आहेत: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## हँडलर्सना कॉल करा -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. कॉल हँडलर्स फक्त दोनपैकी एका प्रकरणात ट्रिगर होतील: जेव्हा निर्दिष्ट केलेल्या फंक्शनला कॉन्ट्रॅक्ट व्यतिरिक्त इतर खात्याद्वारे कॉल केले जाते किंवा जेव्हा ते सॉलिडिटीमध्ये बाह्य म्हणून चिन्हांकित केले जाते आणि त्याच कॉन्ट्रॅक्टमधील दुसर्‍या फंक्शनचा भाग म्हणून कॉल केले जाते. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### कॉल हँडलरची व्याख्या @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### मॅपिंग कार्य -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## ब्लॉक हँडलर -कॉन्ट्रॅक्ट इव्हेंट्स किंवा फंक्शन कॉल्सची सदस्यता घेण्याव्यतिरिक्त, सबग्राफला त्याचा डेटा अद्यतनित करायचा असेल कारण साखळीमध्ये नवीन ब्लॉक्स जोडले जातात. हे साध्य करण्यासाठी सबग्राफ प्रत्येक ब्लॉकनंतर किंवा पूर्व-परिभाषित फिल्टरशी जुळणार्‍या ब्लॉक्सनंतर फंक्शन चालवू शकतो. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
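
As a minimal sketch of the unfiltered case described above (the handler name `handleBlock` is illustrative), a block handler entry in the manifest can be as small as:

```yaml
blockHandlers:
  - handler: handleBlock
```

Without a `filter` field, such a handler runs for every single block, so it is worth keeping its mapping logic cheap.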
### समर्थित फिल्टर @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. ब्लॉक हँडलरसाठी फिल्टरची अनुपस्थिती हे सुनिश्चित करेल की हँडलरला प्रत्येक ब्लॉक म्हटले जाईल. डेटा स्त्रोतामध्ये प्रत्येक फिल्टर प्रकारासाठी फक्त एक ब्लॉक हँडलर असू शकतो. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run.
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### मॅपिंग कार्य -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## ब्लॉक सुरू करा -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
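
For illustration, a time travel query that depends on unpruned history might look like the following sketch (entity and fields borrowed from the Gravatar example; the block number is arbitrary):

```graphql
{
  gravatars(block: { number: 6500000 }) {
    id
    displayName
  }
}
```

If the Subgraph's history has been pruned past that block, a query like this will fail rather than return the historical state.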
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| आवृत्ती | रिलीझ नोट्स | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx index e09a384b8e6d..0b3909e9ff3b 100644 --- a/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/mr/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: युनिट चाचणी फ्रेमवर्क --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
## प्रारंभ करणे @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI पर्याय @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### डेमो सबग्राफ +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### व्हिडिओ ट्यूटोरियल -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### वर्णन करणे() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im आम्ही तिथे जातो - आम्ही आमची पहिली चाचणी तयार केली आहे! 👏 -आता आमच्या चाचण्या चालवण्यासाठी तुम्हाला तुमच्या सबग्राफ रूट फोल्डरमध्ये खालील गोष्टी चालवाव्या लागतील: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## चाचणी कव्हरेज -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
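
Based on the `-c, --coverage` option in the CLI flag list earlier, coverage mode would presumably be invoked from the Subgraph root folder like this:

```sh
graph test -c
```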
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## अतिरिक्त संसाधने -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## अभिप्राय diff --git a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx index 8d85033aeb01..3e34f743a6c0 100644 --- a/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/mr/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## एकाधिक नेटवर्कवर सबग्राफ तैनात करणे +## Deploying the Subgraph to multiple networks -काही प्रकरणांमध्ये, तुम्हाला समान सबग्राफ एकाधिक नेटवर्कवर त्याच्या कोडची नक्कल न करता उपयोजित करायचा असेल. यासह येणारे मुख्य आव्हान हे आहे की या नेटवर्कवरील कराराचे पत्ते वेगळे आहेत. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. 
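
To make the per-network differences concrete, a config file in the `networks.json` shape used by `graph-cli` maps each network name to its own contract parameters. This is only a sketch: the data source name, addresses, and start blocks below are placeholders.

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x1111111111111111111111111111111111111111",
      "startBlock": 6000000
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x2222222222222222222222222222222222222222",
      "startBlock": 3000000
    }
  }
}
```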
### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## सबग्राफ स्टुडिओ सबग्राफ संग्रहण धोरण +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -या धोरणामुळे प्रभावित झालेल्या प्रत्येक सबग्राफला प्रश्नातील आवृत्ती परत आणण्याचा पर्याय आहे. +Every Subgraph affected by this policy has an option to bring the version in question back. -## सबग्राफ आरोग्य तपासत आहे +## Checking Subgraph health -जर सबग्राफ यशस्वीरित्या समक्रमित झाला, तर ते कायमचे चांगले चालत राहण्याचे चांगले चिन्ह आहे. तथापि, नेटवर्कवरील नवीन ट्रिगर्समुळे तुमच्या सबग्राफची चाचणी न केलेली त्रुटी स्थिती येऊ शकते किंवा कार्यप्रदर्शन समस्यांमुळे किंवा नोड ऑपरेटरमधील समस्यांमुळे ते मागे पडू शकते. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph.
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
diff --git a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx index 4769cbc3408b..2319974d45ed 100644 --- a/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/mr/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- विशिष्ट सबग्राफसाठी तुमच्या API की तयार करा आणि व्यवस्थापित करा +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### ग्राफ नेटवर्कसह सबग्राफ सुसंगतता -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- खालीलपैकी कोणतीही वैशिष्ट्ये वापरू नयेत: - - ipfs.cat & ipfs.map - - गैर-घातक त्रुटी - - कलम करणे +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## आलेख प्रमाणीकरण -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
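As noted above, `graph deploy` asks for a version label. A semver-style convention such as `v0.0.1` is an assumption about your own labeling scheme rather than a Studio requirement; under that assumption, a tiny helper for bumping the label before redeploying might look like:

```python
import re

def bump_patch(label: str) -> str:
    """Bump the patch component of a semver-style version label, e.g. v0.0.1 -> v0.0.2."""
    m = re.fullmatch(r"(v?)(\d+)\.(\d+)\.(\d+)", label)
    if m is None:
        raise ValueError(f"not a semver-style label: {label!r}")
    prefix, major, minor, patch = m.groups()
    return f"{prefix}{major}.{minor}.{int(patch) + 1}"

print(bump_patch("v0.0.1"))  # v0.0.2
```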
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## सबग्राफ आवृत्त्यांचे स्वयंचलित संग्रहण -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/mr/subgraphs/developing/developer-faq.mdx b/website/src/pages/mr/subgraphs/developing/developer-faq.mdx index 8578be282aad..4f3e183375b9 100644 --- a/website/src/pages/mr/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/mr/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. सबग्राफ म्हणजे काय? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. मी माझ्या सबग्राफशी संबंधित गिटहब खाते बदलू शकतो का? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -तुम्हाला सबग्राफ पुन्हा तैनात करावा लागेल, परंतु सबग्राफ आयडी (IPFS हॅश) बदलत नसल्यास, त्याला सुरुवातीपासून सिंक करण्याची गरज नाही. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -सबग्राफमध्‍ये, इव्‍हेंट नेहमी ब्लॉकमध्‍ये दिसण्‍याच्‍या क्रमाने संसाधित केले जातात, ते एकाधिक कॉन्ट्रॅक्टमध्‍ये असले किंवा नसले तरीही. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ When new dynamic data source are created, the handlers defined for dynamic data If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? होय! खालील आदेश वापरून पहा, "संस्था/सबग्राफनेम" च्या जागी त्याखालील संस्था प्रकाशित झाली आहे आणि तुमच्या सबग्राफचे नाव: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/mr/subgraphs/developing/introduction.mdx b/website/src/pages/mr/subgraphs/developing/introduction.mdx index 3123dd66f2a7..9b6155152843 100644 --- a/website/src/pages/mr/subgraphs/developing/introduction.mdx +++ b/website/src/pages/mr/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
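The introduction above says dapps use GraphQL to query existing Subgraphs. Large result sets are paged with `first`/`skip` arguments, as in the `someCollection(first: 1000, skip: ...)` pattern shown in the FAQ earlier in this diff; the entity name below is a placeholder. A small sketch that builds the paged query strings:

```python
def page_queries(entity: str, total: int, page_size: int = 1000):
    """Build GraphQL query strings that page through `entity` with first/skip."""
    queries = []
    for skip in range(0, total, page_size):
        queries.append(f"{{ {entity}(first: {page_size}, skip: {skip}) {{ id }} }}")
    return queries

qs = page_queries("someCollection", 2500)
print(qs[1])  # { someCollection(first: 1000, skip: 1000) { id } }
```

Each string in `qs` can then be POSTed to the Subgraph's query endpoint in turn.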
diff --git a/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx index cabf1261970a..b8c2330ca49d 100644 --- a/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- क्युरेटर यापुढे सबग्राफवर सिग्नल करू शकणार नाहीत. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 50c8077f371a..78b641e5ae0a 100644 --- a/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/mr/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: विकेंद्रीकृत नेटवर्कवर सबग्राफ प्रकाशित करणे +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### प्रकाशित सबग्राफसाठी मेटाडेटा अपडेट करत आहे +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a Curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.

-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.

-![Explorer subgraphs](/img/explorer-subgraphs.png)
+![Explorer Subgraphs](/img/explorer-subgraphs.png)

-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/mr/subgraphs/developing/subgraphs.mdx b/website/src/pages/mr/subgraphs/developing/subgraphs.mdx index 737fb1347ada..982e3dd36207 100644 --- a/website/src/pages/mr/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/mr/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: सबग्राफ ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
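The "Index & Query" bullet above notes that anyone can query an indexed Subgraph. In practice that is a single GraphQL-over-HTTP request; a minimal sketch follows, where the gateway URL, API key placeholder, and the `tokens` entity are hypothetical stand-ins, not values from this page:

```typescript
// Sketch: querying a published Subgraph. The gateway URL, API key, and the
// `tokens` entity below are hypothetical placeholders for illustration.
const endpoint =
  "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

// A GraphQL query is sent as a plain JSON POST body: { "query": "..." }
const query = `{
  tokens(first: 5) {
    id
    symbol
  }
}`;

const requestBody: string = JSON.stringify({ query });

// Assumes a fetch-capable runtime (Node 18+ or a browser).
async function querySubgraph(): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: requestBody,
  });
  // GraphQL responses wrap results in a top-level `data` field.
  return (await res.json()).data;
}
```

Any GraphQL client (or plain `curl`) works the same way, since a Subgraph endpoint is just an HTTP URL.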
## Inside a Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.

-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).

## Subgraph Lifecycle

-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

## Subgraph Development

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1.
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.

### Publish to the Network

-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.

-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.

### Add Curation Signal for Indexing

-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing, you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.

#### What is signal?

-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs

-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).

diff --git a/website/src/pages/mr/subgraphs/explorer.mdx b/website/src/pages/mr/subgraphs/explorer.mdx
index 6f30c3ea0ea3..afcc80c29f35 100644
--- a/website/src/pages/mr/subgraphs/explorer.mdx
+++ b/website/src/pages/mr/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@ title: Graph Explorer
---

-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).

## Overview

-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer

@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi

### Subgraphs Page

-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:

-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:

- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:

-- Signal/Un-signal on subgraphs
+- Signal/Un-signal on Subgraphs
- View more details such as charts, the current deployment ID, and other metadata
-- Switch versions to explore past iterations of the subgraph
-- GraphQL द्वारे सबग्राफ क्वेरी करा
-- Test subgraphs in the playground
-- View the Indexers that are indexing on a certain subgraph
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Subgraph stats (allocations, Curators, etc.)
-- View the entity who published the subgraph
+- View the entity that published the Subgraph

![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)

@@ -53,7 +53,7 @@ On this page, you can see the following:

- Indexers who collected the most query fees
- Indexers with the highest estimated APR

-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.

### Participants Page

@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

![Explorer Image 4](/img/Indexer-Pane.png)

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.

-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.

@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. Curators

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.

In the Curator table below, you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

![Explorer Image 8](/img/Network-Stats.png)

@@ -178,15 +178,15 @@ In this section, you can view the following:

### Subgraphs Tab

-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.

-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

![Explorer Image 11](/img/Subgraphs-Overview.png)

### Indexing Tab

-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.

This section will also include details about your net Indexer rewards and net query fees.
You will see the following metrics:

@@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de

### Curating Tab

-क्युरेशन टॅबमध्ये, तुम्ही सिग्नल करत असलेले सर्व सबग्राफ तुम्हाला सापडतील (अशा प्रकारे तुम्हाला क्वेरी शुल्क प्राप्त करण्यास सक्षम करते). सिग्नलिंगमुळे क्युरेटर्स इंडेक्सर्सना कोणते सबग्राफ मौल्यवान आणि विश्वासार्ह आहेत हे ठळकपणे दाखवू देते, अशा प्रकारे ते इंडेक्स केले जाणे आवश्यक असल्याचे संकेत देते.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.

In this tab, you’ll find an overview of:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/mr/subgraphs/guides/arweave.mdx b/website/src/pages/mr/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..be076ab8f655
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, and files stored on Arweave cannot be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol into a number of different programming languages. For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration only indexes Arweave as a blockchain (blocks and transactions); it does not index the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph Components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3.
AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have listed.
+
+During Subgraph development there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block.
No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions they should provide "" as the `source.owner`

+> The source.owner can be the owner's address, or their public key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transactions receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph.
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token <your-access-token>
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The source.owner can be the user's public key or account address.
+
+### What is the current encryption format?
+
+Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+ +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (i === l) { // 2 octets yet to write (url-safe output still needs these, only the "=" padding is skipped) + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; +} +``` diff --git a/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..2135cd023def --- /dev/null +++
b/website/src/pages/mr/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@ +--- +title: Smart Contract Analysis with Cana CLI +--- + +Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains. + +## Overview + +**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more. + +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of added chains, run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2.
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +Or + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/mr/subgraphs/guides/enums.mdx b/website/src/pages/mr/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..c2f2a41791f3 --- /dev/null +++ b/website/src/pages/mr/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a data type that allows you to define a fixed set of allowed values.
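The same guarantee can be seen outside GraphQL. As an illustrative analogy only (plain TypeScript, not Subgraph code), a string enum restricts a variable to a fixed set of named values:

```typescript
// Illustrative analogy in plain TypeScript: a string enum fixes the set of allowed values.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

// Only the three declared members are valid; assigning a typo like
// 'Orgnalowner' would be rejected at compile time.
let status: TokenStatus = TokenStatus.OriginalOwner
console.log(status) // OriginalOwner
```

GraphQL enums give you the analogous check at the schema level: the indexer rejects any value outside the declared set.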
+ +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned.
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. + +### With Enums + +Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. + +Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. + +## Defining Enums for NFT Marketplaces + +> Note: The following guide uses the CryptoCoven NFT smart contract. + +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: + +```gql +# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint) +enum Marketplace { + OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace + OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace + SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace + LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace + # ...and other marketplaces +} +``` + +## Using Enums for NFT Marketplaces + +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. + +For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
+ +### Implementing a Function for NFT Marketplaces + +Here's how you can implement a function to retrieve the marketplace name from the enum as a string: + +```ts +export function getMarketplaceName(marketplace: Marketplace): string { + // Using if-else statements to map the enum value to a string + if (marketplace === Marketplace.OpenSeaV1) { + return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation + } else if (marketplace === Marketplace.OpenSeaV2) { + return 'OpenSeaV2' + } else if (marketplace === Marketplace.SeaPort) { + return 'SeaPort' // If the marketplace is SeaPort, return its string representation + } else if (marketplace === Marketplace.LooksRare) { + return 'LooksRare' // If the marketplace is LooksRare, return its string representation + // ... and other marketplaces + } + return 'Unknown' // Fallback so every code path returns a value +} +``` + +## Best Practices for Using Enums + +- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability. +- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth. +- **Documentation:** Add comments to enums to clarify their purpose and usage. + +## Using Enums in Queries + +Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values. + +**Specifics** + +- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces. +- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate. + +### Sample Queries + +#### Query 1: Account With The Highest NFT Marketplace Interactions + +This query does the following: + +- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/mr/subgraphs/guides/grafting.mdx b/website/src/pages/mr/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..1fd0c6d49932 --- /dev/null +++ b/website/src/pages/mr/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@ +--- +title: Replace a Contract and Keep its History With Grafting +--- + +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. + +## What is Grafting? + +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +For more information, you can check: + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. + +## Important Note on Grafting When Upgrading to the Network + +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network + +### Why Is This Important?
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the ABI and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting. + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2.
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It returns something like this: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + } + ] + } +} +``` + +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. + +## Deploying the Grafting Subgraph + +The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It should return the following: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + }, + { + "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000", + "amount": "0", + "when": "1716429732" + } + ] + } +} +``` + +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. + +Congrats! You have successfully grafted a Subgraph onto another Subgraph.
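As an optional sanity check, the standard `_meta` field that every Subgraph exposes reports the latest indexed block; after grafting, it should be at or beyond the graft block (5956000 in this example):

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```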
+ +## Additional Resources + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml) + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results. + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/mr/subgraphs/guides/near.mdx b/website/src/pages/mr/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..4a183fca2e16 --- /dev/null +++ b/website/src/pages/mr/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Building Subgraphs on NEAR +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
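NEAR contracts frequently emit their event payloads as stringified JSON in logs, which is why the JSON parsing support matters in practice. As a rough illustration in plain TypeScript (the log format below is hypothetical, and the built-in `JSON.parse` stands in for graph-ts's `json.fromString` used in real mappings):

```typescript
// Hypothetical NEAR log payload: contracts often log stringified JSON events.
const rawLog: string = '{"standard":"nep171","event":"nft_mint","data":[{"owner_id":"alice.near"}]}'

// In a real mapping you would call json.fromString(rawLog) from graph-ts;
// JSON.parse plays that role in this sketch.
const parsed = JSON.parse(rawLog)

const eventName: string = parsed.event // "nft_mint"
const owner: string = parsed.data[0].owner_id // "alice.near"

console.log(`${eventName} by ${owner}`)
```

A mapping would then use these extracted values to create or update entities defined in the schema.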
+ +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### Subgraph Manifest Definition + +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the Assemblyscript mappings +``` + +- NEAR Subgraphs introduce a new `kind` of data source (`near`) +- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the values in the list, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary, the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array<string>, + receiptIds: Array<Bytes>, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array<DataReceiver>, + inputDataIds: Array<CryptoHash>, + actions: Array<ActionValue>, + } + +class BlockHeader { + height: u64, + prevHeight: u64, // Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array<ChunkHeader>, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+ +Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). + +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". + +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: + +```sh +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend on where the Subgraph is being deployed. + +### Subgraph Studio + +```sh +graph auth +graph deploy +``` + +### Local Graph Node (based on default configuration) + +```sh +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +``` + +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component +- Graph Node configured with a Firehose endpoint + +We will provide more information on running the above components soon. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only block and receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/mr/subgraphs/guides/polymarket.mdx b/website/src/pages/mr/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+![Polymarket Playground](/img/Polymarket-playground.png)
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example queries the first five positions from the Polymarket endpoint and logs the result.
+
+### Sample Code from Node.js
+
+```js
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..b6b043fa29f1
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+ +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. + +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+ ) +} +``` + +### Step 4: Use the Server Component + +1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. +2. Render the component: + +```javascript +import ServerComponent from './components/ServerComponent' + +export default function Home() { + return ( +
+    <div>
+      <ServerComponent />
+    </div>
+ ) +} +``` + +### Step 5: Run and Test Our Dapp + +Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. + +![Server-side rendering](/img/api-key-server-side-rendering.png) + +### Conclusion + +By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..52da13032a9c --- /dev/null +++ b/website/src/pages/mr/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
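+
+A dependent Subgraph declares a source Subgraph as a data source in its manifest. The following is an illustrative sketch only; the deployment ID, entity, and handler names are hypothetical:
+
+```yaml
+dataSources:
+  - kind: subgraph
+    name: SourceSubgraph
+    network: mainnet
+    source:
+      address: 'QmSourceSubgraphDeploymentId' # hypothetical deployment ID of the source Subgraph
+      startBlock: 0
+    mapping:
+      apiVersion: 0.0.7
+      language: wasm/assemblyscript
+      entities:
+        - Block
+      handlers:
+        - handler: handleBlock # runs when the source Subgraph stores a matching entity
+          entity: Block
+      file: ./src/mapping.ts
+```
+
+Here the trigger is an entity written by the source Subgraph rather than an onchain event, which is what lets the dependent Subgraph react to the source's data.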
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use additional aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can’t use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
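+
+Once the composed Subgraph is synced, it can be queried like any other Subgraph. As a sketch, assuming a hypothetical `blockStats` entity that merges the three sources, a query might look like:
+
+```graphql
+{
+  blockStats(first: 5, orderBy: number, orderDirection: desc) {
+    id
+    number
+    timestamp # from the block time source Subgraph
+    cost # from the block cost source Subgraph
+    size # from the block size source Subgraph
+  }
+}
+```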
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..7007c6021580
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The mismatch between the quick changes needed for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate, when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again go back to 1, otherwise: Hooray!
+
+It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3.
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking whom?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After making the changes, I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..55cf87cd0af1
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity, and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..d687370b93e6
--- /dev/null
+++ b/website/src/pages/mr/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash <your-subgraph-ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/mr/subgraphs/querying/best-practices.mdx b/website/src/pages/mr/subgraphs/querying/best-practices.mdx
index 484f1a2d891a..db52212384b1 100644
--- a/website/src/pages/mr/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/mr/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Querying Best Practices

The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.

-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.

---

@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi

However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:

-- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result

@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `

### Use a single query to request multiple records

-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`

Example of inefficient querying:

diff --git a/website/src/pages/mr/subgraphs/querying/from-an-application.mdx b/website/src/pages/mr/subgraphs/querying/from-an-application.mdx
index f867964cb39b..521a7717da49 100644
--- a/website/src/pages/mr/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/mr/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Querying from an Application
+sidebarTitle: Querying from an App
---

Learn how to query The Graph from your application.

@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
```

@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
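Either endpoint speaks plain GraphQL over HTTP, so a query is just a JSON POST. The sketch below shows the request shape using only the Python standard library; the endpoint, API key, and `tokens` field are placeholders, not a live Subgraph:

```python
import json
from urllib import request

def build_graphql_request(endpoint, query, variables=None):
    """Package a GraphQL query as a standard JSON POST request."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")
    return request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint: substitute your own API key and Subgraph ID.
endpoint = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"
query = "{ tokens(first: 5) { id } }"

req = build_graphql_request(endpoint, query)
# To actually send it: data = json.load(request.urlopen(req))
```

A dedicated client such as `graph-client` layers retries, pagination, and typed results on top of this same request shape.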
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### 1 ली पायरी @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### 1 ली पायरी @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### 1 ली पायरी diff --git a/website/src/pages/mr/subgraphs/querying/graph-client/README.md b/website/src/pages/mr/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d39228c34c0f 100644 --- a/website/src/pages/mr/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/mr/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## प्रारंभ करणे You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -308,8 +308,8 @@ sources:
`highestValue`

- - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.

This is useful if you want to choose the most synced data for the same Subgraph over different indexers/sources.

diff --git a/website/src/pages/mr/subgraphs/querying/graph-client/live.md b/website/src/pages/mr/subgraphs/querying/graph-client/live.md
index e6f726cb4352..2139013e97d0 100644
--- a/website/src/pages/mr/subgraphs/querying/graph-client/live.md
+++ b/website/src/pages/mr/subgraphs/querying/graph-client/live.md
@@ -2,7 +2,7 @@

Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data.

-## Getting Started
+## प्रारंभ करणे

Start by adding the following configuration to your `.graphclientrc.yml` file:

diff --git a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
index c506e4c260a8..0cb4f07b2393 100644
--- a/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/mr/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.

## What is GraphQL?

-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.

-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.

Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.

Fulltext search operators:

-| Symbol | Operator | वर्णन |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol | Operator | वर्णन |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
+| &#x7c; | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->` | `Follow by` | Specify the distance between two words. |
+| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required). |

#### Examples

@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
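Because each entity type in the schema yields both a singular and a plural root field, a client can derive both query shapes mechanically. The sketch below builds them in Python; the `token`/`tokens` names are hypothetical, standing in for whatever entities your own schema defines:

```python
def entity_queries(singular, plural, ids):
    """Return the singular and plural query shapes generated for an entity type."""
    single = f'{{ {singular}(id: "{ids[0]}") {{ id }} }}'
    # The plural field accepts filters, e.g. id_in to fetch several records at once.
    id_list = ", ".join(f'"{i}"' for i in ids)
    many = f'{{ {plural}(where: {{ id_in: [{id_list}] }}) {{ id }} }}'
    return single, many

single, many = entity_queries("token", "tokens", ["0x1", "0x2", "0x3"])
```

The plural form with an `id_in` filter fetches several records in one round trip instead of issuing one singular query per ID.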
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en

### Subgraph Metadata

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:

```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.

`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
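One practical use of this metadata is a freshness check before trusting query results. The sketch below parses a made-up `_meta` response; the block number, hash, and deployment values are illustrative, not from a real Subgraph:

```python
META_QUERY = "{ _meta { block { number hash timestamp } deployment hasIndexingErrors } }"

# Hypothetical response, shaped like the `_Meta_` object described above.
sample_response = {
    "data": {
        "_meta": {
            "block": {"number": 21000000, "hash": "0xabc...", "timestamp": 1734000000},
            "deployment": "QmQKX...",
            "hasIndexingErrors": False,
        }
    }
}

def is_fresh(response, min_block):
    """Treat the data as usable only if the Subgraph has indexed past
    `min_block` and has encountered no indexing errors."""
    meta = response["data"]["_meta"]
    return meta["block"]["number"] >= min_block and not meta["hasIndexingErrors"]

# is_fresh(sample_response, 20999999) -> True
```

A check like this is a cheap guard against serving stale or error-tainted data, especially when polling across multiple Indexers.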
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/mr/subgraphs/querying/introduction.mdx b/website/src/pages/mr/subgraphs/querying/introduction.mdx index d33c11a8fd26..f395b4ad9b8d 100644 --- a/website/src/pages/mr/subgraphs/querying/introduction.mdx +++ b/website/src/pages/mr/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## सविश्लेषण -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx index 167edacef164..0cd0d779e8bb 100644 --- a/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/mr/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## सविश्लेषण -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/mr/subgraphs/querying/python.mdx b/website/src/pages/mr/subgraphs/querying/python.mdx index 020814827402..bfeabae0b868 100644 --- a/website/src/pages/mr/subgraphs/querying/python.mdx +++ b/website/src/pages/mr/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/mr/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published.

Example endpoint that uses Deployment ID:

@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:

## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.

Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`

diff --git a/website/src/pages/mr/subgraphs/quick-start.mdx b/website/src/pages/mr/subgraphs/quick-start.mdx
index 586b37afa265..b14954bc11a4 100644
--- a/website/src/pages/mr/subgraphs/quick-start.mdx
+++ b/website/src/pages/mr/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: क्विक स्टार्ट
---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. आलेख CLI स्थापित करा @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-तुमचा सबग्राफ सुरू करताना काय अपेक्षा करावी याच्या उदाहरणासाठी खालील स्क्रीनशॉट पहा: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -तुमचा सबग्राफ लिहिल्यानंतर, खालील आदेश चालवा: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/mr/substreams/developing/dev-container.mdx b/website/src/pages/mr/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/mr/substreams/developing/dev-container.mdx +++ b/website/src/pages/mr/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/mr/substreams/developing/sinks.mdx b/website/src/pages/mr/substreams/developing/sinks.mdx index 5bea8dabfb0f..5df4eb5fa3eb 100644 --- a/website/src/pages/mr/substreams/developing/sinks.mdx +++ b/website/src/pages/mr/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/mr/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/mr/substreams/developing/solana/account-changes.mdx index 6170435942de..e37f80ee352a 100644 --- a/website/src/pages/mr/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/mr/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/mr/substreams/developing/solana/transactions.mdx b/website/src/pages/mr/substreams/developing/solana/transactions.mdx index 42b225167fb7..79dd7c6b24ea 100644 --- a/website/src/pages/mr/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/mr/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### सबग्राफ 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/mr/substreams/introduction.mdx b/website/src/pages/mr/substreams/introduction.mdx index f29771cc4a59..f1625a7f69dc 100644 --- a/website/src/pages/mr/substreams/introduction.mdx +++ b/website/src/pages/mr/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Speed up Subgraph indexing with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/mr/substreams/publishing.mdx b/website/src/pages/mr/substreams/publishing.mdx index ea2846d412ae..b662fc083c98 100644 --- a/website/src/pages/mr/substreams/publishing.mdx +++ b/website/src/pages/mr/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry.
![success](/img/5_success.png) diff --git a/website/src/pages/mr/supported-networks.mdx b/website/src/pages/mr/supported-networks.mdx index 02e45c66ca42..ef2c28393033 100644 --- a/website/src/pages/mr/supported-networks.mdx +++ b/website/src/pages/mr/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Supported Networks hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/mr/token-api/_meta-titles.json b/website/src/pages/mr/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/mr/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/mr/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
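As a sketch, a client call to the balances endpoint can look like the following TypeScript. This is a hedged illustration, not an official SDK: the base URL (`https://token-api.thegraph.com`), the `/balances/evm/{address}` path, the optional `network_id` parameter, and the `Authorization: Bearer` header follow the Token API FAQ elsewhere in these docs, while `balancesUrl` and `getBalances` are illustrative names.

```typescript
// Sketch of a Token API balances call (assumed base URL and parameter
// names per the Token API FAQ; helper names are illustrative).
const BASE_URL = "https://token-api.thegraph.com";

// Build the request URL; addresses are 40 hex characters, with or
// without the 0x prefix, and are treated case-insensitively.
function balancesUrl(address: string, networkId?: string): string {
  const clean = address.toLowerCase().replace(/^0x/, "");
  if (!/^[0-9a-f]{40}$/.test(clean)) {
    throw new Error(`invalid EVM address: ${address}`);
  }
  const url = new URL(`${BASE_URL}/balances/evm/0x${clean}`);
  if (networkId) url.searchParams.set("network_id", networkId);
  return url.toString();
}

// Fetch balances with a JWT from The Graph Market (requires Node 18+
// for the built-in fetch). Results arrive wrapped in a top-level
// `data` array, even for a single holding.
async function getBalances(address: string, jwt: string, networkId?: string) {
  const res = await fetch(balancesUrl(address, networkId), {
    headers: { Authorization: `Bearer ${jwt}`, Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`Token API returned HTTP ${res.status}`);
  const body = await res.json();
  return body.data;
}
```

Omitting `networkId` leaves the network selection to the API's default (Ethereum mainnet, per the FAQ).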
diff --git a/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/mr/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. diff --git a/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/mr/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV Prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/mr/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
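Because amounts and supplies are returned as strings to avoid precision loss (see the Token API FAQ later in this section), a small BigInt-based helper can turn a raw value plus the token's `decimals` field into a human-readable number. `formatUnits` is an illustrative helper, not part of the API:

```typescript
// Illustrative helper: convert a raw string amount (e.g. a balance or
// circulating supply) into a decimal string using the token's
// `decimals` field. BigInt avoids the precision loss Number would
// incur for values beyond 2^53. Assumes a non-negative integer input.
function formatUnits(raw: string, decimals: number): string {
  const value = BigInt(raw);
  const base = 10n ** BigInt(decimals);
  const whole = (value / base).toString();
  // Pad the remainder to full width, then drop trailing zeros.
  const frac = (value % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : whole;
}
```

For example, a raw balance of `"1234500000000000000"` with 18 decimals formats as `1.2345`.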
diff --git a/website/src/pages/mr/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/mr/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/mr/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/mr/token-api/faq.mdx b/website/src/pages/mr/token-api/faq.mdx new file mode 100644 index 000000000000..d7683aa77768 --- /dev/null +++ b/website/src/pages/mr/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK?
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
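The checklist above can be folded into a small pre-flight guard that builds the header and rejects the most common mistakes before a request is ever sent. `authHeader` is a hypothetical helper name, not part of any official SDK:

```typescript
// Hypothetical pre-flight guard for the Authorization header, encoding
// the common 401/403 causes listed above: a missing token, or a token
// that already carries the "Bearer " prefix (which would be doubled).
function authHeader(jwt: string): { Authorization: string } {
  const token = jwt.trim();
  if (token.length === 0) {
    throw new Error("missing JWT: generate one on The Graph Market");
  }
  if (token.toLowerCase().startsWith("bearer ")) {
    throw new Error('pass the raw JWT only; the "Bearer " prefix is added here');
  }
  return { Authorization: `Bearer ${token}` };
}
```

A caller then spreads the result into its request headers, e.g. `{ ...authHeader(jwt), Accept: "application/json" }`.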
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/mr/token-api/mcp/claude.mdx b/website/src/pages/mr/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..bd3781333707 --- /dev/null +++ b/website/src/pages/mr/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file.
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/mr/token-api/mcp/cline.mdx b/website/src/pages/mr/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..970df7997b52 --- /dev/null +++ b/website/src/pages/mr/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## कॉन्फिगरेशन + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/mr/token-api/mcp/cursor.mdx b/website/src/pages/mr/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..a243820cf998 --- /dev/null +++ b/website/src/pages/mr/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## कॉन्फिगरेशन + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try using the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/mr/token-api/monitoring/get-health.mdx b/website/src/pages/mr/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/mr/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/mr/token-api/monitoring/get-networks.mdx b/website/src/pages/mr/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/mr/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/mr/token-api/monitoring/get-version.mdx b/website/src/pages/mr/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/mr/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/mr/token-api/quick-start.mdx b/website/src/pages/mr/token-api/quick-start.mdx new file mode 100644 index 000000000000..427bd0f2a59b --- /dev/null +++ b/website/src/pages/mr/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: क्विक स्टार्ट +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/nl/about.mdx b/website/src/pages/nl/about.mdx index ab5a9033cdac..7fde3b3d507d 100644 --- a/website/src/pages/nl/about.mdx +++ b/website/src/pages/nl/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The flow follows these steps: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx index ee8b300ccb87..0e19e7062073 100644 --- a/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/nl/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Veiligheid overgenomen van Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph gemeenschap heeft vorig jaar besloten om door te gaan met Arbitrum na de uitkomst van [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussie. 
@@ -39,7 +39,7 @@ Om gebruik te maken van The Graph op L2, gebruik deze keuzeschakelaar om te wiss ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Als een subgraph ontwikkelaar, data consument, Indexer, Curator, of Delegator, wat moet ik nu doen? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Alles is grondig getest, en een eventualiteiten plan is gemaakt en klaargezet voor een veilige en naadloze transitie. Details kunnen [hier](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) gevonden worden. -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx index 2c7df434e45c..846ddd61273d 100644 --- a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con De L2 Transfer Tools gebruiken Arbitrum's eigen mechanismen op berichten te sturen van L1 naar L2. 
Dit mechanisme heet een "retryable ticket" en is gebruikt door alle eigen token bruggen, inclusief de Arbitrum GRT brug. Je kunt meer lezen over retryable tickets in de [Arbiturm docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Wanneer je jouw activa (subgraph, inzet, delegatie of curatie) overdraagt naar L2, wordt er een bericht via de Arbitrum GRT-brug gestuurd dat een herhaalbaar ticket in L2 aanmaakt. De overdrachtstool bevat een bepaalde hoeveelheid ETH in de transactie, die gebruikt wordt om 1) te betalen voor de creatie van de ticket en 2) te betalen voor de gas voor de uitvoer van de ticket in L2. Omdat de gasprijzen kunnen variëren in de tijd tot het ticket gereed is om in L2 uit te voeren, is het mogelijk dat deze automatische uitvoerpoging mislukt. Als dat gebeurt, zal de Arbitrum-brug het herhaalbare ticket tot 7 dagen lang actief houden, en iedereen kan proberen het ticket te "inlossen" (wat een portemonnee met wat ETH dat naar Arbitrum is overgebracht, vereist). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Dit is wat we de "Bevestigen"-stap noemen in alle overdrachtstools - deze zal in de meeste gevallen automatisch worden uitgevoerd, omdat de automatische uitvoering meestal succesvol is, maar het is belangrijk dat je terugkeert om te controleren of het is gelukt. 
Als het niet lukt en er zijn geen succesvolle herhaalpogingen in 7 dagen, zal de Arbitrum-brug het ticket verwerpen, en je activa (subgraph, inzet, delegatie of curatie) zullen verloren gaan en kunnen niet worden hersteld. De kernontwikkelaars van The Graph hebben een bewakingssysteem om deze situaties te detecteren en proberen de tickets in te lossen voordat het te laat is, maar uiteindelijk ben jij verantwoordelijk om ervoor te zorgen dat je overdracht op tijd is voltooid. Als je problemen hebt met het bevestigen van je transactie, neem dan contact op via [dit formulier](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) en de kernontwikkelaars zullen er zijn om je te helpen. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. + ### Ik ben mijn delegatie/inzet/curatie overdracht begonnen en ik ben niet zeker of deze door is gekomen naar L2, hoe kan ik bevestigen dat deze correct is overgedragen? @@ -36,43 +36,43 @@ Als je de L1 transactie-hash hebt (die je kunt vinden door naar de recente trans ## Subgraph Overdracht -### Hoe verplaats ik mijn subgraphs?
+### How do I transfer my Subgraph? -Om je subgraph te verplaatsen, moet je de volgende stappen volgen: +To transfer your Subgraph, you will need to complete the following steps: 1. Start de overdracht op het Ethereum mainnet 2. Wacht 20 minuten op bevestiging -3. Bevestig subgraph overdracht op Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Maak het publiceren van subrgaph op Arbitrum af +4. Finish publishing Subgraph on Arbitrum 5. Update Query URL (aanbevolen) -\*Let op dat je de overdracht binnen 7 dagen moet bevestigen, anders kan je subgraph verloren gaan. In de meeste gevallen zal deze stap automatisch verlopen, maar een handmatige bevestiging kan nodig zijn als er een gasprijsstijging is op Arbitrum. Als er tijdens dit proces problemen zijn, zijn er bronnen beschikbaar om te helpen: neem contact op met de ondersteuning via support@thegraph.com of op [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Waarvandaan moet ik mijn overdracht vanaf starten? -Je kan je overdracht starten vanaf de [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) of elke subgraph details pagina. Klik de "Transfer Subgraph" knop in de subgraph details pagina om de overdracht te starten. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. 
-### Hoe lang moet ik wachten to mijn subrgaph overgedragen is +### How long do I need to wait until my Subgraph is transferred De overdracht duurt ongeveer 20 minuten. De Arbitrum brug werkt momenteel op de achtergrond om de brug overdracht automatisch te laten voltooien. In sommige gevallen kunnen gaskosten pieken en zul je de overdracht opnieuw moeten bevestigen. -### Is mijn subgraph nog te ontdekken nadat ik het naar L2 overgedragen heb? +### Will my Subgraph still be discoverable after I transfer it to L2? -Jouw subgraph zal alleen te ontdekken zijn op het netwerk waarnaar deze gepubliceerd is. Bijvoorbeeld, als jouw subgraph gepubliceerd is op Arbitrum One, dan kan je deze alleen vinden via de Explorer op Arbitrum One en zul je deze niet kunnen vinden op Ethereum. Zorg ervoor dat je Arbitrum One hebt geselecteerd in de netwerkschakelaar bovenaan de pagina om er zeker van te zijn dat je op het juiste netwerk bent.  Na de overdracht zal de L1 subgraph als verouderd worden weergegeven. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Moet mijn subgraph gepubliceerd zijn om deze te kunnen overdragen? +### Does my Subgraph need to be published to transfer it? -Om gebruik te maken van de subgraph transfer tool, moet jouw subgraph al gepubliceerd zijn op het Ethereum mainnet en moet het enige curatie-signalen hebben die eigendom zijn van de wallet die de subgraph bezit. Als jouw subgraph nog niet is gepubliceerd, wordt het aanbevolen om het direct op Arbitrum One te publiceren - de bijbehorende gas fees zullen aanzienlijk lager zijn. 
Als je een gepubliceerde subgraph wilt overdragen maar het eigenaarsaccount heeft nog geen enkel curatie-signalen, kun je een klein bedrag signaleren (bv.: 1 GRT) vanaf dat account; zorg ervoor dat je "auto-migrating" signalen kiest. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Wat gebeurt er met de Ethereum mainnet versie van mijn subgraph nadat ik overdraag naar Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Nadat je je subgraph naar Arbitrum hebt overgezet, zal de versie op het Ethereum mainnet als verouderd worden beschouwd. We raden aan om je query URL binnen 48 uur bij te werken. Er is echter een overgangsperiode waardoor je mainnet URL nog steeds werkt, zodat ondersteuning voor externe dapps kan worden bijgewerkt. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Nadat ik overgedragen heb, moet ik opnieuw publiceren op Arbitrum? @@ -80,21 +80,21 @@ Na de overdracht periode van 20 minuten, zul je de overdracht moeten bevestigen ### Zal mijn eindpunt downtime ervaren tijdens het opnieuw publiceren? 
-Het is onwaarschijnlijk, maar mogelijk om een korte downtime te ervaren afhankelijk van welke Indexers de subgraph op L1 ondersteunen en of zij blijven indexen totdat de subgraph volledig ondersteund wordt op L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is het publiceren en versiebeheer hetzelfde op L2 als Ethereum mainnet? -Ja. Selecteer Arbiturm One als jou gepubliceerde netwerk tijdens het publiceren in Subrgaph Studio. In de studio, de laatste endpoint die beschikbaar is zal wijzen naar de meest recentelijk bijgewerkte versie van de subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Zal mijn subgraphs curatie mee verplaatsen met mijn subgraph? +### Will my Subgraph's curation move with my Subgraph? -Als je gekozen hebt voor auto-migrating signal, dan zal 100% van je eigen curatie mee verplaatsen met jouw subgraph naar Arbitrum One. Alle curatie signalen van de subgraph zullen worden omgezet naar GRT tijdens de overdracht en alle GRT die corresponderen met jouw curatie signaal zullen worden gebruikt om signalen te minten op de L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Andere curators kunnen kiezen of ze hun deel van GRT kunnen opnemen, of overdragen naar L2 om signalen te minten op dezelfde subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. 
-### Kan ik nadat ik mijn subgraph overgedragen heb deze weer terug overdragen naar Ethereum mainnet? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Wanneer overgedragen, zal jouw Ethereum mainnet versie van deze subgraph als verouderd worden beschouwd. Als je terug wilt gaan naar het mainnet, zul je deze opnieuw moeten implementeren en publiceren op het mainnet. Echter, het wordt sterk afgeraden om terug naar het Ethereum mainnet over te dragen gezien index beloningen uiteindelijk op Arbitrum One zullen worden verdeeld. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Waarom heb ik gebrugd ETH nodig om mijn transactie te voltooien? @@ -206,19 +206,19 @@ Om je curatie over te dragen, moet je de volgende stappen volgen: \*indien nodig - bv. als je een contract adres gebruikt hebt. -### Hoe weet ik of de subgraph die ik cureer verplaatst is naar L2? +### How will I know if the Subgraph I curated has moved to L2? -Bij het bekijken van de details pagina van de subgraph zal er een banner verschijnen om je te laten weten dat deze subgraph is overgedragen. Je kunt de instructies volgen om je curatie over te zetten. Deze informatie is ook te vinden op de detailspagina van elke subgraph die is overgezet. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Wat als ik niet mijn curatie wil overdragen naar L2? -Wanneer een subgraph is verouderd, heb je de optie om je signaal terug te trekken. 
Op dezelfde manier, als een subgraph naar L2 is verhuisd, kun je ervoor kiezen om je signaal op het Ethereum-mainnet terug te trekken of het signaal naar L2 te sturen. +When a Subgraph is deprecated, you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal on Ethereum mainnet or send the signal to L2. ### Hoe weet ik of mijn curatie succesvol is overgedragen? Signaal details zullen toegankelijk zijn via Explorer ongeveer 20 minuten nadat de L2 transfer tool is gestart. -### Kan ik mijn curatie overdragen op meer dan een subgraph per keer? +### Can I transfer my curation on more than one Subgraph at a time? Op dit moment is er geen bulk overdracht optie. @@ -266,7 +266,7 @@ Het duurt ongeveer 20 minuten voordat de L2-overdrachtstool je inzet heeft overg ### Moet ik indexeren op Arbitrum voordat ik mijn inzet overdraag? -Je kunt je inzet effectief overdragen voordat je indexing opzet, maar je zult geen beloningen kunnen claimen op L2 totdat je toewijst aan subgraphs op L2, ze indexeert en POI's presenteert. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Kunnen Delegators hun delegatie overdragen voordat ik mijn index inzet overdraag? diff --git a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx index 67a7011010e7..d8828c547837 100644 --- a/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/nl/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph heeft het eenvoudig gemaakt om naar L2 op Arbitrum One over te stappen Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/).
The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Hoe zet je je subgraph over naar Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Voordelen van het overzetten van uw subgraphs +## Benefits of transferring your Subgraphs De community en ontwikkelaars van The Graph hebben [zich voorbereid](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) op de transitie naar Arbitrum gedurende het afgelopen jaar. Arbitrum, een layer 2 of "L2" blockchain, erft de beveiliging van Ethereum maar biedt aanzienlijk lagere gas fees. -Wanneer je je subgraph publiceert of bijwerkt naar the Graph Network, interacteer je met smart contracts op het protocol en dit vereist het betalen van gas met ETH. Door je subgraphs naar Arbitrum te verplaatsen, zullen eventuele toekomstige updates aan de subgraph veel lagere gas fees vereisen. De lagere kosten, en het feit dat de curatie bonding curves op L2 vlak zijn, maken het ook makkelijker voor andere curatoren om te cureren op uw subgraph, waardoor de beloningen voor indexeerders op uw subgraph toenemen. Deze omgeving met lagere kosten maakt het ook goedkoper voor indexeerders om de subgraph te indexeren en query's te beantwoorden. Indexeringsbeloningen zullen op Arbitrum toenemen en op Ethereum mainnet afnemen in de komden maanden, dus meer en meer indexeerders zullen hun GRT overzetten en hun operaties op L2 opzetten. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. 
This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's +## Understanding what happens with signal, your L1 Subgraph and query URLs -Het overzetten van een subgraph naar Arbitrum gebruikt de Arbitrum GRT brug, die op zijn beurt de natuurlijke Arbitrum brug gebruikt, om de subgraph naar L2 te sturen. De "transfer" zal de subgraph op mainnet verwijderen en de informatie versturen om de subgraph op L2 opnieuw te creëren met de bridge. Het zal ook de gesignaleerde GRT van de eigenaar van de subgraph bevatten, wat meer dan nul moet zijn voor de brug om de overdracht te accepteren. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Wanneer je kiest om de subgraph over te dragen, zal dit alle curatie van de subgraph omzetten in GRT. Dit staat gelijk aan het "degraderen" van de subgraph op mainnet. De GRT die overeenkomt met je curatie zal samen met de subgraph naar L2 worden gestuurd, waar ze zullen worden gebruikt om signaal namens u te munten. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. 
-Andere curatoren kunnen kiezen of ze hun fractie van GRT willen opnemen, of het ook naar L2 willen overzetten om signaal op dezelfde subgraph te munten. Als een eigenaar van een subgraph hun subgraph niet naar L2 overzet en handmatig verwijderd via een contract call, dan zullen curatoren worden genotificeerd en zullen ze in staat zijn om hun curatie op te nemen. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Zodra de subgraph is overgedragen, aangezien alle curatie is omgezet in GRT, zullen indexeerders geen beloningen meer ontvangen voor het indexeren van de subgraph. Er zullen echter indexeerders zijn die 1) overgedragen subgraphs 24 uur blijven ondersteunen, en 2) onmiddelijk beginnen met het indexeren van de subgraph op L2. Aangezien deze indexeerders de subgraph al hebben geïndexeerd, zou er geen noodzaak moeten zijn om te wachten tot de subgraph is gesynchroniseerd, en het zal mogelijk zijn om de L2 subgraph bijna onmiddelijk te queryen. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Query's naar de L2 subgraph moeten worden gedaan naar een andere URL (op `arbitrum-gateway.thegraph.com`), maar het L1 URL zal minimaal 48 uur blijven werken. 
Daarna zal de L1 gateway query's doorsturen naar de L2 gateway (voor enige tijd), maar dit zal latentie toevoegen dus het wordt aanbevolen om al uw query's zo snel mogelijk naar de nieuwe URL over te schakelen. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Jouw L2 wallet kiezen -Wanneer je je subgraph op mainnet publiceerde, gebruikte je een verbonden wallet om de subgraph te creëren, en deze wallet bezit de NFT die deze subgraph vertegenwoordigt en dit zorgt er voor dat je updates kunt publiceren. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Bij het overzetten van de subgraph naar Arbitrum, kunt u een andere wallet kiezen die deze subgraph NFT op L2 zal bezitten. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Als je een "reguliere" wallet gebruikt zoals MetaMask (een Externally Owned Account of EOA, d.w.z. een wallet die geen smart contract is), dan is dit optioneel en wordt het aanbevolen om dezelfde wallet te gebruiken als in L1. -Als je een smart contract wallet gebruikt, zoals een multisig (bijv. een Safe) dan is het kiezen van een ander L2 wallet adres verplicht, aangezien het waarschijnlijk is dat de multisig alleen op mainnet bestaat en je geen transacties op Arbitrum kunt maken met deze wallet. Als je een smart contract wallet of multisig wilt blijven gebruiken, maak dan een nieuwe wallet aan op Arbitrum en gebruik het adres ervan als de L2 eigenaar van jouw subgraph. +If you're using a smart contract wallet, like a multisig (e.g. 
a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Het is erg belangrijk om een wallet adres te gebruiken dat u controleert, en dat transacties op Arbitrum kan maken. Anders zal de subgraph verloren gaan en kan niet worden hersteld.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Voorbereiden op de overdracht: ETH verplaatsen van L1 naar L2 -Het overzetten van de subgraph houdt in dat je een transactie verstuurt via de brug, en vervolgens een andere transactie uitvoert op Arbitrum. De eerste transactie gebruikt ETH op mainnet, en bevat wat ETH om te betalen voor gas wanneer het op L2 wordt ontvangen. Echter, als dit onvoldoende is, zul je de transactie opnieuw moeten proberen en betalen voor het gas direct op L2 (dit is "Stap 3: De overdracht bevestigen" hieronder). Deze stap **moet worden uitgevoerd binnen 7 dagen na het starten van de overdracht**. Bovendien, de tweede transactie ("Stap 4: De overdracht op L2 afronden") zal direct op Arbitrum worden gedaan. Om deze redenen, zul je wat ETH nodig hebben op een Arbitrum wallet. Als je een multisig of smart contract wallet gebruikt, zal de ETH in de reguliere (EOA) wallet moeten zijn die je gebruikt om de transacties uit te voeren, niet op de multisig wallet zelf. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. 
However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Je kunt ETH kopen op sommige exchanges en direct naar Arbitrum opnemen, of je kunt de Arbitrum bridge gebruiken om ETH van een mainnet wallet naar L2 te sturen: [bridge.arbitrum.io](http://bridge.arbitrum.io). Aangezien de gasprijzen op Arbitrum lager zijn, zou u slechts een kleine hoeveelheid nodig moeten hebben. Het wordt aanbevolen om te beginnen met een lage drempel (e.g. 0.1 ETH) voor uw transactie om te worden goedgekeurd. 
-## Het vinden van de Transfer Tool voor subgraphs +## Finding the Subgraph Transfer Tool -Je kunt de L2 Transfer Tool vinden als je naar de pagina van je subgraph kijkt in de Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -Het is ook beschikbaar in de Explorer als je verbonden bent met de wallet die een subgraph bezit en op de pagina van die subgraph in de Explorer kijkt: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Door op de knop 'Transfer to L2' te klikken, wordt de Transfer Tool geopend waar ## Stap 1: Het transfer proces starten -Voordat je met het transfer proces begint, moet je beslissen welk adres de subgraph op L2 zal bezitten (zie "Je L2 portemonnee kiezen" hierboven), en het wordt sterk aanbevolen om al wat ETH voor gas op Arbitrum te hebben (zie "Voorbereiden op de overdracht: ETH verplaatsen van L1 naar L2" hierboven). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Let ook op dat het overzetten van de subgraph vereist dat je een hoeveelheid signaal groter dan nul op de subgraph hebt met dezelfde account die de subgraph bezit; als je nog geen signaal op de subgraph hebt, moet je een klein beetje curatie toevoegen (een kleine hoeveelheid zoals 1 GRT zou voldoende zijn). +Also please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Na het openen van de Transfer Tool, kun je het adres van de L2 wallet invoeren in het veld "Receiving wallet address" - **zorg ervoor dat je het juiste adres hier invoert**. Door op 'Transfer Subgraph' te klikken, wordt je gevraagd de transactie op je wallet uit te voeren (let op dat er wel wat ETH in je wallet zit om te betalen voor L2 gas); dit zal de transfer initiëren en je L1 subgraph verwijderen (zie "Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's" hierboven voor meer details over wat er achter de schermen gebeurt). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Als je deze stap uitvoert, **zorg ervoor dat je doorgaat tot het voltooien van stap 3 in minder dan 7 dagen, of de subgraph en je signaal GRT zullen verloren gaan.** Dit komt door hoe L1-L2 berichtgeving werkt op Arbitrum: berichten die via de bridge worden verzonden, zijn "retry-able tickets" die binnen 7 dagen uitgevoerd moeten worden, en de initiële uitvoering zou een nieuwe poging nodig kunnen hebben als er pieken zijn in de prijs voor gas op Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
![Start the transfer to L2](/img/startTransferL2.png) -## Stap 2: Wachten tot de transfer van de subgraph naar L2 voltooid is +## Step 2: Waiting for the Subgraph to get to L2 -Nadat je de transfer gestart bent, moet het bericht dat je L1-subgraph naar L2 stuurt, via de Arbitrum brug worden doorgestuurd. Dit duurt ongeveer 20 minuten (de brug wacht tot het mainnet block dat de transactie bevat "veilig" is van potentiële chain reorganisaties). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Zodra deze wachttijd voorbij is, zal Arbitrum proberen de transfer automatisch uit te voeren op de L2 contracten. @@ -80,7 +80,7 @@ Zodra deze wachttijd voorbij is, zal Arbitrum proberen de transfer automatisch u ## Stap 3: De transfer bevestigen -In de meeste gevallen zal deze stap automatisch worden uitgevoerd aangezien de L2 gas kosten die bij stap 1 zijn inbegrepen, voldoende zouden moeten zijn om de transactie die de subgraph op de Arbitrum contracten ontvangt, uit te voeren. In sommige gevallen kan het echter zo zijn dat een piek in de gasprijzen op Arbitrum ervoor zorgt dat deze automatische uitvoering mislukt. In dat geval zal het "ticket" dat je subgraph naar L2 stuurt, in behandeling blijven en is nodig het binnen 7 dagen nogmaals te proberen. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op Arbitrum heeft, je walletnetwerk naar Arbitrum overschakelen en op "Bevestig Transfer" klikken op de transactie opnieuw te proberen. @@ -88,33 +88,33 @@ Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op ## Stap 4: De transfer op L2 afronden -Na de vorige stappen zijn je subgraph en GRT ontvangen op Arbitrum, maar de subgraph is nog niet gepubliceerd. Je moet verbinding maken met de L2 wallet die je hebt gekozen als ontvangende wallet, je walletnetwerk naar Arbitrum overschakelen en op "Publiceer Subgraph" klikken +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Dit zal de subgraph publiceren zodat Indexeerders die op Arbitrum actief zijn, deze kunnen indexeren. Het zal ook curatie signaal munten met de GRT die van L1 zijn overgedragen. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Stap 5: De query-URL bijwerken -Je subgraph is succesvol overgedragen naar Arbitrum! Om query's naar de subgraph te sturen, kun je deze nieuwe URL gebruiken: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Let op dat de subgraph ID op Arbitum anders zal zijn dan degene die je op mainnet had, maar je kunt deze altijd vinden op de Explorer of in de Studio. Zoals hierboven vermeld (zie "Begrijpen wat er gebeurt met signalen, de L1 subgraph en query URL's") zal de oude L1-URL nog een korte tijd worden ondersteund, maar je zou zo snel mogelijk al je query's naar het nieuwe adres moeten overschakelen zodra de subgraph op L2 is gesynchroniseerd. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Hoe je je curatie signaal naar Arbitrum (L2) overzet -## Begrijpen wat er gebeurt met curatie bij subgraph transfers naar L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Wanneer de eigenaar van een subgraph een subgraph naar Arbitrum verplaatst, wordt al het signaal van de subgraph tegelijkertijd omgezet in GRT. Dit is van toepassing op "automatisch gemigreerd" signaal, dus signaal dat niet specifiek is voor een subgraph versie, maar automatisch op de nieuwste versie van de subgraph signaleerd. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -De conversie van signaal naar GRT is hetzelfde als wat zou gebeuren als de eigenaar van de subgraph de subgraph van L1 zou verwijderen.
Wanneer de subgraph wordt verwijderd of verplaatst wordt naar L2, wordt al het curatie signaal tegelijkertijd "verbrand" (met behulp van de curation bonding curve) en wordt de GRT vastgehouden door het GNS smart contract (dat is het contract dat subgraph upgrades en automatisch gemigreerd signaal afhandeld). Elke Curator op die subgraph heeft daarom recht op die GRT naar rato van het aantal aandelen dat ze voor de subgraph hadden. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Een deel van de GRT, dat behoort tot de eigenaar van de subgraph, wordt samen met de subgraph naar L2 gestuurd. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Op dit punt zal de gesignaleerde GRT niet langer query kosten verzamelen, dus curatoren kunnen kiezen om hun GRT op te nemen of het naar dezelfde subgraph op L2 over te dragen, waar het gebruikt kan worden om nieuw curatie signaal te creëren. Er is geen haast bij, aangezien de GRT voor onbepaalde tijd kan worden bewaard en iedereen krijgt een hoeveelheid naar rato van hun aandelen, ongeacht wanneer ze het doen. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this, as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
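The pro-rata claim described above can be sketched numerically. This illustrates only the "proportional to shares" step, with made-up numbers; it does not model the bonding-curve pricing that determines how much GRT the burn actually yields:

```python
def curator_claims(shares: dict[str, float], burned_grt: float) -> dict[str, float]:
    """Split the GRT held by the GNS contract pro rata by curation shares."""
    total_shares = sum(shares.values())
    return {curator: burned_grt * s / total_shares for curator, s in shares.items()}

# Hypothetical example: the burn yielded 1000 GRT, and two Curators
# held 300 and 100 shares on the Subgraph respectively.
claims = curator_claims({"alice": 300.0, "bob": 100.0}, 1000.0)
# alice's claim is 750.0 GRT and bob's is 250.0 GRT
```

Since the split depends only on shares, each Curator's claim is the same regardless of when they withdraw, which is why the guide notes there is no rush.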
## Jouw L2 wallet kiezen @@ -130,9 +130,9 @@ Als je een smart contract wallet gebruikt, zoals een multisig (bijv. een Safe) d Voordat je de transfer start, moet je beslissen welk wallet adres de curatie op L2 zal bezitting (zie "De L2 wallet kiezen" hierboven) en wordt het aanbevolen om al wat ETH voor gas op Arbitrum te hebben voor het geval je de uitvoering van het bericht op L2 opnieuw moet uitvoeren. Je kunt ETH kopen op sommige beurzen en deze rechstreeks naar je Arbitrum wallet sturen, of je kunt de Arbitrum bridge gebruiken om ETH van een mainnet wallet naar L2 te sturen: [bridge.arbitrum.io](http://bridge.arbitrum.io) - aangezien de gasprijzen op Arbitrum zo laag zijn, heb je waarschijnlijk maar een kleine hoeveelheid nodig, 0.01 ETH is waarschijnlijk meer dan genoeg. -Als een subgraph waar je curatie signaal op hebt naar L2 is verstuurd, zie je een bericht op de Explorer die je verteld dat je curatie hebt op een subgraph die een transfer heeft gemaakt naar L2. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Wanneer je naar de subgraph pagina kijkt, kun je ervoor kiezen om de curatie op te nemen of over te dragen naar L2. Door op "Transfer Signal to Arbitrum" te klikken, worden de Transfer Tools geopend. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Als dit het geval is, moet je verbinding maken met een L2 wallet die wat ETH op ## Jouw curatie opnemen op L1 -Als je je GRT liever niet naar L2 stuurt, of als je de GRT handmatig over de brug wilt sturen, kunt je je gecureerde GRT op L1 opnemen. Kies op de banner op de subgraph pagina "Withdraw Signal" en bevestig de transactie; de GRT wordt naar uw Curator adres gestuurd. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/nl/archived/sunrise.mdx b/website/src/pages/nl/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/nl/archived/sunrise.mdx +++ b/website/src/pages/nl/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
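The retirement criteria in the FAQ above (support continues until at least three other Indexers consistently serve a Subgraph, and stops after 30 days without queries) can be sketched as a simple predicate. This is an illustrative model only; the function name and inputs are hypothetical and not part of any Graph Protocol tooling:

```python
from datetime import datetime, timedelta

# Illustrative sketch only: the function name and inputs are hypothetical,
# not part of any Graph Protocol tooling. It models the two retirement
# criteria described above: at least three other Indexers consistently
# serving a Subgraph, or no queries in the last 30 days.
def upgrade_indexer_supports(other_healthy_indexers: int,
                             last_queried: datetime,
                             now: datetime) -> bool:
    if other_healthy_indexers >= 3:
        # Enough independent Indexers serve the Subgraph; the fallback retires.
        return False
    if now - last_queried > timedelta(days=30):
        # No demand in the last 30 days; support is withdrawn.
        return False
    return True

now = datetime(2024, 6, 1)
print(upgrade_indexer_supports(1, datetime(2024, 5, 20), now))  # True: under-served and recently queried
print(upgrade_indexer_supports(3, datetime(2024, 5, 20), now))  # False: three other Indexers already serve it
```

The two checks are independent: even a well-queried Subgraph drops off the upgrade Indexer once enough other Indexers serve it reliably.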
diff --git a/website/src/pages/nl/global.json b/website/src/pages/nl/global.json index c90c8a637061..cbe24cf340a5 100644 --- a/website/src/pages/nl/global.json +++ b/website/src/pages/nl/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgraphs", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/nl/index.json b/website/src/pages/nl/index.json index c6000a7b4c14..200a19192e1c 100644 --- a/website/src/pages/nl/index.json +++ b/website/src/pages/nl/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgraphs", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Ondersteunde Netwerken", + "details": "Network Details", + "services": "Services", + "type": "Type", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Documentatie", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Billing", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/nl/indexing/chain-integration-overview.mdx b/website/src/pages/nl/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/nl/indexing/chain-integration-overview.mdx +++ b/website/src/pages/nl/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/nl/indexing/new-chain-integration.mdx b/website/src/pages/nl/indexing/new-chain-integration.mdx index e45c4b411010..c401fa57b348 100644 --- a/website/src/pages/nl/indexing/new-chain-integration.mdx +++ b/website/src/pages/nl/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.

-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.

-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is because the Firehose cannot provide the smart contract state typically accessed via the `eth_call` RPC method. (It's worth noting that `eth_call`s are not considered good practice for developers.)

## Graph Node Configuration

-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.

1. [Clone Graph Node](https://github.com/graphprotocol/graph-node)

@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your

## Substreams-powered Subgraphs

-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included.
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to build [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.

diff --git a/website/src/pages/nl/indexing/overview.mdx b/website/src/pages/nl/indexing/overview.mdx
index f797c80855e5..891a4b0dba3d 100644
--- a/website/src/pages/nl/indexing/overview.mdx
+++ b/website/src/pages/nl/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexeers zijn node-operators in The Graph Netwerk die Graph Tokens (GRT) inzett

GRT dat in het protocol wordt ingezet, is onderheven aan een ontdooiperiode en kan worden geslashed als Indexers schadelijke acties ondernemen, onjuiste data aan applicaties leveren of als ze onjuist indexeren. Indexers verdienen ook beloningen voor gedelegeerde inzet van Delegators om te contributeren aan het netwerk.

-Indexeerders selecteren subgraphs om te indexeren op basis van het curatiesignaal van de subgraph, waar Curatoren GRT inzetten om aan te geven welke subgraphs van hoge kwaliteit zijn en prioriteit moeten krijgen. Consumenten (bijv. applicaties) kunnen ook parameters instellen voor welke Indexeerders queries voor hun subgraphs verwerken en voorkeuren instellen voor de prijs van querykosten.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.

## FAQ

@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.

**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.

-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol-wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.

### How are indexing rewards distributed?

-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph.
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. 
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. 
-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small    | 4                    | 8                             | 1                           | 4               | 16                       |
+| Standard | 8                    | 30                            | 1                           | 12              | 48                       |
+| Medium   | 16                   | 64                            | 2                           | 32              | 64                       |
+| Large    | 72                   | 468                           | 3.5                         | 48              | 184                      |

### What are some basic security precautions an Indexer should take?

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.

- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.

@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer

#### Graph Node

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

#### Indexer Service

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |

#### Indexer Agent

@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.

### Graph Node

-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.

#### Getting started from source

@@ -365,9 +365,9 @@ docker-compose up

To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components:

-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes Subgraph queries on to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.

- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

@@ -525,7 +525,7 @@ graph indexer status

#### Indexer management using Indexer CLI

-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds, it will be chosen for indexing.

-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
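As an illustration of the threshold logic described above, here is a simplified Python sketch; it is not the Indexer agent's actual implementation, and the dictionary field names are hypothetical stand-ins for the rule fields listed in the text:

```python
# A simplified sketch (not the Indexer agent's real code) of how a rule with
# `decisionBasis: rules` selects deployments: any non-null "min" threshold that
# the deployment's network values exceed picks it for indexing. "max"
# thresholds such as maxSignal would be compared in the opposite direction.

def matches_rule(rule, deployment):
    """Return True if the deployment crosses any non-null threshold in the rule."""
    if rule.get("minStake") is not None and deployment["stakedTokens"] > rule["minStake"]:
        return True
    if rule.get("minSignal") is not None and deployment["signalledTokens"] > rule["minSignal"]:
        return True
    if rule.get("minAverageQueryFees") is not None and deployment["avgQueryFees"] > rule["minAverageQueryFees"]:
        return True
    return False

# Example mirroring the text: a global rule with minStake = 5 GRT.
global_rule = {"minStake": 5}
print(matches_rule(global_rule, {"stakedTokens": 12, "signalledTokens": 0, "avgQueryFees": 0}))  # True
print(matches_rule(global_rule, {"stakedTokens": 3, "signalledTokens": 0, "avgQueryFees": 0}))   # False
```

A deployment staked with 12 GRT crosses the 5 GRT threshold and is selected; one with 3 GRT is not.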
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.

-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/nl/indexing/supported-network-requirements.mdx b/website/src/pages/nl/indexing/supported-network-requirements.mdx index 9bfbc8d0fefd..5564f615adc7 100644 --- a/website/src/pages/nl/indexing/supported-network-requirements.mdx +++ b/website/src/pages/nl/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Netwerk | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Netwerk | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/nl/indexing/tap.mdx b/website/src/pages/nl/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/nl/indexing/tap.mdx +++ b/website/src/pages/nl/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/nl/indexing/tooling/graph-node.mdx b/website/src/pages/nl/indexing/tooling/graph-node.mdx index 0250f14a3d08..f5778789213d 100644 --- a/website/src/pages/nl/indexing/tooling/graph-node.mdx +++ b/website/src/pages/nl/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
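To make the EIP-1898 requirement above concrete, here is a sketch of the JSON-RPC payload shape involved: an `eth_call` whose block parameter is an object pinning the call to an exact block hash rather than a plain block number. The address, calldata, and hash below are placeholders, not real values:

```python
import json

# Sketch of an EIP-1898 eth_call request. Pinning the call to a block hash
# (optionally requiring it to be canonical) is what lets indexing stay
# deterministic across reorgs, and is why an archive node with EIP-1898
# support is needed for Subgraphs that make eth_calls while indexing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {"to": "0x" + "ab" * 20, "data": "0x18160ddd"},             # e.g. an ERC-20 totalSupply() call
        {"blockHash": "0x" + "cd" * 32, "requireCanonical": True},  # EIP-1898 block object
    ],
}
print(json.dumps(request, indent=2))
```

A pre-EIP-1898 client would only accept a block number or tag (`"latest"`, `"earliest"`) in the second position.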
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
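As a rough sketch of how such rules resolve, the matching can be modeled like this (simplified logic and hypothetical node names, not Graph Node's actual implementation):

```python
import re

# Simplified model of deployment-rule matching: the first rule whose `match`
# criteria fit a deployment decides which indexer nodes handle it; a rule
# without `match` acts as a catch-all. Node names are illustrative.
RULES = [
    {"match": {"name": "(vip|importantsubgraph)"}, "indexers": ["index_node_vip_0"]},
    {"match": {"network": ["xdai", "poa-core"]}, "indexers": ["index_node_other_0"]},
    {"indexers": ["index_node_community_0"]},  # no 'match', so any deployment matches
]

def place(name, network):
    """Return the indexer nodes assigned to a deployment by the first matching rule."""
    for rule in RULES:
        match = rule.get("match")
        if match is None:
            return rule["indexers"]
        if "name" in match and re.search(match["name"], name):
            return rule["indexers"]
        if "network" in match and network in match["network"]:
            return rule["indexers"]
    return []

print(place("my-vip-subgraph", "mainnet"))  # ['index_node_vip_0']
print(place("some-subgraph", "xdai"))       # ['index_node_other_0']
print(place("some-subgraph", "mainnet"))    # ['index_node_community_0']
```

Because rules are checked in order and the catch-all returns for everything else, no two nodes ever claim the same deployment.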
Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards.
One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process.
The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
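The log levels above are selected with environment variables when Graph Node is launched; a minimal sketch from a shell (the startup command itself will vary by deployment):

```shell
# Verbose logging; valid GRAPH_LOG levels: error, warn, info, debug, trace.
export GRAPH_LOG=debug

# Optionally log GraphQL query timing details (generates a large log volume).
export GRAPH_LOG_QUERY_TIMING=gql

# Then start Graph Node as usual, e.g. via your process manager or compose file.
```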
See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
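To see the indexing status API in action against a running Graph Node, a query like the following can be sent to the default port. This is a sketch only: the host is assumed local, and the field names are taken from the index-node schema in the Graph Node repository.

```shell
# Requires a running Graph Node with the index-node port (8030) exposed.
curl -s http://localhost:8030/graphql \
  -H 'content-type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health chains { network chainHeadBlock { number } latestBlock { number } } } }"}'
```

Comparing `latestBlock` with `chainHeadBlock` gives a quick view of how far behind the chain head each deployment is.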
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing.
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
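A hypothetical cleanup sequence is sketched below. The subcommand names and argument order are assumptions based on the `graphman` documentation; verify them with `graphman --help` for your Graph Node version before use.

```shell
# ASSUMED commands - confirm against `graphman --help` for your version.
# Compare cached blocks against the provider and remove any that differ:
graphman chain check-blocks <chain-name> by-number <block-number>

# Rewind the affected deployment to a block before the bad data was cached,
# so it re-indexes from there with fresh provider responses:
graphman rewind <block-hash> <block-number> <deployment-id>
```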
If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
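As a sketch, run inside a container or environment where `graphman` is already configured (the identifier is a placeholder):

```shell
# Delete a deployment and all of its indexed data. The argument may be a
# Subgraph name, an IPFS hash (Qm..), or a database namespace (sgdNNN):
graphman drop <deployment>
```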
diff --git a/website/src/pages/nl/indexing/tooling/graphcast.mdx b/website/src/pages/nl/indexing/tooling/graphcast.mdx index cbc12c17f95b..9a712c6dd64a 100644 --- a/website/src/pages/nl/indexing/tooling/graphcast.mdx +++ b/website/src/pages/nl/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Leer Meer diff --git a/website/src/pages/nl/resources/benefits.mdx b/website/src/pages/nl/resources/benefits.mdx index c02a029cb137..6a3068d67e2c 100644 --- a/website/src/pages/nl/resources/benefits.mdx +++ b/website/src/pages/nl/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $350 per maand | $0 | -| Querykosten | $0+ | $0 per month | -| Onderhoud tijd | $400 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | 100,000 (Free Plan) | -| Kosten per query | $0 | $0 | -| Infrastructure | Gecentraliseerd | Gedecentraliseerd | -| Geografische redundantie | $750+ per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $750+ | $0 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +| :------------------------: | :-------------------------------------: | :----------------------------------------------------------------------------------------------: | +| Maandelijkse serverkosten | $350 per maand | $0 | +| Querykosten | $0+ | $0 per month | +| Onderhoud tijd | $400 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | 100,000 (Free Plan) | +| Kosten per query | $0 | $0 | +| Infrastructure | Gecentraliseerd | Gedecentraliseerd | +| Geografische redundantie | $750+ per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $350 per 
maand | $0 | -| Querykosten | $500 per maand | $120 per month | -| Onderhoud tijd | $800 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~3,000,000 | -| Kosten per query | $0 | $0.00004 | -| Infrastructure | Gecentraliseerd | Gedecentraliseerd | -| Technische personeelskosten | $200 per uur | Inbegrepen | -| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $1,650+ | $120 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +| :-------------------------: | :----------------------------------------: | :----------------------------------------------------------------------------------------------: | +| Maandelijkse serverkosten | $350 per maand | $0 | +| Querykosten | $500 per maand | $120 per month | +| Onderhoud tijd | $800 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~3,000,000 | +| Kosten per query | $0 | $0.00004 | +| Infrastructure | Gecentraliseerd | Gedecentraliseerd | +| Technische personeelskosten | $200 per uur | Inbegrepen | +| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $1100 per maand, per node | $0 | -| Querykosten | $4000 | $1,200 per month | -| Aantal benodigde nodes | 10 | Niet van toepassing | -| Onderhoud tijd | $6000 of meer per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~30,000,000 | -| 
Kosten per query | $0 | $0.00004 | -| Infrastructure | Gecentraliseerd | Gedecentraliseerd | -| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $11,000+ | $1,200 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +| :------------------------: | :-----------------------------------------: | :----------------------------------------------------------------------------------------------: | +| Maandelijkse serverkosten | $1100 per maand, per node | $0 | +| Querykosten | $4000 | $1,200 per month | +| Aantal benodigde nodes | 10 | Niet van toepassing | +| Onderhoud tijd | $6000 of meer per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~30,000,000 | +| Kosten per query | $0 | $0.00004 | +| Infrastructure | Gecentraliseerd | Gedecentraliseerd | +| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $11,000+ | $1,200 | \*inclusief kosten voor een back-up: $50-$100 per maand @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. 
Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Signaal cureren op een subgraph is een optionele eenmalige, kostenneutrale actie (bijv. $1000 aan signaal kan worden gecureerd op een subgraph en later worden opgenomen - met het potentieel om rendementen te verdienen tijdens het proces). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/nl/resources/glossary.mdx b/website/src/pages/nl/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/nl/resources/glossary.mdx +++ b/website/src/pages/nl/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. 
**Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. 
Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. 
When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. 
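The epoch and allocation-window figures in the glossary entries above combine into a simple bound. A trivial arithmetic sketch, using only numbers stated in the text:

```typescript
// Figures from the glossary above: one epoch is 6,646 blocks (≈ 1 day), and an
// allocation left open beyond 28 epochs is considered stale.
const BLOCKS_PER_EPOCH = 6_646;
const MAX_ALLOCATION_EPOCHS = 28;

// The longest a non-stale allocation stays open, in blocks (≈ 28 days).
const maxAllocationBlocks = BLOCKS_PER_EPOCH * MAX_ALLOCATION_EPOCHS; // 186,088
```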
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. 
- **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/nl/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). 
Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.
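A minimal sketch of the recommended "safe" early-return pattern. `Value` and `load` below are hypothetical stand-ins for a generated entity and its loader, not the real `graph-ts` API — only the null-handling shape is the point:

```typescript
// Hypothetical stand-ins for a generated entity and its loader (assumptions,
// not the real graph-ts API).
class Value {
  aMethod(): string {
    return "ok";
  }
}

function load(): Value | null {
  return null; // simulate an entity that doesn't exist yet
}

function handler(): string {
  const maybeValue = load();
  if (maybeValue == null) {
    return "skipped"; // early return instead of force-unwrapping with `!`
  }
  return maybeValue.aMethod(); // safe: the type has been narrowed to Value
}
```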
### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. ### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/nl/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.
You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/nl/resources/roles/curating.mdx b/website/src/pages/nl/resources/roles/curating.mdx index 99c74778c9bd..a2f4fff13893 100644 --- a/website/src/pages/nl/resources/roles/curating.mdx +++ b/website/src/pages/nl/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Cureren --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions.
In turn, The Graph Network rewards Curators who signal on good-quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient, and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network.
Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Hoe werkt het Signaleren -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Een curator kan ervoor kiezen om een signaal af te geven voor een specifieke subgraph versie, of ze kunnen ervoor kiezen om hun signaal automatisch te laten migreren naar de nieuwste versie van de subgraph. Beide strategieën hebben voordelen en nadelen. 
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Automatische migratie van je signalering naar de nieuwste subgraphversie kan waardevol zijn om ervoor te zorgen dat je querykosten blijft ontvangen. Elke keer dat je signaleert, wordt een curatiebelasting van 1% in rekening gebracht. Je betaalt ook een curatiebelasting van 0,5% bij elke migratie. Subgraphontwikkelaars worden ontmoedigd om vaak nieuwe versies te publiceren - ze moeten een curatiebelasting van 0,5% betalen voor alle automatisch gemigreerde curatie-aandelen. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. 
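The tax mechanics described above (a 1% tax on each signal, and a 0.5% tax on each auto-migrated share) can be sketched with some illustrative arithmetic — the rates come from the text, but the GRT amounts are made up:

```typescript
// Illustrative arithmetic only — not protocol code. Rates from the text above.
const CURATION_TAX = 0.01;   // 1%, burned on every signal
const MIGRATION_TAX = 0.005; // 0.5%, charged on each auto-migration

function signalAfterTax(grt: number): number {
  // Only the post-tax amount becomes curation signal.
  return grt - grt * CURATION_TAX;
}

function afterAutoMigration(valueGrt: number): number {
  // Each auto-migration to a new Subgraph version costs 0.5% of the value.
  return valueGrt - valueGrt * MIGRATION_TAX;
}

const signaled = signalAfterTax(10_000);       // 10,000 GRT → 9,900 GRT of signal
const migrated = afterAutoMigration(signaled); // one version bump → 9,850.5 GRT
```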
## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risico's 1. De querymarkt is nog jong bij het Graph Netwerk en er bestaat een risico dat je %APY lager kan zijn dan je verwacht door opkomende marktdynamiek. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Een subgraph kan stuk gaan door een bug. Een subgraph die stuk is gegenereerd geen querykosten. Als gevolg hiervan moet je wachten tot de ontwikkelaar de bug repareert en een nieuwe versie implementeert. 
- - Als je bent geabonneerd op de nieuwste versie van een subgraph, worden je curatieaandelen automatisch gemigreerd naar die nieuwe versie. Er is een curatiebelasting van 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Veelgestelde Vragen over Curatie ### Welk percentage van de querykosten verdienen curatoren? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 
10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### Hoe bepaal ik welke subgraphs van hoge kwaliteit zijn om op te signaleren? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. 
Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### Wat zijn de kosten voor het updaten van een subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### Hoe vaak kan ik mijn subgraph updaten? +### 4. How often can I update my Subgraph? -Het wordt aanbevolen om je subgraphs niet te vaak bij te werken. Zie de bovenstaande vraag voor meer details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### Kan ik mijn curatieaandelen verkopen? diff --git a/website/src/pages/nl/resources/roles/delegating/undelegating.mdx b/website/src/pages/nl/resources/roles/delegating/undelegating.mdx index c3e31e653941..6a361c508450 100644 --- a/website/src/pages/nl/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/nl/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3.
Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/nl/resources/subgraph-studio-faq.mdx b/website/src/pages/nl/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/nl/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/nl/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? 
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. 
When you click on the “Query” button, you will be directed to a pane where you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via this API key are paid queries, like any other on the network. diff --git a/website/src/pages/nl/resources/tokenomics.mdx b/website/src/pages/nl/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/nl/resources/tokenomics.mdx +++ b/website/src/pages/nl/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3.
Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. 
While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. 
Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/nl/sps/introduction.mdx index b11c99dfb8e5..92d8618165dd 100644 --- a/website/src/pages/nl/sps/introduction.mdx +++ b/website/src/pages/nl/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. 
**Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/nl/sps/sps-faq.mdx b/website/src/pages/nl/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/nl/sps/sps-faq.mdx +++ b/website/src/pages/nl/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). 
Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain. 
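The sequential model described above can be sketched in a few lines. This is a toy illustration only, not Graph Node internals; all names and values here are hypothetical:

```typescript
// Toy sketch of the conventional Subgraph model: handlers run one at a
// time, in the order events happened onchain, mutating an entity store.
interface TransferEvent {
  block: number
  to: string
  value: number
}

const balances = new Map<string, number>() // stand-in for the entity store

function handleTransfer(e: TransferEvent): void {
  // each handler observes the store state left by all earlier events
  balances.set(e.to, (balances.get(e.to) ?? 0) + e.value)
}

const events: TransferEvent[] = [
  { block: 2, to: "0xbob", value: 3 },
  { block: 1, to: "0xalice", value: 5 },
  { block: 3, to: "0xalice", value: 2 },
]

// sequential, onchain-ordered processing
events.sort((a, b) => a.block - b.block).forEach(handleTransfer)

console.log(balances.get("0xalice")) // 7
```

Because every handler may depend on the store state left by earlier events, this pipeline cannot be trivially parallelized, which is exactly the constraint Substreams' parallelized model relaxes.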
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? 
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. 
They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. 
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/nl/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/nl/sps/triggers.mdx +++ b/website/src/pages/nl/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use of GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. 
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/nl/sps/tutorial.mdx b/website/src/pages/nl/sps/tutorial.mdx index 9d568f422d31..81c23fae5508 100644 --- a/website/src/pages/nl/sps/tutorial.mdx +++ b/website/src/pages/nl/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
## Begin @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph ``` @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! 
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/nl/subgraphs/_meta-titles.json b/website/src/pages/nl/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/nl/subgraphs/_meta-titles.json +++ b/website/src/pages/nl/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
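To see why this matters for indexing time, consider a back-of-the-envelope model of sequential handler execution. The per-call latency figures below are assumptions chosen for illustration, not measured values:

```typescript
// Handlers run sequentially, so per-event RPC latency accumulates.
// All millisecond figures are assumed, illustrative numbers.
const EVENTS = 1_000
const HANDLER_MS = 1 // in-process mapping work per event
const ETH_CALL_MS = 50 // assumed round-trip time for one eth_call

const eventOnlyMs = EVENTS * HANDLER_MS
const withEthCallsMs = EVENTS * (HANDLER_MS + ETH_CALL_MS)

console.log(eventOnlyMs) // 1000
console.log(withEthCallsMs) // 51000 — RPC latency dominates indexing time
```

Even with modest per-call latency, the external round-trips swamp the in-process handler work, which is why designing contracts to emit the needed data pays off.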
## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. 
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. 
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
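The reverse lookup that `@derivedFrom` performs can be sketched in plain TypeScript. This is conceptual only — graph-node resolves derived fields in its database, and the `Post`/`Comment` shapes here are simplified stand-ins for the schema above:

```typescript
// Conceptual sketch of @derivedFrom: the Post entity stores no comment array;
// the one-to-many relationship lives on the Comment side, and "post.comments"
// is resolved by a reverse lookup at query time. (Plain TypeScript, not graph-ts.)
interface Post { id: string }
interface Comment { id: string; post: string } // `post` holds the parent Post id

const comments: Comment[] = [
  { id: "c1", post: "p1" },
  { id: "c2", post: "p1" },
  { id: "c3", post: "p2" },
];

// The "derived" field: comments are found via their own `post` field,
// so the Post row never grows as comments accumulate.
function commentsOf(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId);
}

console.log(commentsOf("p1").length); // 2
```

Because nothing is appended to the `Post` entity on each new comment, entity size stays constant no matter how many comments exist — which is exactly why the directive helps unbounded arrays.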
diff --git a/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
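The difference between string-concatenated IDs and `concatI32`-style Bytes IDs can be sketched in plain TypeScript. The byte order chosen here is an illustrative assumption — `graph-ts`'s actual encoding may differ; the point is that a fixed-width binary suffix is compact and cheap to compare, unlike `hash.toHex() + "-" + logIndex.toString()`:

```typescript
// Sketch of the idea behind graph-ts's `concatI32`: build an entity ID by
// appending a fixed 4-byte encoding of the log index to the transaction hash
// bytes, instead of concatenating hex and decimal strings.
function concatI32(bytes: Uint8Array, n: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  // 4-byte big-endian encoding of n (byte order here is illustrative)
  for (let i = 0; i < 4; i++) {
    out[bytes.length + i] = (n >>> (8 * (3 - i))) & 0xff;
  }
  return out;
}

const txHash = new Uint8Array([0xde, 0xad, 0xbe, 0xef]); // stand-in for event.transaction.hash
const id = concatI32(txHash, 7); // stand-in for event.logIndex
console.log(id.length); // 8
```

Two events in the same transaction produce IDs that differ only in the trailing four bytes, so uniqueness is preserved without any string allocation or parsing.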
diff --git a/website/src/pages/nl/subgraphs/best-practices/pruning.mdx b/website/src/pages/nl/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/nl/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx b/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/nl/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/nl/subgraphs/billing.mdx b/website/src/pages/nl/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/nl/subgraphs/billing.mdx +++ b/website/src/pages/nl/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/nl/subgraphs/cookbook/arweave.mdx b/website/src/pages/nl/subgraphs/cookbook/arweave.mdx index 1ff7fdd460fc..e957c2d61226 100644 --- a/website/src/pages/nl/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Bouwen van Subgraphs op Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In deze gids, zul je leren hoe je Subgraphs bouwt en implementeer om de Arweave blockchain te indexeren. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are Voor het kunnen bouwen en implementeren van Arweave Subgraphs, heb je twee paketten nodig: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's componenten -Er zijn drie componenten van een subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,49 +40,49 @@ Definieert gegevensbronnen die van belang zijn en hoe deze verwerkt moeten worde Hier definieer je welke gegevens je wilt kunnen opvragen na het indexeren van je subgraph door het gebruik van GraphQL. Dit lijkt eigenlijk op een model voor een API, waarbij het model de structuur van een verzoek definieert. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` Dit is de logica die definieert hoe data zou moeten worden opgevraagd en opgeslagen wanneer iemand met de gegevens communiceert waarnaar jij aan het luisteren bent. De gegevens worden vertaald en is opgeslagen gebaseerd op het schema die je genoteerd hebt. -Tijdens subgraph ontwikkeling zijn er twee belangrijke commando's: +During Subgraph development there are two key commands: ``` -$ graph codegen # genereert types van het schema bestand die geïdentificeerd is in het manifest -$ graph build # genereert Web Assembly vanuit de AssemblyScript-bestanden, en bereidt alle Subgraph-bestanden voor in een /build map +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersie: 0.0.5 -omschrijving: Arweave Blocks Indexing +specVersion: 1.3.0 +description: Arweave Blocks Indexing schema: - bestand: ./schema.graphql # link to the schema file + file: ./schema.graphql # link to the schema file dataSources: - - type: arweave - naam: arweave-blocks - netwerk: arweave-mainnet # The Graph only supports Arweave Mainnet - bron: - eigenaar: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis - toewijzing: - apiVersie: 0.0.5 - taal: wasm/assemblyscript - bestand: ./src/blocks.ts # link to the file with the Assemblyscript mappings - entiteit: + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: - Block - Transaction - blockAfhandelaar: - - afhandelaar: handleBlock # the function name in the mapping file - transactieAfhandelaar: - - afhandelaar: handleTx # the function name in the mapping file + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. 
In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data bronnen introduceert een optionele bron.eigenaar veld, dat de openbare sleutel is van een Arweave wallet @@ -99,7 +99,7 @@ Arweave data bronnen ondersteunt twee typen verwerkers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
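A Bytes-to-base64 conversion of this kind can be sketched in plain TypeScript. This sketch assumes a Node.js environment with `Buffer`; the helper that actually ships in `graph-ts` may be implemented differently, since AssemblyScript has no `Buffer`:

```typescript
// Sketch of a bytesToBase64(bytes, urlSafe) helper for a Node.js environment.
// The url-safe variant swaps '+' -> '-' and '/' -> '_' and strips '=' padding,
// matching the base64url alphabet used by many block explorers.
function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  const b64 = Buffer.from(bytes).toString("base64");
  if (!urlSafe) return b64;
  return b64.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/g, "");
}

console.log(bytesToBase64(new Uint8Array([0xff, 0xef]), false)); // "/+8="
console.log(bytesToBase64(new Uint8Array([0xff, 0xef]), true)); // "_-8"
```

Applying this in a mapping would let block and transaction hashes stored by the Subgraph match what explorers such as Arweave Explorer display, instead of raw hex.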
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/nl/subgraphs/cookbook/enums.mdx b/website/src/pages/nl/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/nl/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. @@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. 
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/nl/subgraphs/cookbook/grafting.mdx b/website/src/pages/nl/subgraphs/cookbook/grafting.mdx index 57d5169830a7..d9abe0e70d2a 100644 --- a/website/src/pages/nl/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. 
It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. 
Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. 
If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. 
The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). 
The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/nl/subgraphs/cookbook/near.mdx b/website/src/pages/nl/subgraphs/cookbook/near.mdx index 75f966e7a597..e78a69eb7fa2 100644 --- a/website/src/pages/nl/subgraphs/cookbook/near.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. 
[Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. 
-**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -Tijdens subgraph ontwikkeling zijn er twee belangrijke commando's: +During Subgraph development there are two key commands: ```bash -$ graph codegen # genereert types van het schema bestand die geïdentificeerd is in het manifest -$ graph build # genereert Web Assembly vanuit de AssemblyScript-bestanden, en bereidt alle Subgraph-bestanden voor in een /build map +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the list of values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. 
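That progress check can also be scripted. The sketch below assumes the standard `_meta` field exposed by Graph Node query endpoints; the response shape and names are illustrative:

```typescript
// Shape of a standard `_meta` query response from a Graph Node query endpoint.
interface MetaResponse {
  data: { _meta: { block: { number: number }; hasIndexingErrors: boolean } };
}

// Extract the latest indexed block, failing fast on reported indexing errors.
function latestIndexedBlock(res: MetaResponse): number {
  if (res.data._meta.hasIndexingErrors) {
    throw new Error("Subgraph reported indexing errors");
  }
  return res.data._meta.block.number;
}

// The query to POST (as {"query": ...}) to your Subgraph's endpoint:
const META_QUERY = "{ _meta { block { number } hasIndexingErrors } }";
```

Comparing the returned block number against the chain head tells you how far indexing has progressed.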
## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? 
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/nl/subgraphs/cookbook/polymarket.mdx b/website/src/pages/nl/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/nl/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/nl/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/nl/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fc7e0ff52eb4..e17e594408ff 100644 --- a/website/src/pages/nl/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
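The server-side pattern can be sketched as a plain helper. In a real app this logic lives in a Server Component (or a `server-only` module) and receives `process.env`; the `GRAPH_API_KEY` variable name and Subgraph ID here are placeholders, not the cookbook's actual code:

```typescript
// Sketch of the server-side pattern: the API key comes from the server's
// environment and is never shipped to the browser. `env` is passed in so the
// helper is testable; in Next.js you would pass `process.env` from a Server
// Component. The env var name is a placeholder.
function buildServerSideRequest(
  env: Record<string, string | undefined>,
  subgraphId: string,
  query: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  const apiKey = env["GRAPH_API_KEY"]; // stays on the server
  if (!apiKey) throw new Error("GRAPH_API_KEY is not set");
  return {
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

// In a Server Component you might then do:
// const { url, init } = buildServerSideRequest(process.env, id, "{ _meta { block { number } } }");
// const data = await fetch(url, init).then((r) => r.json());
```

Because the helper only ever reads the key on the server, the browser bundle contains neither the key nor the gateway URL with the key embedded.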
-### Using client-side rendering to query a subgraph
+### Using client-side rendering to query a Subgraph

![Client-side rendering](/img/api-key-client-side-rendering.png)

diff --git a/website/src/pages/nl/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/nl/subgraphs/cookbook/subgraph-composition-three-sources.mdx
new file mode 100644
index 000000000000..bbfe48b615cd
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/cookbook/subgraph-composition-three-sources.mdx
@@ -0,0 +1,98 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - This feature requires `specVersion` 1.3.0.
+
+## Overview
+
+Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates.
+
+## Prerequisites
+
+To deploy **all** Subgraphs locally, you must have the following:
+
+- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally
+- An [IPFS](https://docs.ipfs.tech/) instance running locally
+- [Node.js](https://nodejs.org) and npm
+
+## Begin
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from the three source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
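The aggregation the composed Subgraph performs can be modeled in plain TypeScript. This is not actual mapping code — the entity names and fields below are illustrative — but it shows the shape of the merge: per-block time, cost, and size records keyed by block number are joined into one stats record, with block time computed as the gap to the previous block:

```typescript
// Illustrative model of the composed Subgraph's aggregation: three "source"
// record sets (block time, cost, size) are merged into one stats entity per
// block. Entity names are illustrative, not the actual generated types.
interface BlockTime { number: number; timestamp: number }
interface BlockCost { number: number; gasUsed: number }
interface BlockSize { number: number; size: number }
interface BlockStats { number: number; blockTimeSecs: number; gasUsed: number; size: number }

function combineBlockStats(times: BlockTime[], costs: BlockCost[], sizes: BlockSize[]): BlockStats[] {
  const costByNumber = new Map(costs.map((c) => [c.number, c] as [number, BlockCost]));
  const sizeByNumber = new Map(sizes.map((s) => [s.number, s] as [number, BlockSize]));
  const sorted = [...times].sort((a, b) => a.number - b.number);

  const stats: BlockStats[] = [];
  for (let i = 1; i < sorted.length; i++) {
    const prev = sorted[i - 1];
    const curr = sorted[i];
    const cost = costByNumber.get(curr.number);
    const size = sizeByNumber.get(curr.number);
    if (!cost || !size) continue; // all three sources must have indexed the block
    stats.push({
      number: curr.number,
      blockTimeSecs: curr.timestamp - prev.timestamp, // "block time" = gap to previous block
      gasUsed: cost.gasUsed,
      size: size.size,
    });
  }
  return stats;
}
```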
diff --git a/website/src/pages/nl/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/nl/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..406b1b862eba
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Begin + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance.
+
+## Additional Resources
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/nl/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/nl/subgraphs/cookbook/subgraph-debug-forking.mdx
index 6610f19da66d..91aa7484d2ec 100644
--- a/website/src/pages/nl/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/nl/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks
---

-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!

## Ok, what is it?

-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).

-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.

## What?! How?

-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Please, show me some code! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. The usual way to attempt a fix is: 1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Wait for it to sync-up. 4. If it breaks again go back to 1, otherwise: Hooray! 
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. I spin up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).

```
$ cargo run -p graph-node --release -- \
@@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \
```

2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
-3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+3. After making the changes, I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:

```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```

4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
-5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after!
(no potatoes tho)
diff --git a/website/src/pages/nl/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/nl/subgraphs/cookbook/subgraph-uncrashable.mdx
index 0cc91a0fa2c3..a08e2a7ad8c9 100644
--- a/website/src/pages/nl/subgraphs/cookbook/subgraph-uncrashable.mdx
+++ b/website/src/pages/nl/subgraphs/cookbook/subgraph-uncrashable.mdx
@@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator
---

-[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent.
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.

## Why integrate with Subgraph Uncrashable?

-- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity.
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.

-- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.

-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.

Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/nl/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/nl/subgraphs/cookbook/transfer-to-the-graph.mdx index 194deb018404..9a4b037cafbc 100644 --- a/website/src/pages/nl/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/nl/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Example -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
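The query URL pattern shown above can also be derived programmatically from an Explorer page link. A minimal sketch — the `explorerToQueryUrl` helper is hypothetical and assumes the gateway host format shown above:

```typescript
// Sketch: turn an Explorer page URL (like the CryptoPunks link above) into a
// gateway query URL by extracting the Subgraph ID and inserting your API key.
// Helper name and regex are illustrative, not part of any official SDK.
function explorerToQueryUrl(explorerUrl: string, apiKey: string): string {
  const match = explorerUrl.match(/\/subgraphs\/([A-Za-z0-9]+)/);
  if (!match) throw new Error("No Subgraph ID found in URL");
  return `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${match[1]}`;
}

// Usage: pass the Explorer link and the API key created in Subgraph Studio.
// const url = explorerToQueryUrl(explorerLink, myApiKey);
```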
### Additional Resources

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx b/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx
index ee9918f5f254..8dbc48253034 100644
--- a/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/nl/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## Overview

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
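As background for why address values can be matched against `topic1` and `topic2` directly: the EVM encodes each indexed `address` argument as a 32-byte topic by left-padding the 20-byte address with zeros. A small Python sketch of that padding (the address is a made-up placeholder):

```python
# Sketch: an indexed `address` event argument becomes a 32-byte topic,
# i.e. the 20-byte address left-padded with 12 zero bytes.
def address_to_topic(address: str) -> str:
    raw = address.lower().removeprefix("0x")
    assert len(raw) == 40, "an Ethereum address is 20 bytes (40 hex chars)"
    return "0x" + raw.rjust(64, "0")

# Hypothetical sender address, as in the filter examples.
sender = "0xAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
print(address_to_topic(sender))
# -> 0x000000000000000000000000aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
```

This is why the manifest can list plain addresses under `topic1`/`topic2`: Graph Node compares them against the padded topic values recorded in the event log.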
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses

@@ -452,17 +452,17 @@ In this configuration:

- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.

-The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.

## Declared eth_call

> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.

-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.

This feature does the following:

-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.

- Allows faster data fetching, resulting in quicker query responses and a better user experience.

- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
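The parallel-execution benefit can be sketched in Python: sequential calls cost the sum of their latencies, while parallel calls cost only the slowest one. This models the scheduling idea only, not graph-node's actual implementation, and the latencies are scaled down so the sketch runs quickly:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Simulated eth_calls with fixed latencies (stand-ins for real RPC round trips).
def fetch(name: str, seconds: float) -> str:
    time.sleep(seconds)
    return name

calls = [("transactions", 0.03), ("balance", 0.02), ("holdings", 0.04)]

# Sequential: total time is the sum of the latencies.
start = time.perf_counter()
sequential = [fetch(name, secs) for name, secs in calls]
sequential_time = time.perf_counter() - start

# Parallel (what declaring the calls ahead of time enables):
# total time is roughly the slowest single call.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda call: fetch(*call), calls))
parallel_time = time.perf_counter() - start

assert sequential == parallel == ["transactions", "balance", "holdings"]
assert parallel_time < sequential_time
```

With the 3s/2s/4s latencies used later in the text, the same arithmetic gives 9 seconds sequentially versus 4 seconds in parallel.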
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`

```yaml
calls:
@@ -535,22 +535,22 @@

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.

-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:

```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..2e256ae18190 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
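The placeholder substitution described above can be sketched in plain TypeScript (this is an illustration of the behavior, not the graph-ts implementation; the `format` helper is hypothetical):

```typescript
// Minimal sketch: each `{}` placeholder is replaced, left to right,
// by the next value in the args array; extra placeholders are left as-is.
function format(fmt: string, args: string[]): string {
  let i = 0;
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : "{}"));
}

// The first `{}` gets the first value, the second `{}` the second, and so on.
const msg = format("Transfer of {} tokens from {}", ["100", "0xabc"]);
// msg === "Transfer of 100 tokens from 0xabc"
```

In a real mapping the equivalent call would be `log.info('Transfer of {} tokens from {}', [amount, from])`.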
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are commonly encountered during Subgraph development. They vary in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx index 8bf0b4dfca9f..004d0f94c99e 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Creëer een Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI; otherwise, running your Subgraph will fail. diff --git a/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..2eb805320753 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema.
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. 
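The scalar table above can be made concrete with a short sketch. This is plain TypeScript using the language's native `bigint` purely as an illustration (it is not graph-ts code) of why `Int` cannot hold Ethereum's larger numeric types:

```typescript
// Illustrative bounds for the GraphQL scalars described in the table.
const INT32_MAX = 2n ** 31n - 1n;    // largest value the `Int` scalar can hold
const I64_MAX = 2n ** 63n - 1n;      // needs `Int8` (or `BigInt`)
const UINT256_MAX = 2n ** 256n - 1n; // needs `BigInt`

// A value fits in the `Int` scalar only if it is within signed 32-bit range.
const fitsInInt = (v: bigint): boolean => v <= INT32_MAX;

console.log(fitsInInt(INT32_MAX));   // true
console.log(fitsInInt(I64_MAX));     // false: block timestamps as i64 overflow Int
console.log(fitsInInt(UINT256_MAX)); // false: token balances need BigInt
```

This is why token amounts (`uint256`) are always modeled as `BigInt` in a schema, while counters that are known to stay small can use `Int`.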
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## Languages supported diff --git a/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..4931e6b1fd34 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..085eaf2fb533 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows you to index matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. 
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. 
### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. 
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
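As a sketch of the first capability, a time travel query pins the result to a past block; the `gravatars` entity name and block number here are assumed for illustration, following the handler examples earlier in this page:

```graphql
{
  gravatars(block: { number: 8000000 }) {
    id
    displayName
  }
}
```

Answering this query requires the entity states as of block 8000000, which is exactly the history that `prune` may discard.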
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/nl/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. 
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/nl/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. 
The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected with this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. 
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
diff --git a/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx index 04fca3fb140a..370e428284cc 100644 --- a/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/nl/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init <SUBGRAPH_SLUG> ``` -You can find the `<SUBGRAPH_SLUG>` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio; see the image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
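If you prefer to skip the interactive prompts, recent versions of `graph-cli` also accept these values as flags. A sketch, under the assumption that these flag names match your installed CLI version (check `graph init --help`); the slug, address, and file path are placeholders:

```bash
# Non-interactive sketch of `graph init` (all values are placeholders)
graph init my-org/my-subgraph \
  --from-contract 0x0000000000000000000000000000000000000000 \
  --network mainnet \
  --abi ./MyContract.abi.json
```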
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
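One quick smoke test against the deployment query URL is the built-in `_meta` field, which reports indexing progress and errors. The URL format below is an assumption — copy the exact development query URL from your Subgraph's details page in Studio:

```bash
# Sketch: check indexing progress via the Subgraph's `_meta` field.
# Replace the URL with the development query URL shown in Subgraph Studio;
# <USER_ID>, <SUBGRAPH_SLUG>, and <VERSION> are placeholders.
curl -s https://api.studio.thegraph.com/query/<USER_ID>/<SUBGRAPH_SLUG>/<VERSION> \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { number } hasIndexingErrors } }"}'
```

`hasIndexingErrors: true` in the response is a signal to check the logs in the Studio dashboard.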
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/nl/subgraphs/developing/developer-faq.mdx b/website/src/pages/nl/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/nl/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/nl/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +### 4. Can I change the GitHub account associated with my Subgraph? +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? is replaced by: +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
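In mappings, contract state is read through the bindings that `graph codegen` generates from your ABI. A hedged AssemblyScript sketch — the `ERC20` contract class and `Transfer` event below are hypothetical generated bindings, not part of any real project:

```typescript
// AssemblyScript sketch — `ERC20` and `Transfer` are hypothetical bindings
// generated by `graph codegen` from your contract's ABI.
import { ERC20, Transfer } from "../generated/ERC20/ERC20"

export function handleTransfer(event: Transfer): void {
  // Bind the contract at the address that emitted the event
  let contract = ERC20.bind(event.address)
  // `try_` variants return a result wrapper instead of aborting if the call reverts
  let supply = contract.try_totalSupply()
  if (!supply.reverted) {
    // supply.value is a BigInt; store it on an entity as needed
  }
}
```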
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/nl/subgraphs/developing/introduction.mdx b/website/src/pages/nl/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/nl/subgraphs/developing/introduction.mdx +++ b/website/src/pages/nl/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
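As a sketch of what querying a Subgraph looks like in practice — the entity and field names below are illustrative, not taken from a real Subgraph:

```graphql
{
  tokens(first: 5, orderBy: tradeVolume, orderDirection: desc) {
    id
    symbol
    tradeVolume
  }
}
```

The schema of each published Subgraph defines which entities and fields are available to query.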
diff --git a/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/nl/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
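The CLI steps above can be run as a single sketch (assuming `graph-cli` 0.73.0 or later is installed):

```bash
# Build, then publish; `graph publish` opens a window for wallet signing and metadata
graph codegen && graph build
graph publish
```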
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/nl/subgraphs/developing/subgraphs.mdx b/website/src/pages/nl/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/nl/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/nl/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
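+
+As a quick sketch of what querying looks like, a Subgraph that indexes token transfers might be queried like this (the `transfers` entity and its fields are hypothetical; the actual names come from the Subgraph's schema):
+
+```graphql
+{
+  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
+    id
+    from
+    to
+    amount
+  }
+}
+```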
## Inside a Subgraph

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.

-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:

-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest

-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL

- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema

-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).

## Subgraph Lifecycle

-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

## Subgraph Development

-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/nl/subgraphs/explorer.mdx b/website/src/pages/nl/subgraphs/explorer.mdx index be848f2d0201..3df0b99d43ca 100644 --- a/website/src/pages/nl/subgraphs/explorer.mdx +++ b/website/src/pages/nl/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Verkenner --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer

@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi

### Subgraphs Page

-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:

-- Your own finished subgraphs
+- Your own finished Subgraphs

- Subgraphs published by others

-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:

- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
-  - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+  - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:

-- Het toevoegen/weghalen van signaal op een subgraph
+- Signal/Un-signal on Subgraphs
- Details such as charts, the current deployment ID, and other metadata
-- Schakel tussen versies om eerdere iteraties van de subgraph te verkennen
-- Query subgraphs via GraphQL
-- Subgraphs testen in de playground
-- Bekijk de indexeerders die indexeren op een bepaalde subgraph
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Subgraph statistics (allocations, Curators, etc.)
-- Bekijk de entiteit die de subgraph heeft gepubliceerd
+- View the entity that published the Subgraph

![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)

@@ -53,7 +53,7 @@ On this page, you can see the following:

- Indexers who collected the most query fees
- Indexers with the highest estimated APR

-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.

### Participants Page

@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

![Explorer Image 4](/img/Indexer-Pane.png)

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.

-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Any excess delegated stake cannot be used for allocations or reward calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. Curators

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.

In the Curator table listed below, you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

![Explorer Image 8](/img/Network-Stats.png)

@@ -178,15 +178,15 @@ In this section, you can view the following:

### Subgraph Tab

-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.

-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

![Explorer Image 11](/img/Subgraphs-Overview.png)

### Indexing Tab

-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.

This section also includes details about your net Indexer rewards and net query fees. 
You will see the following metrics:

@@ -223,13 +223,13 @@ Houd er rekening mee dat deze grafiek horizontaal scrollbaar is, dus als u helem

### Curating Tab

-Op de Curating Tab vind je alle subgraphs waarop je signaleert (dit stelt je in staat om querykosten te ontvangen). Singaleren stelt Curatoren in staat om aan Indexeerders te laten zien welke subgraphs waardevol en betrouwbaar zijn, wat aangeeft dat ze geïndexeerd moeten worden.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.

Within this tab, you’ll find an overview of:

-- Alle subgraphs waarop je cureert met signaaldetails
-- Totale aandelen per subgraph
-- Querybeloningen per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Details of the update date

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/nl/subgraphs/guides/arweave.mdx b/website/src/pages/nl/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..e957c2d61226
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach out to us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks this feature, and files stored on Arweave cannot be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol into different programming languages. For more information, you can check out:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration only indexes Arweave as a blockchain (blocks and transactions); it does not index the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Components of a Subgraph
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
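+
+As an illustration, a minimal `schema.graphql` for the `Block` and `Transaction` entities used later in this guide might look like this (the exact fields are up to you):
+
+```graphql
+type Block @entity(immutable: true) {
+  id: ID!
+  height: BigInt!
+  timestamp: BigInt!
+}
+
+type Transaction @entity(immutable: true) {
+  id: ID!
+  block: Bytes!
+  owner: Bytes!
+}
+```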
+
+### 3. AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have listed.
+
+During Subgraph development, there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. 
In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional `source.owner` field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`
+
+> The source.owner can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+ +```tsx +class Block { + timestamp: u64 + lastRetarget: u64 + height: u64 + indepHash: Bytes + nonce: Bytes + previousBlock: Bytes + diff: Bytes + hash: Bytes + txRoot: Bytes + txs: Bytes[] + walletList: Bytes + rewardAddr: Bytes + tags: Tag[] + rewardPool: Bytes + weaveSize: Bytes + blockSize: Bytes + cumulativeDiff: Bytes + hashListMerkle: Bytes + poa: ProofOfAccess +} + +class Transaction { + format: u32 + id: Bytes + lastTx: Bytes + owner: Bytes + tags: Tag[] + target: Bytes + quantity: Bytes + data: Bytes + dataSize: Bytes + dataRoot: Bytes + signature: Bytes + reward: Bytes +} +``` + +Block handlers receive a `Block`, while transactions receive a `Transaction`. + +Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). + +## Deploying an Arweave Subgraph in Subgraph Studio + +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. + +```bash +graph deploy --access-token +``` + +## Querying an Arweave Subgraph + +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can I index the stored files on Arweave? + +Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). + +### Can I identify Bundlr bundles in my Subgraph? + +This is not currently supported. + +### How can I filter transactions to a specific account? 
+ +The source.owner can be the user's public key or account address. + +### What is the current encryption format? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). + +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? 
base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..ab5076c5ebf4
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+ +### Key Features + +With Cana CLI, you can: + +- Detect deployment blocks +- Verify source code +- Extract ABIs & event signatures +- Identify proxy and implementation contracts +- Support multiple chains + +### Prerequisites + +Before installing Cana CLI, make sure you have: + +- [Node.js v16+](https://nodejs.org/en) +- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install) +- Block explorer API keys + +### Installation & Setup + +1. Install Cana CLI + +Use npm to install it globally: + +```bash +npm install -g contract-analyzer +``` + +2. Configure Cana CLI + +Set up a blockchain environment for analysis: + +```bash +cana setup +``` + +During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL. + +After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use. + +### Steps: Using Cana CLI for Smart Contract Analysis + +#### 1. Select a Chain + +Cana CLI supports multiple EVM-compatible chains. + +For a list of chains added run this command: + +```bash +cana chains +``` + +Then select a chain with this command: + +```bash +cana chains --switch +``` + +Once a chain is selected, all subsequent contract analyses will continue on that chain. + +#### 2. Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +or + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. 
Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Conclusion + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/nl/subgraphs/guides/enums.mdx b/website/src/pages/nl/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..9f55ae07c54b --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. + +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. 
+ +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. 
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
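+
+To tie the enum to your stored data, a minimal entity using the `Marketplace` enum could look like the sketch below (the `Sale` entity and its fields are assumed for illustration and are not part of the CryptoCoven contract itself):
+
+```gql
+type Sale @entity {
+  id: ID!
+  tokenId: BigInt!
+  marketplace: Marketplace! # Only values declared in the Marketplace enum are accepted
+}
+```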
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  }
+  return 'Unknown' // Fallback so that every code path returns a value
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/nl/subgraphs/guides/grafting.mdx b/website/src/pages/nl/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..d9abe0e70d2a --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@ +--- +title: Replace a Contract and Keep its History With Grafting +--- + +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. + +## What is Grafting? + +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +For more information, you can check: + +- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) + +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. + +## Important Note on Grafting When Upgrading to the Network + +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network + +### Why Is This Important? 
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- The `Lock` data source provides the ABI and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+  - grafting # feature name
+graft:
+  base: Qm... # Subgraph ID of base Subgraph
+  block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft onto. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft: the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
+ +## Additional Resources + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/nl/subgraphs/guides/near.mdx b/website/src/pages/nl/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..e78a69eb7fa2 --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Building Subgraphs on NEAR +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. 
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
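+
+As a rough sketch (the `Receipt` entity and its fields below are illustrative assumptions generated from your own `schema.graphql`, not part of the NEAR integration itself), a receipt handler mapping might look like:
+
+```typescript
+import { near } from '@graphprotocol/graph-ts'
+import { Receipt } from '../generated/schema' // hypothetical generated entity class
+
+export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
+  const actionReceipt = receiptWithOutcome.receipt
+  // Use the receipt id as the entity id and store the signing account
+  const entity = new Receipt(actionReceipt.id.toBase58())
+  entity.signerId = actionReceipt.signerId
+  entity.save()
+}
+```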
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
If only a list of prefixes or suffixes is necessary the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Always zero when version < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+ +Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). + +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". + +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: + +```sh +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend on where the Subgraph is being deployed. + +### Subgraph Studio + +```sh +graph auth +graph deploy +``` + +### Local Graph Node (based on default configuration) + +```sh +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 +``` + +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured + +We will provide more information on running the above components soon. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. 
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+ +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## References + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/nl/subgraphs/guides/polymarket.mdx b/website/src/pages/nl/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. + +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. 
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example will return the exact same output as above.
+
+### Sample Code from node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+    payout
+    redeemer
+    id
+    timestamp
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/nl/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/nl/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..e17e594408ff
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
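To make the risk concrete, here is a small sketch of the insecure client-side pattern (the key and the `<subgraph-id>` are placeholders, not real values): once the key is interpolated into the request URL in bundled frontend code, every visitor can read it in the browser's DevTools.

```javascript
// INSECURE sketch: in client-side React, this constant ships inside the JS bundle.
// 'my-secret-key' stands in for a real Subgraph Studio API key.
const API_KEY = 'my-secret-key'

// The key becomes part of the request URL itself.
const queryUrl = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/<subgraph-id>`

// Anyone inspecting the page's network requests sees the secret:
console.log(queryUrl.includes(API_KEY)) // true
```

The server component approach below keeps this interpolation on the server, so the URL (and the key inside it) never reaches the client.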
+
+### Using client-side rendering to query a Subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api-key>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({
+        query: /* GraphQL */ `
+          {
+            factories(first: 5) {
+              id
+              poolCount
+              txCount
+              totalVolumeUSD
+            }
+          }
+        `,
+      }),
+    },
+  )
+
+  const responseData = await response.json()
+  const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..bf62dc4dde30
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the release notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: all Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only immutable entities can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**: 
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed from them cannot use aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e., you can’t use regular event, call, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Begin
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance.
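To make the composed setup concrete, here is a rough sketch of what a Subgraph data source entry in the composed Subgraph's manifest might look like. The field values (deployment ID, entity and file names) are illustrative assumptions, not a definitive reference - check the graph-node v0.37.0 release notes for the exact shape.

```yaml
# Hypothetical manifest excerpt for the composed Block Stats Subgraph:
# a data source of kind `subgraph` pointing at a deployed source Subgraph.
specVersion: 1.3.0
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - BlockStats
      handlers:
        - handler: handleBlock
          entity: Block # entity from the source Subgraph that triggers the handler
      file: ./src/block-stats.ts
```

Note that `source.address` here is a deployment ID rather than a contract address, which is why redeploying a source Subgraph requires updating this field in the composed manifest.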
+ +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/nl/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/nl/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..91aa7484d2ec --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Quick and Easy Subgraph Debugging Using Forks +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! + +## Ok, what is it? + +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. + +## What?! How? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/nl/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/nl/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..a08e2a7ad8c9 --- /dev/null +++ b/website/src/pages/nl/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Safe Subgraph Code Generator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Why integrate with Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**: Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..9a4b037cafbc
--- /dev/null
+++ b/website/src/pages/nl/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy <subgraph-name> --ipfs-hash <ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/<your-own-api-key>/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/nl/subgraphs/querying/best-practices.mdx b/website/src/pages/nl/subgraphs/querying/best-practices.mdx
index ff5f381e2993..ab02b27cbc03 100644
--- a/website/src/pages/nl/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/nl/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Querying Best Practices
 
 The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
 
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
 
 ---
 
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
 
 However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
 
-- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
 - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
 - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
 - Fully typed result
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
 
 ### Use a single query to request multiple records
 
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
 
 Example of inefficient querying:
diff --git a/website/src/pages/nl/subgraphs/querying/from-an-application.mdx b/website/src/pages/nl/subgraphs/querying/from-an-application.mdx
index 27ee9b282f9a..bf6f8f1a5817 100644
--- a/website/src/pages/nl/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/nl/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
 ---
 title: Querying from an Application
+sidebarTitle: Querying from an App
 ---
 
 Learn how to query The Graph from your application.
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
 
 ### Subgraph Studio Endpoint
 
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
 
 ```
 https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
 
 ### The Graph Network Endpoint
 
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
 
 ```
 https://gateway.thegraph.com/api//subgraphs/id/
 ```
 
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
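+
+For example, once you have an endpoint and an API key, any GraphQL client can send a query like the following (the `tokens` entity here is purely illustrative; use an entity defined in your own Subgraph's schema):
+
+```graphql
+{
+  tokens(first: 5) {
+    id
+  }
+}
+```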
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Stap 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Stap 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Stap 1 diff --git a/website/src/pages/nl/subgraphs/querying/graph-client/README.md b/website/src/pages/nl/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/nl/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/nl/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/nl/subgraphs/querying/graphql-api.mdx b/website/src/pages/nl/subgraphs/querying/graphql-api.mdx index b3003ece651a..b82afcfa252c 100644 --- a/website/src/pages/nl/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/nl/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
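+
+For example, a schema that defines a `Token` entity (an illustrative name) would generate both a singular `token` field and a plural `tokens` field, which can be queried together:
+
+```graphql
+{
+  token(id: "1") {
+    id
+  }
+  tokens(first: 5) {
+    id
+  }
+}
+```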
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
Fulltext search operators:
 
-| Symbol | Operator | Description |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol | Operator    | Description                                                                                                                          |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&`    | `And`       | For combining multiple search terms into a filter for entities that include all of the provided terms                                |
+| `\|`   | `Or`        | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->`  | `Follow by` | Specify the distance between two words.                                                                                              |
+| `:*`   | `Prefix`    | Use the prefix search term to find words whose prefix match (2 characters required.)                                                 |
 
 #### Examples
 
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
 
 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
 
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`.
The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
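+
+For example, the following sketch queries the metadata fields documented in this section as of a specific block (the block number is a placeholder):
+
+```graphql
+{
+  _meta(block: { number: 123987 }) {
+    deployment
+    hasIndexingErrors
+    block {
+      number
+      hash
+      timestamp
+    }
+  }
+}
+```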
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/nl/subgraphs/querying/introduction.mdx b/website/src/pages/nl/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/nl/subgraphs/querying/introduction.mdx +++ b/website/src/pages/nl/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/nl/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/nl/subgraphs/querying/python.mdx b/website/src/pages/nl/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/nl/subgraphs/querying/python.mdx +++ b/website/src/pages/nl/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/nl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published.
 
 Example endpoint that uses Deployment ID:
 
@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:
 
 ## Subgraph ID
 
-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
 
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
 
 Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/nl/subgraphs/quick-start.mdx b/website/src/pages/nl/subgraphs/quick-start.mdx
index 746891a192bb..7efec0891fa6 100644
--- a/website/src/pages/nl/subgraphs/quick-start.mdx
+++ b/website/src/pages/nl/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
 title: Snelle Start
 ---
 
-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
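+
+The answers to these prompts are written into the generated manifest. As a rough sketch only (every value below is a placeholder; your scaffolded file will reflect your own contract, network, and events), a minimal `subgraph.yaml` looks something like this:
+
+```yaml
+specVersion: 0.0.5
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: MyContract # "Contract Name" prompt
+    network: mainnet # "Ethereum network" prompt
+    source:
+      address: "0x0000000000000000000000000000000000000000" # "Contract address" prompt
+      abi: MyContract # "ABI" prompt
+      startBlock: 1234567 # "Start Block" prompt
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.7
+      language: wasm/assemblyscript
+      entities:
+        - Transfer
+      abis:
+        - name: MyContract
+          file: ./abis/MyContract.json
+      eventHandlers: # added when indexing contract events as entities
+        - event: Transfer(indexed address,indexed address,uint256)
+          handler: handleTransfer
+      file: ./src/mapping.ts
+```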
-See the following screenshot for an example for what to expect when initializing your subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:
 
 ![Subgraph command](/img/CLI-Example.png)
 
-### 4. Edit your subgraph
+### 4. Edit your Subgraph
 
-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
 
-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:
 
-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
 - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.
 
-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
 
-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph
 
 > Remember, deploying is not the same as publishing.
 
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

![Subgraph logs](/img/subgraph-logs-image.png)

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.

#### Publishing with Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/nl/substreams/developing/dev-container.mdx b/website/src/pages/nl/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/nl/substreams/developing/dev-container.mdx +++ b/website/src/pages/nl/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/nl/substreams/developing/sinks.mdx b/website/src/pages/nl/substreams/developing/sinks.mdx index 5f6f9de21326..45e5471f0d09 100644 --- a/website/src/pages/nl/substreams/developing/sinks.mdx +++ b/website/src/pages/nl/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/nl/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/nl/substreams/developing/solana/account-changes.mdx
index a282278c7d91..8c821acaee3f 100644
--- a/website/src/pages/nl/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/nl/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu

> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.

-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
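A consumer of this account stream typically applies the same filtering the page describes: skip low-importance updates owned by the special vote program, while keeping deletion payloads. The sketch below is an assumption-heavy illustration; the dictionary field names (`account`, `owner`, `deleted`) loosely mirror the Protobuf description and are not its exact schema.

```python
# Illustrative only — field names are assumptions, not the real Protobuf schema.
VOTE_OWNER_PREFIX = "Vote11111111"  # special vote-program owner mentioned above

def relevant_updates(updates):
    """Keep meaningful account updates: drop vote-program account churn,
    but keep deletion payloads (flagged deleted == True)."""
    kept = []
    for u in updates:
        if u.get("owner", "").startswith(VOTE_OWNER_PREFIX):
            continue  # low-importance vote account change
        kept.append(u)
    return kept

updates = [
    {"account": "A", "owner": "ExampleProgram111", "deleted": False},
    {"account": "B", "owner": "Vote111111111111111111111111111111111111111", "deleted": False},
    {"account": "C", "owner": "ExampleProgram111", "deleted": True},  # deletion payload
]
print([u["account"] for u in relevant_updates(updates)])  # → ['A', 'C']
```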
diff --git a/website/src/pages/nl/substreams/developing/solana/transactions.mdx b/website/src/pages/nl/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/nl/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/nl/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/nl/substreams/introduction.mdx b/website/src/pages/nl/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/nl/substreams/introduction.mdx +++ b/website/src/pages/nl/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/nl/substreams/publishing.mdx b/website/src/pages/nl/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/nl/substreams/publishing.mdx +++ b/website/src/pages/nl/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/nl/supported-networks.mdx b/website/src/pages/nl/supported-networks.mdx index f0ee53a26864..9ba4b8d0ab99 100644 --- a/website/src/pages/nl/supported-networks.mdx +++ b/website/src/pages/nl/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Ondersteunde Netwerken hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/nl/token-api/_meta-titles.json b/website/src/pages/nl/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/nl/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/nl/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/nl/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/nl/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
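Per the FAQ later in these docs, a balances request targets a wallet via `/balances/evm/{address}` and can take an optional `network_id` plus `limit`/`page` pagination parameters. A minimal, hedged sketch of composing such a request URL (the base URL, path, and parameter names come from this documentation; nothing is actually sent):

```python
from urllib.parse import urlencode

BASE = "https://token-api.thegraph.com"

def balances_url(address, network_id=None, limit=10, page=1):
    """Build the EVM Balances request URL. limit defaults to the API's
    10-item cap and may go up to 500; page is 1-indexed."""
    if not (1 <= limit <= 500):
        raise ValueError("limit must be between 1 and 500")
    params = {"limit": limit, "page": page}
    if network_id:  # e.g. "mainnet", "bsc", "base", "arbitrum-one", "optimism", "matic"
        params["network_id"] = network_id
    return f"{BASE}/balances/evm/{address}?{urlencode(params)}"

# Hypothetical wallet address, for illustration only.
url = balances_url("0x" + "ab" * 20, network_id="matic", limit=50, page=2)
print(url)
```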
diff --git a/website/src/pages/nl/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/nl/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/nl/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/nl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/nl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/nl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV prices by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/nl/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/nl/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/nl/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
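As the FAQ notes, raw amounts and supplies come back as strings to avoid precision loss, while `decimals` is a plain number. A small hedged sketch of turning a raw string amount into a human-readable value with a big-number type (the example figures are hypothetical, not real API output):

```python
from decimal import Decimal

def human_amount(raw: str, decimals: int) -> Decimal:
    """Convert a string-encoded raw token amount into a human-readable value.
    Raw amounts exceed the float / JS safe-integer range, so Decimal keeps
    full precision instead of rounding."""
    return Decimal(raw) / (Decimal(10) ** decimals)

# e.g. a token using 18 decimals, as most ERC-20s do
print(human_amount("1500000000000000000", 18))  # 1.5
```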
diff --git a/website/src/pages/nl/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/nl/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/nl/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/nl/token-api/faq.mdx b/website/src/pages/nl/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/nl/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
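The header mistakes called out above (missing `Bearer` prefix, empty or mispasted token) are easy to guard against programmatically. A small illustrative helper; the header names are the ones this FAQ documents, and the sample token is fake:

```python
def auth_headers(jwt: str) -> dict:
    """Build the headers the Token API expects, failing fast on common
    mistakes: an empty token, or a token with 'Bearer ' already pasted in."""
    token = jwt.strip()
    if not token:
        raise ValueError("access token is empty")
    if token.lower().startswith("bearer "):
        raise ValueError("pass the bare JWT; the 'Bearer ' prefix is added here")
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",  # recommended, though JSON is the default
    }

print(auth_headers("eyJ-example-token")["Authorization"])  # Bearer eyJ-example-token
```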
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/nl/token-api/mcp/claude.mdx b/website/src/pages/nl/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..12a036b6fc24 --- /dev/null +++ b/website/src/pages/nl/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file. 
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/nl/token-api/mcp/cline.mdx b/website/src/pages/nl/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..ef98e45939fe --- /dev/null +++ b/website/src/pages/nl/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/nl/token-api/mcp/cursor.mdx b/website/src/pages/nl/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..658108d1337b --- /dev/null +++ b/website/src/pages/nl/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). 
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. 
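Once access is working (via MCP or plain REST), responses follow the conventions described in the Token API FAQ above: results are wrapped in a top-level `data` array, and large token amounts arrive as strings to avoid precision loss. A minimal parsing sketch is below — the field names `contract`, `amount`, and `decimals` are illustrative assumptions, not a guaranteed response schema:

```javascript
// Sketch of parsing a Token API balances response, per the FAQ above.
// Assumptions: the response wraps results in `data`, and each entry
// has `contract`, a string `amount`, and a numeric `decimals` field.
function parseBalances(response) {
  // All Token API responses wrap results in a top-level `data` array,
  // even for a single item.
  const results = response.data
  return results.map((entry) => {
    // Amounts are strings because they often exceed Number.MAX_SAFE_INTEGER;
    // convert to BigInt before doing arithmetic.
    const raw = BigInt(entry.amount)
    // `decimals` is a plain number, used to derive a human-readable value.
    const whole = raw / 10n ** BigInt(entry.decimals)
    return { contract: entry.contract, raw, whole }
  })
}

// Example with a mocked response:
const mock = { data: [{ contract: '0xabc', amount: '1500000000000000000', decimals: 18 }] }
console.log(parseBalances(mock)[0].whole) // 1n
```

Note that integer division with `BigInt` truncates the fractional part; for display purposes you may want a decimal library instead.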
diff --git a/website/src/pages/nl/token-api/monitoring/get-health.mdx b/website/src/pages/nl/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/nl/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/nl/token-api/monitoring/get-networks.mdx b/website/src/pages/nl/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/nl/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/nl/token-api/monitoring/get-version.mdx b/website/src/pages/nl/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/nl/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/nl/token-api/quick-start.mdx b/website/src/pages/nl/token-api/quick-start.mdx new file mode 100644 index 000000000000..b1b07812ba97 --- /dev/null +++ b/website/src/pages/nl/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Snelle Start +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
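As the FAQ above notes, the API accepts 40-character hex addresses with or without the `0x` prefix, case-insensitively. A small helper to validate and normalize an address before building request URLs might look like this (a sketch, not part of any official client):

```javascript
// Sketch: validate and normalize an EVM address for Token API requests.
// The API itself is tolerant (prefix optional, case-insensitive), so this
// helper only guards against malformed input before it reaches the URL.
function normalizeAddress(address) {
  // Strip the optional 0x prefix.
  const hex = address.startsWith('0x') ? address.slice(2) : address
  // Require exactly 40 hex digits (20 bytes).
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) {
    throw new Error(`Invalid EVM address: ${address}`)
  }
  return '0x' + hex.toLowerCase()
}

console.log(normalizeAddress('2a0c0dbecc7e4d658f48e01e3fa353f44050c208'))
// → 0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208
```

The normalized value can then be interpolated into an endpoint such as `https://token-api.thegraph.com/balances/evm/${address}`.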
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/pl/about.mdx b/website/src/pages/pl/about.mdx index 199bc6a77400..abfc28d9390b 100644 --- a/website/src/pages/pl/about.mdx +++ b/website/src/pages/pl/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. 
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![Grafika wyjaśniająca sposób w jaki protokół The Graph wykorzystuje węzeł Graph Node by obsługiwać zapytania dla konsumentów danych](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Proces ten przebiega według poniższych kroków: 1. Aplikacja dApp dodaje dane do sieci Ethereum za pomocą transakcji w smart kontrakcie. 2. Inteligentny kontrakt emituje jedno lub więcej zdarzeń podczas przetwarzania transakcji. -3. Graph Node nieprzerwanie skanuje sieć Ethereum w poszukiwaniu nowych bloków i danych dla Twojego subgraphu, które mogą one zawierać. -4. Graph Node znajduje zdarzenia Ethereum dla Twojego subgraphu w tych blokach i uruchamia dostarczone przez Ciebie procedury mapowania. Mapowanie to moduł WASM, który tworzy lub aktualizuje jednostki danych przechowywane przez węzeł Graph Node w odpowiedzi na zdarzenia Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Aplikacja dApp wysyła zapytanie do węzła Graph Node o dane zindeksowane na blockchainie, korzystając z [punktu końcowego GraphQL](https://graphql.org/learn/). Węzeł Graph Node przekształca zapytania GraphQL na zapytania do swojego podstawowego magazynu danych w celu pobrania tych danych, wykorzystując zdolności indeksowania magazynu. Aplikacja dApp wyświetla te dane w interfejsie użytkownika dla użytkowników końcowych, którzy używają go do tworzenia nowych transakcji w sieci Ethereum. Cykl się powtarza. ## Kolejne kroki -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. 
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx index 8e3f51fe99c9..8322010a2d88 100644 --- a/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/pl/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Bezpieczeństwo jako spuścizna sieci Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 
W zeszłym roku społeczność The Graph postanowiła pójść o krok do przodu z Arbitrum po wynikach dyskusji [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ By w pełni wykorzystać wszystkie zalety używania protokołu The Graph na L2 w ![Przejście do listy zawierającej Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Co powinien wiedzieć na ten temat subgraf developer, konsument danych, indekser, kurator lub delegator? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Wszystko zostało dokładnie przetestowane i przygotowano plan awaryjny, aby zapewnić bezpieczne i płynne przeniesienie. Szczegóły można znaleźć [tutaj](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? 
diff --git a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx index c7f851bd8d87..50b904d5ef38 100644 --- a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con Narzędzia przesyłania L2 używają natywnego mechanizmu Arbitrum do wysyłania wiadomości z L1 do L2. Mechanizm ten nazywany jest "ponowny bilet" i jest używany przez wszystkie natywne mosty tokenowe, w tym most Arbitrum GRT. Więcej informacji na temat "ponownych biletów" można znaleźć w [dokumentacji Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Kiedy przenosisz swoje aktywa (subgraph, stake, delegowanie lub kuratorstwo) do L2, wiadomość jest wysyłana przez most Arbitrum GRT, który tworzy bilet z możliwością ponownej próby w L2. Narzędzie transferu zawiera pewną wartość ETH w transakcji, która jest wykorzystywana do 1) zapłaty za utworzenie biletu i 2) zapłaty za gaz do wykonania biletu w L2. Ponieważ jednak ceny gazu mogą się różnić w czasie do momentu, gdy bilet będzie gotowy do zrealizowania w L2, możliwe jest, że ta próba automatycznego wykonania zakończy się niepowodzeniem. Gdy tak się stanie, most Arbitrum utrzyma ten bilet aktywnym przez maksymalnie 7 dni, i każdy może ponowić próbę "zrealizowania" biletu (co wymaga portfela z pewną ilością ETH pzesłanego do Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. 
However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Nazywamy to etapem "Potwierdzenia" we wszystkich narzędziach do przesyłania - w większości przypadków będzie on wykonywany automatycznie, ponieważ najczęściej kończy się sukcesem, ale ważne jest, aby sprawdzić i upewnić się, że się powiódł. Jeśli się nie powiedzie i w ciągu 7 dni nie będzie skutecznych ponownych prób, most Arbitrum odrzuci bilet, a twoje zasoby ( subgraf, stake, delegowanie lub kuratorstwo) zostaną utracone i nie będzie można ich odzyskać. Główni programiści Graph mają system monitorowania, który wykrywa takie sytuacje i próbuje zrealizować bilety, zanim będzie za późno, ale ostatecznie to ty jesteś odpowiedzialny za zapewnienie, że przesyłanie zostanie zakończone na czas. Jeśli masz problemy z potwierdzeniem transakcji, skontaktuj się z nami za pomocą [tego formularza] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), a nasi deweloperzy udzielą Ci pomocy. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. 
If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Subgraph Transfer -### Jak mogę przenieść swój subgraph? +### How do I transfer my Subgraph? -Aby przesłać swój subgraf, należy wykonać następujące kroki: +To transfer your Subgraph, you will need to complete the following steps: 1. Zainicjuj przesyłanie w sieci głównej Ethereum 2. Poczekaj 20 minut na potwierdzenie -3. Potwierdź przesyłanie subgrafu na Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Zakończ publikowanie subgrafu na Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Zaktualizuj adres URL zapytania (zalecane) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Skąd powinienem zainicjować przesyłanie? 
-Przesyłanie można zainicjować ze strony [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) lub dowolnej strony zawierającej szczegóły subgrafu. Kliknij przycisk "Prześlij subgraf " na tej stronie, aby zainicjować proces przesyłania. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Jak długo muszę czekać, aż mój subgraf zostanie przesłany +### How long do I need to wait until my Subgraph is transferred Przesyłanie trwa około 20 minut. Most Arbitrum działa w tle, automatycznie kończąc przesyłanie danych. W niektórych przypadkach koszty gazu mogą wzrosnąć i konieczne będzie ponowne potwierdzenie transakcji. -### Czy mój subgraf będzie nadal wykrywalny po przesłaniu go do L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Twój subgraf będzie można znaleźć tylko w sieci, w której został opublikowany. Na przykład, jeśli subgraf znajduje się w Arbitrum One, można go znaleźć tylko w Eksploratorze w Arbitrum One i nie będzie można go znaleźć w Ethereum. Upewnij się, że wybrałeś Arbitrum One w przełączniku sieci u góry strony i że jesteś we właściwej sieci. Po przesłaniu subgraf L1 będzie oznaczony jako nieaktualny. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Czy mój subgraf musi zostać opublikowany, aby móc go przesłać? +### Does my Subgraph need to be published to transfer it? 
-Aby skorzystać z narzędzia do przesyłania subgrafów, musi on być już opublikowany w sieci głównej Ethereum i musi mieć jakiś sygnał kuratorski należący do portfela, który jest właścicielem subgrafu. Jeśli subgraf nie został opublikowany, zaleca się po prostu opublikowanie go bezpośrednio na Arbitrum One - związane z tym opłaty za gaz będą znacznie niższe. Jeśli chcesz przesłać opublikowany subgraf, ale konto właściciela nie ma na nim żadnego sygnału, możesz zasygnalizować niewielką kwotę (np. 1 GRT) z tego konta; upewnij się, że wybrałeś sygnał "automatycznej migracji". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Co stanie się z wersją mojego subgrafu w sieci głównej Ethereum po przesłaniu go do Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Po przesłaniu subgrafu do Arbitrum, wersja głównej sieci Ethereum zostanie wycofana. Zalecamy zaktualizowanie adresu URL zapytania w ciągu 48 godzin. Istnieje jednak okres prolongaty, dzięki któremu adres URL sieci głównej będzie dalej funkcjonował, tak aby można było zaktualizować obsługę innych aplikacji. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Czy po przesłaniu muszę również ponownie opublikować na Arbitrum? 
@@ -80,21 +80,21 @@ Po upływie 20-minutowego okna przesyłania konieczne będzie jego potwierdzenie ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Czy publikowanie i wersjonowanie jest takie samo w L2 jak w sieci głównej Ethereum? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Czy kurator mojego subgrafu będzie się przemieszczał wraz z moim subgrafem? +### Will my Subgraph's curation move with my Subgraph? -Jeśli wybrałeś automatyczną migrację sygnału, 100% twojego własnego kuratorstwa zostanie przeniesione wraz z subgrafem do Arbitrum One. Cały sygnał kuratorski subgrafu zostanie przekonwertowany na GRT w momencie transferu, a GRT odpowiadający sygnałowi kuratorskiemu zostanie użyty do zmintowania sygnału na subgrafie L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Inni kuratorzy mogą zdecydować, czy wycofać swoją część GRT, czy też przesłać ją do L2 w celu zmintowania sygnału na tym samym subgrafie. 
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Czy mogę przenieść swój subgraf z powrotem do głównej sieci Ethereum po jego przesłaniu? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Po przesłaniu, wersja tego subgrafu w sieci głównej Ethereum zostanie wycofana. Jeśli chcesz ją przywrócić do sieci głównej, musisz ją ponownie wdrożyć i opublikować. Jednak przeniesienie z powrotem do sieci głównej Ethereum nie jest zalecane, ponieważ nagrody za indeksowanie zostaną całkowicie rozdzielone na Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Dlaczego potrzebuję bridgowanego ETH do przesłania? @@ -206,19 +206,19 @@ Aby przesłać swoje kuratorstwo, należy wykonać następujące kroki: \*Jeżeli będzie wymagane - np. w przypadku korzystania z adresu kontraktu. -### Skąd będę wiedzieć, czy subgraf, którego jestem kuratorem, został przeniesiony do L2? +### How will I know if the Subgraph I curated has moved to L2? -Podczas przeglądania strony ze szczegółami subgrafu pojawi się baner informujący, że subgraf został przeniesiony. Możesz postępować zgodnie z wyświetlanymi instrukcjami, aby przesłać swoje kuratorstwo. Informacje te można również znaleźć na stronie ze szczegółami subgrafu każdego z tych, które zostały przeniesione. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Co jeśli nie chcę przenosić swojego kuratorstwa do L2? 
-Gdy subgraf jest nieaktualny, masz możliwość wycofania swojego sygnału. Podobnie, jeśli subgraf został przeniesiony do L2, możesz wycofać swój sygnał w sieci głównej Ethereum lub wysłać sygnał do L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Skąd mam wiedzieć, że moje kuratorstwo zostało pomyślnie przesłane? Szczegóły sygnału będą dostępne za pośrednictwem Eksploratora po upływie ok. 20 minut od uruchomienia narzędzia do przesyłania L2. -### Czy mogę przesłać swoje kuratorstwo do więcej niż jednego subgrafu na raz? +### Can I transfer my curation on more than one Subgraph at a time? Obecnie nie ma opcji zbiorczego przesyłania. @@ -266,7 +266,7 @@ Przesyłanie stake'a przez narzędzie do przesyłania L2 zajmie około 20 minut. ### Czy muszę indeksować na Arbitrum, zanim przekażę swój stake? -Możesz skutecznie przesłać swój stake przed skonfigurowaniem indeksowania, lecz nie będziesz w stanie odebrać żadnych nagród na L2, dopóki nie alokujesz do subgrafów na L2, nie zindeksujesz ich i nie podasz POI. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Czy delegaci mogą przenieść swoje delegacje, zanim ja przeniosę swój indeksujący stake? diff --git a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx index 2e4e4050450e..91e2f52b8525 100644 --- a/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/pl/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ Graph ułatwił przeniesienie danych do L2 na Arbitrum One. 
Dla każdego uczestn Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Jak przenieść swój subgraph do Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs Społeczność i deweloperzy Graph [przygotowywali się](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) do przejścia na Arbitrum w ciągu ostatniego roku. Arbitrum, blockchain warstwy 2 lub "L2", dziedziczy bezpieczeństwo po Ethereum, ale zapewnia znacznie niższe opłaty za gaz. -When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. 
This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. 
If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. 
-**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. 
If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (0.e.g. 01 ETH) for your transaction to be approved. -## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). 
-Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). 
-If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. 
In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. @@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1.

## Step 5: Updating the query URL

-Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be :
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.

## How to transfer your curation to Arbitrum (L2)

-## Understanding what happens to curation on subgraph transfers to L2
+## Understanding what happens to curation on Subgraph transfers to L2

-When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.

-This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.

-A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## Choosing your L2 wallet

@@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho

Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough.

-If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.

-When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

![Transfer signal](/img/transferSignalL2TransferTools.png)

@@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E

## Withdrawing your curation on L1

-If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/pl/archived/sunrise.mdx b/website/src/pages/pl/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/pl/archived/sunrise.mdx +++ b/website/src/pages/pl/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? 
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.

-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?

-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).

## About the Upgrade Indexer

> The upgrade Indexer is currently active.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.

### What does the upgrade Indexer do?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
diff --git a/website/src/pages/pl/global.json b/website/src/pages/pl/global.json index 9b22568b5199..5c981e17bd1c 100644 --- a/website/src/pages/pl/global.json +++ b/website/src/pages/pl/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgrafy", "substreams": "Substreams", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Description", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Description", + "liveResponse": "Live Response", + "example": "Example" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/pl/index.json b/website/src/pages/pl/index.json index 8670eb1a59ae..ca9ba66107b7 100644 --- a/website/src/pages/pl/index.json +++ b/website/src/pages/pl/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgrafy", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Wspierane sieci", + "details": "Network Details", + "services": "Services", + "type": "Type", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Dokumenty", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgrafy", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Billing", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/pl/indexing/chain-integration-overview.mdx b/website/src/pages/pl/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/pl/indexing/chain-integration-overview.mdx +++ b/website/src/pages/pl/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/pl/indexing/new-chain-integration.mdx b/website/src/pages/pl/indexing/new-chain-integration.mdx index e45c4b411010..c401fa57b348 100644 --- a/website/src/pages/pl/indexing/new-chain-integration.mdx +++ b/website/src/pages/pl/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, in a JSON-RPC batch request -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/pl/indexing/overview.mdx b/website/src/pages/pl/indexing/overview.mdx index 914b04e0bf47..0b9b31f5d22d 100644 --- a/website/src/pages/pl/indexing/overview.mdx +++ b/website/src/pages/pl/indexing/overview.mdx @@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network. -Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing. +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. 
applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). 
You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. 
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. 
The mechanisms for defining Indexer agent behavior are the **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent, known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using the **actions queue** and explicitly approve them before they are executed. Under oversight mode, **indexing rules** are used to populate the **actions queue** and also require explicit approval for execution.
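Where `decisionBasis` is `rules`, the agent's choice reduces to comparing each rule's non-null thresholds against values fetched from the network for a deployment. A minimal sketch of that comparison (the `should_index` helper and the dict shapes are hypothetical illustrations, not the agent's actual code):

```python
# Illustrative sketch of the `rules` decision basis: a deployment is chosen
# for indexing when any non-null threshold on the rule is satisfied.
# The real logic lives in the Indexer agent; field names mirror the
# indexing-rule fields (minStake, minSignal, ...).

def should_index(rule, deployment):
    """Return True if any non-null threshold in `rule` is met."""
    checks = [
        ("minStake", lambda r, d: d["stakedTokens"] >= r["minStake"]),
        ("minSignal", lambda r, d: d["signalledTokens"] >= r["minSignal"]),
    ]
    for field, predicate in checks:
        if rule.get(field) is not None and predicate(rule, deployment):
            return True
    return False

# A global rule with a minStake of 5 (GRT) indexes any deployment staked above it.
rule = {"minStake": 5, "minSignal": None}
print(should_index(rule, {"stakedTokens": 12, "signalledTokens": 0}))  # True
print(should_index(rule, {"stakedTokens": 3, "signalledTokens": 0}))   # False
```

The null checks matter: only thresholds an Indexer has explicitly set participate in the decision.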
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. 
``` setDelegationParameters(950000, 600000, 500) @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/pl/indexing/supported-network-requirements.mdx b/website/src/pages/pl/indexing/supported-network-requirements.mdx index df15ef48d762..ce9919503666 100644 --- a/website/src/pages/pl/indexing/supported-network-requirements.mdx +++ b/website/src/pages/pl/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/pl/indexing/tap.mdx b/website/src/pages/pl/indexing/tap.mdx index 3bab672ab211..477534d63201 100644 --- a/website/src/pages/pl/indexing/tap.mdx +++ b/website/src/pages/pl/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Overview -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requirements +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
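The receipt-to-RAV flow can be pictured with a toy model, assuming simplified unsigned values (real receipts and RAVs are cryptographically signed and verified onchain; the classes here are illustrative stand-ins, not the actual `tap_core` types):

```python
# Toy model of GraphTally aggregation: many small per-query receipts are folded
# into a single Receipt Aggregate Voucher (RAV). Updating a RAV with newer
# receipts yields a new RAV with an increased value, so only one onchain
# settlement is needed for many queries. Illustrative only.
from dataclasses import dataclass

@dataclass
class Receipt:
    value: int  # payment amount for one query

@dataclass
class RAV:
    aggregate_value: int  # running total of all aggregated receipts

def aggregate(receipts, previous=None):
    """Fold new receipts into a RAV; the aggregate value never decreases."""
    base = previous.aggregate_value if previous else 0
    return RAV(aggregate_value=base + sum(r.value for r in receipts))

rav = aggregate([Receipt(3), Receipt(2)])    # first RAV: value 5
rav = aggregate([Receipt(4)], previous=rav)  # updated RAV: value 9
print(rav.aggregate_value)  # 9
```

In the real system this folding is what `tap-agent` requests from the aggregator on the Indexer's behalf.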
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/pl/indexing/tooling/graph-node.mdx b/website/src/pages/pl/indexing/tooling/graph-node.mdx index 0250f14a3d08..f5778789213d 100644 --- a/website/src/pages/pl/indexing/tooling/graph-node.mdx +++ b/website/src/pages/pl/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
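"Ordered, yet fork-aware" can be made concrete with a toy consumer: blocks arrive in order, and a chain reorganization is signalled by rolling recent blocks back before their replacements arrive. The step names below are illustrative, not the exact Firehose protocol fields:

```python
# Toy fork-aware stream consumer: a "new" step applies a block, an "undo"
# step rolls the most recent block back after a reorg. The actual Firehose
# gRPC protocol differs in detail; this only illustrates the consumption model.

def consume(stream):
    """Replay a stream of (step, block_number) events into a canonical chain."""
    chain = []
    for step, block in stream:
        if step == "new":
            chain.append(block)
        elif step == "undo":  # chain reorganization: drop the orphaned block
            assert chain and chain[-1] == block, "undo must target the chain tip"
            chain.pop()
    return chain

# Blocks 1..3 arrive, then block 3 is reorged out and replaced.
events = [("new", 1), ("new", 2), ("new", 3), ("undo", 3), ("new", 3)]
print(consume(events))  # [1, 2, 3]
```

Because undo steps are explicit, a consumer never has to detect reorgs itself, which is what makes deterministic indexing at scale tractable.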
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. 
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes) and [block ingestors](#dedicated-block-ingestion), and to split Subgraphs across nodes with [deployment rules](#deployment-rules).
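Splitting Subgraphs across nodes with deployment rules amounts to first-match routing over the rule list. A toy sketch of those semantics, assuming simplified rule dicts rather than the real `config.toml` parsing:

```python
# Toy first-match deployment routing: each rule may constrain the network,
# and a rule without a 'match' catches everything, mirroring the config.toml
# semantics. Hypothetical helper, not Graph Node code.

def pick_indexers(rules, network):
    """Return the indexers from the first rule whose match applies."""
    for rule in rules:
        match = rule.get("match")
        if match is None:
            return rule["indexers"]  # no 'match': any Subgraph matches
        if network in match.get("network", [network]):
            return rule["indexers"]
    return []

rules = [
    {"match": {"network": ["xdai", "poa-core"]}, "indexers": ["index_node_other_0"]},
    {"indexers": ["index_node_community_0"]},  # catch-all rule
]
print(pick_indexers(rules, "xdai"))     # ['index_node_other_0']
print(pick_indexers(rules, "mainnet"))  # ['index_node_community_0']
```

Because routing is first-match, rule order matters: the catch-all rule must come last or it will shadow more specific rules.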
Example deployment rule configuration: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. -> It is generally better make a single database as big as possible, before starting with shards. 
One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. +> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs. In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. @@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. 
The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. 
See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness:
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. 
In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. 
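As a hedged sketch of the clear-and-rewind flow just described (the `graphman chain check-blocks` subcommand is documented in the Graph Node repository, but the chain name, block number, and config path below are placeholders, and exact flags vary by Graph Node version — verify against `graphman --help` before running):

```shell
# Compare a cached block against what the provider currently returns;
# if they disagree, the stale copy is removed from the block cache and
# will be refetched on demand:
graphman --config config.toml chain check-blocks mainnet by-number 17000000

# After clearing the poisoned entries, rewind the affected deployments
# (see `graphman rewind` in the docs linked above for the exact arguments).
```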
If a block cache inconsistency is suspected, such as a tx receipt missing an event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. 
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). 
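A minimal sketch of the removal command, assuming the deployment is identified by its database namespace (the config path and `sgd42` value are placeholders; check the `graphman drop` documentation linked above for your version's exact syntax):

```shell
# Deletes the deployment and its indexed data. The identifier could equally
# be a Subgraph name or an IPFS hash (Qm..), as described above.
graphman --config config.toml drop sgd42
```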
diff --git a/website/src/pages/pl/indexing/tooling/graphcast.mdx b/website/src/pages/pl/indexing/tooling/graphcast.mdx index 18639dc9acc8..a790c5800c7e 100644 --- a/website/src/pages/pl/indexing/tooling/graphcast.mdx +++ b/website/src/pages/pl/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Obecnie koszt przekazywania informacji innym uczestnikom sieci jest uzależniony SDK Graphcast (Software Development Kit) umożliwia programistom budowanie "Radios", czyli aplikacji opartych na przekazywaniu plotek, które indekserzy mogą uruchamiać w celu spełnienia określonego zadania. Planujemy również stworzyć kilka takich aplikacji Radios (lub udzielać wsparcia innym programistom/zespołom, które chcą w ich budowaniu uczestniczyć) dla następujących przypadków użycia: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Przeprowadzanie aukcji i koordynacja synchronizacji warp subgrafów, substreamów oraz danych Firehose od innych indekserów. -- Raportowanie na temat aktywnej analizy zapytań, w tym wolumenów zapytań do subgrafów, wolumenów opłat itp. -- Raportowanie na temat analizy indeksowania, w tym czasu indeksowania subgrafów, kosztów gazu dla osób obsługujących zapytanie, napotkanych błędów indeksowania itp. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Raportowanie informacji na temat stosu, w tym wersji graph-node, wersji Postgres oraz wersji klienta Ethereum itp. 
### Dowiedz się więcej diff --git a/website/src/pages/pl/resources/benefits.mdx b/website/src/pages/pl/resources/benefits.mdx index d788b11bcd7a..8eb098f7a76c 100644 --- a/website/src/pages/pl/resources/benefits.mdx +++ b/website/src/pages/pl/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | Sieć The Graph | +| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per 
month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | Sieć The Graph | +| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | Sieć The Graph | +| :--------------------------: | :-----------------------------------------: | 
:-------------------------------------------------------------: | +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). 
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/pl/resources/glossary.mdx b/website/src/pages/pl/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/pl/resources/glossary.mdx +++ b/website/src/pages/pl/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. 
The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. 
Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. 
Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. 
When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. 
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. 
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/pl/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing.
### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/pl/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them.
Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/pl/resources/roles/curating.mdx b/website/src/pages/pl/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/pl/resources/roles/curating.mdx +++ b/website/src/pages/pl/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate.
The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network.
Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate.
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. 
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). 
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. 
Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. 
As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/pl/resources/roles/delegating/undelegating.mdx b/website/src/pages/pl/resources/roles/delegating/undelegating.mdx index c3e31e653941..6a361c508450 100644 --- a/website/src/pages/pl/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/pl/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5.
Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/pl/resources/subgraph-studio-faq.mdx b/website/src/pages/pl/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/pl/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/pl/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! 
You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. 
These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network. diff --git a/website/src/pages/pl/resources/tokenomics.mdx b/website/src/pages/pl/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/pl/resources/tokenomics.mdx +++ b/website/src/pages/pl/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network.
In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. 
However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. 
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. 
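Two of the numeric rules described on this page — the Delegator reward estimate (e.g., 15k GRT at a 10% cut) and the 16x delegation-capacity cap — can be sketched as follows. This is an illustrative sketch only: the function names are hypothetical, and actual rewards vary with self-stake, quality of service, and other protocol factors as noted above.

```typescript
const MAX_DELEGATION_RATIO = 16 // an Indexer can use up to 16x its self-stake in delegation

// Illustrative estimate of a Delegator's annual rewards: delegated GRT times the
// Indexer's advertised cut (e.g., 15,000 GRT at 10% ≈ 1,500 GRT per year).
function estimateAnnualDelegatorRewards(delegatedGrt: number, cutPercent: number): number {
  return (delegatedGrt * cutPercent) / 100
}

// Delegation beyond 16x the self-stake is not usable until the Indexer raises its self-stake.
function usableDelegation(selfStakeGrt: number, delegatedGrt: number): number {
  return Math.min(delegatedGrt, selfStakeGrt * MAX_DELEGATION_RATIO)
}

console.log(estimateAnnualDelegatorRewards(15_000, 10)) // 1500
console.log(usableDelegation(100_000, 2_000_000)) // capped at 1600000; the excess sits idle
```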
## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.
![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/pl/sps/introduction.mdx b/website/src/pages/pl/sps/introduction.mdx index 3e59ddaa10af..8c9483eb8feb 100644 --- a/website/src/pages/pl/sps/introduction.mdx +++ b/website/src/pages/pl/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Wstęp --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). 
In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/pl/sps/sps-faq.mdx b/website/src/pages/pl/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/pl/sps/sps-faq.mdx +++ b/website/src/pages/pl/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? 
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node.
Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams?
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. 
They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules, linking them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs.
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/pl/sps/triggers.mdx b/website/src/pages/pl/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/pl/sps/triggers.mdx +++ b/website/src/pages/pl/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/pl/sps/tutorial.mdx b/website/src/pages/pl/sps/tutorial.mdx index f1126226dbcb..229b643ecda5 100644 --- a/website/src/pages/pl/sps/tutorial.mdx +++ b/website/src/pages/pl/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Jak zacząć? 
@@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph ``` @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID to Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly'
You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/pl/subgraphs/_meta-titles.json b/website/src/pages/pl/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/pl/subgraphs/_meta-titles.json +++ b/website/src/pages/pl/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. 
A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared `eth_calls` can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
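The storage pattern `@derivedFrom` implies can be sketched in plain TypeScript (illustrative only — `Comment`, `commentsForPost`, and the sample rows are invented for this sketch and are not graph-ts APIs): instead of a stored, ever-growing `comments` array on `Post`, each `Comment` records its post's ID and the list is derived by filtering on that field at query time.

```typescript
// Each Comment stores the ID of its Post; the Post stores no comment array.
interface Comment {
  id: string
  post: string // analogous to the "post" field named in @derivedFrom(field: "post")
}

const comments: Comment[] = [
  { id: 'c1', post: 'p1' },
  { id: 'c2', post: 'p1' },
  { id: 'c3', post: 'p2' },
]

// The derived field: resolved by filtering the Comment side at query time,
// so adding a comment never rewrites a large array stored on the Post entity.
function commentsForPost(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId)
}

console.log(commentsForPost('p1').length) // 2
```

Because writes only ever touch the "many" side, entity sizes stay small no matter how many comments accumulate.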
diff --git a/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
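The difference between the two ID styles can be sketched in plain TypeScript (this is an analogy, not the graph-ts API — `txHash`, `logIndex`, and the byte order used are stand-ins chosen for the sketch; in a real mapping you would write `event.transaction.hash.concatI32(event.logIndex.toI32())`).

```typescript
// A fake 32-byte transaction hash and log index, standing in for event data.
const txHash = new Uint8Array(32).fill(0xab)
const logIndex = 5

// Append a 4-byte i32 to a byte array (graph-ts exposes this as concatI32).
// The little-endian byte order here is an assumption of this sketch.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4)
  out.set(bytes, 0)
  new DataView(out.buffer).setInt32(bytes.length, value, true)
  return out
}

const bytesId = concatI32(txHash, logIndex)
console.log(bytesId.length) // 36 — a compact, fixed-size binary ID

// The string-concatenation pattern produces a much larger text key:
const hex = Array.from(txHash, (b) => b.toString(16).padStart(2, '0')).join('')
const stringId = hex + '-' + logIndex.toString()
console.log(stringId.length) // 66
```

A 36-byte binary key both stores and compares faster than a 66-character string, which is where the indexing and querying gains come from.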
diff --git a/website/src/pages/pl/subgraphs/best-practices/pruning.mdx b/website/src/pages/pl/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/pl/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx b/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/pl/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users.
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/pl/subgraphs/billing.mdx b/website/src/pages/pl/subgraphs/billing.mdx index 511ac8067271..4dff0690a1ba 100644 --- a/website/src/pages/pl/subgraphs/billing.mdx +++ b/website/src/pages/pl/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/pl/subgraphs/cookbook/arweave.mdx b/website/src/pages/pl/subgraphs/cookbook/arweave.mdx index 2372025621d1..e59abffa383f 100644 --- a/website/src/pages/pl/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3.
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/pl/subgraphs/cookbook/enums.mdx b/website/src/pages/pl/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/pl/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/pl/subgraphs/cookbook/grafting.mdx b/website/src/pages/pl/subgraphs/cookbook/grafting.mdx index 57d5169830a7..d9abe0e70d2a 100644 --- a/website/src/pages/pl/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. 
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio.
-3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
-4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground

```graphql
{
@@ -185,9 +185,9 @@ It should return the following:
}
```

-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af).
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/pl/subgraphs/cookbook/near.mdx b/website/src/pages/pl/subgraphs/cookbook/near.mdx index 6060eb27e761..e78a69eb7fa2 100644 --- a/website/src/pages/pl/subgraphs/cookbook/near.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs? 
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. 
-There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. 
This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. 
We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/pl/subgraphs/cookbook/polymarket.mdx b/website/src/pages/pl/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/pl/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. 
+The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
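+A minimal sketch of the request flow described above, using Node's built-in `fetch`. The Subgraph ID is taken from the Explorer URL earlier in this guide; the API key and the `markets` query are placeholders to replace with your own:
+
+```typescript
+// Polymarket Subgraph ID from the Graph Explorer URL above.
+const POLYMARKET_ID = "Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp";
+
+// Wrap a GraphQL query string in the JSON POST body the gateway expects.
+function buildGraphQLRequest(query: string) {
+  return {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  };
+}
+
+// Send the query to The Graph's gateway; `apiKey` comes from Subgraph Studio.
+async function queryPolymarket(apiKey: string, query: string): Promise<unknown> {
+  const url = `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${POLYMARKET_ID}`;
+  const res = await fetch(url, buildGraphQLRequest(query));
+  if (!res.ok) throw new Error(`Subgraph query failed: ${res.status}`);
+  return (await res.json()).data;
+}
+```
+
+This is the same pattern as the axios example above, just without the extra dependency.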
diff --git a/website/src/pages/pl/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/pl/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fc7e0ff52eb4..e17e594408ff 100644 --- a/website/src/pages/pl/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
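A sketch of the server-side pattern this cookbook builds toward — the env var name and Subgraph ID are assumptions for illustration. Because only server code (e.g. a Server Component) calls this helper, `process.env.GRAPH_API_KEY` never reaches the client bundle:

```typescript
// Build the gateway URL server-side; the key is interpolated here and
// never serialized into HTML or shipped to the browser.
function gatewayUrl(apiKey: string, subgraphId: string): string {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

// Called from a Server Component (an async component in the app/ directory),
// so the fetch — and the key — stay on the server.
async function fetchFromSubgraph(query: string): Promise<unknown> {
  const url = gatewayUrl(process.env.GRAPH_API_KEY ?? "", "YOUR_SUBGRAPH_ID");
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return (await res.json()).data;
}
```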
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/pl/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/pl/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..9b28440ba95a --- /dev/null +++ b/website/src/pages/pl/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## How to Get Started? + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. 
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
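Under the setup described above, the composed Subgraph's manifest would declare each of the three source Subgraphs as a `kind: subgraph` data source. The sketch below is illustrative only: the names, network, start blocks, and deployment ID placeholders are assumptions, and each `address` must be replaced with the actual Deployment ID printed when you deploy the corresponding source Subgraph.

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      address: '<block-time-deployment-id>' # Deployment ID of the first source Subgraph
      startBlock: 0
  - kind: subgraph
    name: BlockCost
    network: mainnet
    source:
      address: '<block-cost-deployment-id>' # Deployment ID of the second source Subgraph
      startBlock: 0
  - kind: subgraph
    name: BlockSize
    network: mainnet
    source:
      address: '<block-size-deployment-id>' # Deployment ID of the third source Subgraph
      startBlock: 0
```

Remember that redeploying a source Subgraph produces a new Deployment ID, so these addresses must be updated before redeploying the composed Subgraph.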
diff --git a/website/src/pages/pl/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/pl/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..c1bf9633d63f --- /dev/null +++ b/website/src/pages/pl/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. 
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## How to Get Started? + +The following guide illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/pl/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/pl/subgraphs/cookbook/subgraph-debug-forking.mdx index 6610f19da66d..91aa7484d2ec 100644 --- a/website/src/pages/pl/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## Ok, what is it? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## What?! How? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. 
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Please, show me some code! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. The usual way to attempt a fix is: 1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Wait for it to sync-up. 4. If it breaks again go back to 1, otherwise: Hooray! 
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/pl/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/pl/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/pl/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification. - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. 
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph Uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/pl/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/pl/subgraphs/cookbook/transfer-to-the-graph.mdx index 194deb018404..9a4b037cafbc 100644 --- a/website/src/pages/pl/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/pl/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Example -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
### Additional Resources -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx b/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
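Pieced together, the Example 1 setup described above takes roughly the following shape in the manifest. This is a minimal sketch only: the `Transfer` event signature and the handler name are illustrative assumptions, not values from the original page.

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer # illustrative handler name
    topic1: ['0xAddressA'] # only events where 0xAddressA is the sender
    topic2: ['0xAddressB'] # only events where 0xAddressB is the receiver
```

Because `topic1` and `topic2` are matched together, a single entry like this indexes only the A-to-B direction; tracking both directions requires listing additional addresses, as Example 2 shows.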
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
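The declaration itself lives under the relevant event handler in the manifest. As a rough sketch of the `Contract[address].function(arguments)` form covered below (the `Swap` event signature and handler name here are illustrative assumptions; the `Pool.feeGrowthGlobal0X128` call is borrowed from the example discussed later in this section):

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap # illustrative handler name
    calls:
      # Pre-declared call, executed in parallel before the handler runs:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```

When `handleSwap` executes, the result of the declared call is already available, so the handler does not block on a fresh `eth_call`.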
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... 
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..2e256ae18190 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. 
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. 
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx index d4509815a845..112f0952a1e8 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## How to Create a Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
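Filled in, a non-interactive invocation might look like the following sketch. All values are placeholders, and the flag names reflect recent Graph CLI releases — check `graph init --help` for the options supported by your installed version:

```sh
# All values below are illustrative placeholders — substitute your own.
graph init \
  --protocol ethereum \
  --network mainnet \
  --from-contract <CONTRACT_ADDRESS> \
  --contract-name <CONTRACT_NAME> \
  --index-events \
  <SUBGRAPH_SLUG> <DIRECTORY>
```

`--index-events` scaffolds an entity and a handler for every event in the contract's ABI, which is a convenient starting point you can then trim down.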
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..2eb805320753 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. 
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore in a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
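Concretely, a full-text search field is declared on the special `_Schema_` type in `schema.graphql`. The sketch below follows the documented directive shape; the entity and field names are illustrative:

```graphql
# Declares a full-text query field over the `name` and `description`
# fields of the Band entity (names here are illustrative).
type _Schema_
  @fulltext(
    name: "bandSearch"
    language: en
    algorithm: rank
    include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }] }]
  )
```

The `name` becomes a top-level query field (`bandSearch(text: "...")`), and the `rank` algorithm orders results by full-text relevance.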
## Languages supported diff --git a/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..4931e6b1fd34 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..085eaf2fb533 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.

Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.

-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. 
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.

### Defining a Call Handler

@@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han

### Mapping Function

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:

```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a

## Block Handlers

-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. 
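To make the block handler options concrete, here is a minimal manifest fragment — the handler names are illustrative, not from the example project — showing an unfiltered handler next to one gated by the `kind: call` filter described in the `dataSources.mapping.blockHandlers` entry earlier:

```yaml
blockHandlers:
  # No filter: runs once for every new block
  - handler: handleBlock
  # Call filter: runs only for blocks containing at least one call
  # to this data source's contract
  - handler: handleBlockWithCall
    filter:
      kind: call
```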
### Supported Filters

@@ -218,7 +218,7 @@ filter:

_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.

The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.

@@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ```

-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.

#### Once Filter

@@ -276,7 +276,7 @@ blockHandlers: kind: once ```

-The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. 
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
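Besides `"never"` and `"auto"`, the text below mentions retaining a specific number of blocks of history; a hypothetical manifest fragment (the block count is purely illustrative) would look like:

```yaml
indexerHints:
  # Illustrative: keep roughly the most recent 100000 blocks of entity history
  prune: 100000
```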
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/pl/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
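To give a flavor of the store assertions mentioned in the benefits above, here is a hedged sketch of a Matchstick test. It assumes the demo Subgraph's generated `Gravatar` entity and only runs inside a Matchstick project (via `graph test`), not as standalone TypeScript:

```typescript
import { assert, test, clearStore } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema' // assumes codegen has run

test('Gravatar entity is written to the store', () => {
  // Create and persist an entity, then assert on the mock store's state
  let gravatar = new Gravatar('0x1')
  gravatar.save()

  assert.fieldEquals('Gravatar', '0x1', 'id', '0x1')
  clearStore() // reset the mock store so tests stay isolated
})
```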
## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ```

-### Demo subgraph +### Demo Subgraph

You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)

### Video tutorials

-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)

## Tests structure

-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_

### describe()

@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im

There we go - we've created our first test! 👏

-Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder:

`graph test Gravity`

@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

`.test.ts` file:

@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts'

-// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils'

test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {

## Test Coverage

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.

The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. 
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/pl/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. 
The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

-## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy

-A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria:

- The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days

-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.

-Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected by this policy has an option to bring the version in question back.

-## Checking subgraph health +## Checking Subgraph health

-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.

-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. 
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:

```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ```

-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
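The status fields discussed here (`chainHeadBlock`, `latestBlock`, `synced`, `health`) lend themselves to a simple automated check. The TypeScript sketch below assumes a response shape based on the field names in this section; it is not a client for the actual endpoint, whose full schema is linked above:

```typescript
// Sketch: interpret an indexing-status response like the one described above.
// The nesting of `chains` and the string-encoded block numbers are assumptions.
interface Block {
  number: string
}
interface ChainStatus {
  chainHeadBlock: Block
  latestBlock: Block
}
interface IndexingStatus {
  synced: boolean
  health: string
  chains: ChainStatus[]
}

// Compare latestBlock with chainHeadBlock to see how far indexing is behind.
function blocksBehind(status: IndexingStatus): number {
  const chain = status.chains[0]
  return Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number)
}

const example: IndexingStatus = {
  synced: true,
  health: 'healthy',
  chains: [{ chainHeadBlock: { number: '21000000' }, latestBlock: { number: '20999900' } }],
}

console.log(blocksBehind(example)) // prints 100
```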
diff --git a/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx index d2023c7b4a09..c21ff6dc2358 100644 --- a/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/pl/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init <SUBGRAPH_SLUG> ``` -You can find the `<SUBGRAPH_SLUG>` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
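As a concrete sketch of the initialization step above — `myaccount/my-subgraph` is an invented slug, so substitute the real value from your own Subgraph details page:

```shell
# "myaccount/my-subgraph" is a made-up slug for illustration; copy the real
# value from your Subgraph details page in Subgraph Studio.
SUBGRAPH_SLUG="myaccount/my-subgraph"

# Uncomment to run; the CLI prompts for the contract address, network, and ABI,
# then scaffolds a project folder:
# graph init "$SUBGRAPH_SLUG"

# The scaffold contains the core Subgraph files (the mapping filename can vary
# by CLI version):
for f in subgraph.yaml schema.graphql src/mapping.ts; do
  echo "$f"
done
```

From there you edit the schema and mappings in the generated folder before deploying.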
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
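The authentication and deployment steps above, combined into one sketch. Both values below are placeholders (yours appear on the Subgraph details page), and depending on your `graph-cli` version the auth command may instead be `graph auth --studio <key>`:

```shell
# Placeholder credentials for illustration only — do not commit a real deploy key.
DEPLOY_KEY="0123456789abcdef0123456789abcdef"
SUBGRAPH_SLUG="myaccount/my-subgraph"

# Uncomment to run against Subgraph Studio; `graph deploy` then prompts for a
# version label such as v0.0.1:
# graph auth "$DEPLOY_KEY"
# graph deploy "$SUBGRAPH_SLUG"

# Remember: this only pushes to Studio for testing — publishing onchain is a
# separate step.
echo "deploy target: $SUBGRAPH_SLUG"
```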
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/pl/subgraphs/developing/developer-faq.mdx b/website/src/pages/pl/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/pl/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/pl/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? 
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under which it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: <number>) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/pl/subgraphs/developing/introduction.mdx b/website/src/pages/pl/subgraphs/developing/introduction.mdx index 509b25654e82..92b39857a7f1 100644 --- a/website/src/pages/pl/subgraphs/developing/introduction.mdx +++ b/website/src/pages/pl/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
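Querying an existing Subgraph is a plain HTTP POST of a JSON body. A minimal sketch — the endpoint URL and the `tokens` entity are hypothetical (substitute your Subgraph's real query URL and schema), while `_meta` is a field Graph Node serves alongside your own entities and reports the latest block the Subgraph has processed:

```shell
# Placeholder endpoint — replace with your Subgraph's query URL from Subgraph Studio.
SUBGRAPH_URL="https://api.studio.thegraph.com/query/00000/my-subgraph/v0.0.1"

# `tokens` is a hypothetical entity; `_meta` is useful for checking sync progress.
QUERY='{ tokens(first: 5) { id } _meta { block { number } } }'

# Compose the JSON request body.
BODY=$(printf '{"query":"%s"}' "$QUERY")
echo "$BODY"

# Uncomment to send it against a live endpoint:
# curl -s -X POST -H 'Content-Type: application/json' -d "$BODY" "$SUBGRAPH_URL"
```

The same POST shape works from any HTTP client, which is why Subgraphs can be consumed directly from a dapp frontend.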
diff --git a/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/pl/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/pl/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-address ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/pl/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
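The three CLI steps above as a single annotated sketch. The publishing commands themselves are commented out because they need a real Subgraph project and a wallet to sign with:

```shell
# Per the docs, publishing from the CLI requires graph-cli >= 0.73.0.
# graph codegen    # generate AssemblyScript types from schema.graphql and the ABIs
# graph build      # compile the mappings to WASM in the build/ directory
# graph publish    # opens a window to connect a wallet, add metadata, and publish

# Running `graph codegen && graph build` on its own first is worthwhile: it
# surfaces schema and mapping errors before any wallet interaction.
echo "codegen -> build -> publish"
```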
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/pl/subgraphs/developing/subgraphs.mdx b/website/src/pages/pl/subgraphs/developing/subgraphs.mdx index b81dc8a2d83e..e55dffd8111f 100644 --- a/website/src/pages/pl/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/pl/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafy ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
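Since any indexed Subgraph can be queried over GraphQL, a quick sketch may help make this concrete. The gateway endpoint placeholders and the `tokens` entity below are illustrative assumptions, not taken from a real Subgraph:

```typescript
// Hypothetical example: the endpoint placeholders and the `tokens` entity
// are illustrative, not from a real Subgraph.
const SUBGRAPH_ENDPOINT =
  "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

// A GraphQL document asking for the first five `tokens` entities.
const tokensQuery = `{
  tokens(first: 5) {
    id
    owner
  }
}`;

// Subgraph endpoints accept GraphQL over HTTP POST, with a JSON body
// of the shape { query, variables }.
function buildRequestBody(
  query: string,
  variables: Record<string, unknown> = {}
): string {
  return JSON.stringify({ query, variables });
}

// This body would then be sent via fetch(SUBGRAPH_ENDPOINT, { method: "POST", ... }).
console.log(buildRequestBody(tokensQuery));
```

Assuming such an entity exists in the Subgraph's schema, the response is JSON with the matching entities under `data.tokens`.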
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/pl/subgraphs/explorer.mdx b/website/src/pages/pl/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/pl/subgraphs/explorer.mdx +++ b/website/src/pages/pl/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. Curators

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
  - The bonding curve incentivizes Curators to curate the highest quality data sources.

In the Curator table listed below you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics:

@@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th

### Curating Tab

-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.

Within this tab, you’ll find an overview of:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

![Explorer Image 14](/img/Curation-Stats.png)
diff --git a/website/src/pages/pl/subgraphs/guides/arweave.mdx b/website/src/pages/pl/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..e59abffa383f
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach out to us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages.
For more information, you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not yet indexing the stored files.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3.
AssemblyScript Mappings - `mapping.ts` + +This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. + +During Subgraph development there are two key commands: + +``` +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet + +Arweave data sources support two types of handlers: + +- `blockHandlers` - Run on every new Arweave block. 
No source.owner is required. +- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` + +> The source.owner can be the owner's address, or their Public Key. +> +> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +> +> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. + +## Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +## AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). + +```tsx +class Block { + timestamp: u64 + lastRetarget: u64 + height: u64 + indepHash: Bytes + nonce: Bytes + previousBlock: Bytes + diff: Bytes + hash: Bytes + txRoot: Bytes + txs: Bytes[] + walletList: Bytes + rewardAddr: Bytes + tags: Tag[] + rewardPool: Bytes + weaveSize: Bytes + blockSize: Bytes + cumulativeDiff: Bytes + hashListMerkle: Bytes + poa: ProofOfAccess +} + +class Transaction { + format: u32 + id: Bytes + lastTx: Bytes + owner: Bytes + tags: Tag[] + target: Bytes + quantity: Bytes + data: Bytes + dataSize: Bytes + dataRoot: Bytes + signature: Bytes + reward: Bytes +} +``` + +Block handlers receive a `Block`, while transactions receive a `Transaction`. + +Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. 
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings). + +## Deploying an Arweave Subgraph in Subgraph Studio + +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. + +```bash +graph deploy --access-token +``` + +## Querying an Arweave Subgraph + +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can I index the stored files on Arweave? + +Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). + +### Can I identify Bundlr bundles in my Subgraph? + +This is not currently supported. + +### How can I filter transactions to a specific account? + +The source.owner can be the user's public key or account address. + +### What is the current encryption format? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write (the URL-safe variant simply omits padding)
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..ab5076c5ebf4
--- /dev/null
+++ 
b/website/src/pages/pl/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have been added, run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2.
Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI prints results to the terminal and, when detailed contract data is successfully retrieved, saves them into a structured directory:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+    ├── contract/ # Folder for individual contract files
+    ├── abi.json # Contract ABI
+    └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/pl/subgraphs/guides/enums.mdx b/website/src/pages/pl/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..9f55ae07c54b
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a data type that allows you to define a set of specific, allowed values.
+
+### Example of Enums in Your Schema
+
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+
+You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
+
+Here's what an enum definition might look like in your schema, based on the example above:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.
+
+To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).
+
+## Benefits of Using Enums
+
+- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
+- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
+- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.
+
+### Without Enums
+
+If you choose to define the type as a string instead of using an Enum, your code might look like this:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+  owner: Bytes! # Owner of the token
+  tokenStatus: String! # String field to track token status
+  timestamp: BigInt!
+}
+```
+
+In this schema, `TokenStatus` is a simple string with no specific, allowed values.
+
+#### Why is this a problem?
+
+- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned.
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
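As a plain-TypeScript sketch (illustration only, not generated Subgraph code, and `describeSale` is a hypothetical helper), this is the kind of guarantee an enum gives you compared to a free-form string:

```typescript
// String-valued enum mirroring the Marketplace enum in the schema.
enum Marketplace {
  OpenSeaV1 = "OpenSeaV1",
  OpenSeaV2 = "OpenSeaV2",
  SeaPort = "SeaPort",
  LooksRare = "LooksRare",
}

// Accepts only Marketplace values, so a typo like "LookRare"
// is a compile-time error instead of bad data in the store.
function describeSale(marketplace: Marketplace): string {
  return `CryptoCoven NFT traded on ${marketplace}`;
}

console.log(describeSale(Marketplace.SeaPort)); // "CryptoCoven NFT traded on SeaPort"
```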
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+  }
+  // ... and other marketplaces
+  return 'Unknown' // Fallback so every code path returns a value
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/pl/subgraphs/guides/grafting.mdx b/website/src/pages/pl/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..d9abe0e70d2a
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- The `Lock` data source provides the ABI and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+  - grafting # feature name
+graft:
+  base: Qm... # Subgraph ID of base Subgraph
+  block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2.
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/pl/subgraphs/guides/near.mdx b/website/src/pages/pl/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..e78a69eb7fa2
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+ +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## References + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/pl/subgraphs/guides/polymarket.mdx b/website/src/pages/pl/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/pl/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. + +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. 
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example sends a query to the same endpoint and logs the returned data.
+
+### Sample Code from node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/pl/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/pl/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..e17e594408ff
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial-of-service attacks.
+- The Graph Network gateways have denial-of-service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
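As a minimal sketch of this principle (the guard helper below is our own illustration, not part of Next.js; only the `API_KEY` variable name matches the setup that follows), a server-only accessor can fail fast if it is ever evaluated in a browser bundle:

```javascript
// Illustration only: read the key on the server and fail fast if this code
// ever runs in a browser, where the global `window` object is defined.
function getServerApiKey() {
  if (typeof window !== 'undefined') {
    throw new Error('API key must only be read in server-side code')
  }
  const key = process.env.API_KEY
  if (!key) {
    throw new Error('API_KEY is not set')
  }
  return key
}

module.exports = { getServerApiKey }
```

Server components can call such a helper freely; importing it from client-side code would throw at runtime instead of silently shipping the key to the browser.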
+ +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. + +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+ ) +} +``` + +### Step 4: Use the Server Component + +1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. +2. Render the component: + +```javascript +import ServerComponent from './components/ServerComponent' + +export default function Home() { + return ( +
+    <div>
+      <ServerComponent />
+    </div>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..fb8427b04be9
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the release notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: all Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only immutable entities can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot use aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e., you can’t use normal event, call, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Getting Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
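Conceptually, what the composed Subgraph produces is the kind of keyed merge sketched below. This is plain JavaScript for illustration only — the record shapes (`number`, `timestamp`, `cost`, `size`) are assumptions mirroring the three source Subgraphs above, not The Graph's actual composition API:

```javascript
// Illustration: merge per-block records from three assumed source datasets
// into one unified block-stats record, keyed by block number.
function combineBlockStats(times, costs, sizes) {
  const stats = new Map()
  for (const { number, timestamp } of times) {
    stats.set(number, { number, timestamp, cost: null, size: null })
  }
  for (const { number, cost } of costs) {
    if (stats.has(number)) stats.get(number).cost = cost
  }
  for (const { number, size } of sizes) {
    if (stats.has(number)) stats.get(number).size = size
  }
  return [...stats.values()]
}

module.exports = { combineBlockStats }
```

In the real feature, Graph Node performs this join for you: the composed Subgraph's handlers are triggered by the source Subgraphs' entities, so each handler only fills in its slice of the unified record.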
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/pl/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/pl/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..91aa7484d2ec
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debugging process, but there is one step that horribly slows down the process: _3.
Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/pl/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/pl/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..a08e2a7ad8c9 --- /dev/null +++ b/website/src/pages/pl/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Safe Subgraph Code Generator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Why integrate with Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..9a4b037cafbc
--- /dev/null
+++ b/website/src/pages/pl/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash <your-subgraph-ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/{your-own-api-key}/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/pl/subgraphs/querying/best-practices.mdx b/website/src/pages/pl/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/pl/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/pl/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`

Example of inefficient querying:
diff --git a/website/src/pages/pl/subgraphs/querying/from-an-application.mdx b/website/src/pages/pl/subgraphs/querying/from-an-application.mdx
index 56be718d0fb8..48f4b6561ac7 100644
--- a/website/src/pages/pl/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/pl/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Querying from an Application
+sidebarTitle: Querying from an App
---

Learn how to query The Graph from your application.

@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d

### Subgraph Studio Endpoint

-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:

```
https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///

### The Graph Network Endpoint

-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:

```
https://gateway.thegraph.com/api//subgraphs/id/
```

-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
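This endpoint is plain HTTP, so any language can query it before you commit to a client library. A minimal sketch in Python (standard library only; the gateway URL, API key, and `tokens` entity below are placeholders, not values from this page):

```python
import json
from urllib import request

# Placeholders: substitute your real API key and Subgraph ID.
ENDPOINT = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

def build_query_request(endpoint, query, variables=None):
    """Wrap a GraphQL query in the JSON envelope a GraphQL endpoint expects."""
    payload = {"query": query, "variables": variables or {}}
    return request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request(ENDPOINT, "{ tokens(first: 5) { id } }")
# To actually send it: response = json.load(request.urlopen(req))
```

The same JSON envelope (`{"query": ..., "variables": ...}`) is what every GraphQL client library ultimately sends; the libraries add caching, typing, and pagination on top.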
## Using Popular GraphQL Clients

@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/

The Graph provides its own GraphQL client, `graph-client`, which supports unique features such as:

-- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result

@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq

### Fetch Data with Graph Client

-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:

#### Step 1

@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on

### Fetch Data with Apollo Client

-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:

#### Step 1

@@ -257,7 +258,7 @@ client

### Fetch data with URQL

-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:

#### Step 1
diff --git a/website/src/pages/pl/subgraphs/querying/graph-client/README.md b/website/src/pages/pl/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..d4850e723c6e 100644
--- a/website/src/pages/pl/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/pl/subgraphs/querying/graph-client/README.md
@@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for

| Status | Feature | Notes |
| :----: | ---------------------------------------------------------------- |
-------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar Subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue`

- - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.

This is useful if you want to choose the most synced data for the same Subgraph over different indexers/sources.
diff --git a/website/src/pages/pl/subgraphs/querying/graphql-api.mdx b/website/src/pages/pl/subgraphs/querying/graphql-api.mdx
index b3003ece651a..b82afcfa252c 100644
--- a/website/src/pages/pl/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/pl/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.

## What is GraphQL?

-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.

-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).

## Queries with GraphQL

-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema, you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.

> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
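As a concrete illustration, a hypothetical `Token` entity yields a singular `token` field (one record by `id`) and a plural `tokens` field (a filterable list). A small sketch of the two generated query shapes (the entity and field names are illustrative, not from any specific schema):

```python
def singular_query(entity, entity_id, fields):
    """Fetch one record via the generated singular field, e.g. token(id: ...)."""
    return '{ %s(id: "%s") { %s } }' % (entity, entity_id, " ".join(fields))

def plural_query(entity_plural, first, fields):
    """Fetch many records via the generated plural field, e.g. tokens(first: ...)."""
    return "{ %s(first: %d) { %s } }" % (entity_plural, first, " ".join(fields))

print(singular_query("token", "0xabc", ["id", "owner"]))
# { token(id: "0xabc") { id owner } }
print(plural_query("tokens", 10, ["id", "owner"]))
# { tokens(first: 10) { id owner } }
```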
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:

You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.

-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Alternatively, it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).

```graphql
{
@@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application`

### Fulltext Search Queries

-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.

Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
Fulltext search operators:

-| Symbol | Operator | Description |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol | Operator    | Description                                                                                                                          |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&`    | `And`       | For combining multiple search terms into a filter for entities that include all of the provided terms                                |
+| &#124; | `Or`        | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->`  | `Follow by` | Specify the distance between two words.                                                                                              |
+| `:*`   | `Prefix`    | Use the prefix search term to find words whose prefix match (2 characters required).                                                 |

#### Examples

@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`.
The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en

### Subgraph Metadata

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:

```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```

-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+If a block is provided, the metadata is as of that block; otherwise, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.

`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
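A sketch of how an application might act on this metadata, for example refusing data that is stale or tainted by indexing errors (the dict shape mirrors the `_meta` object in this section; the values are invented):

```python
def data_is_usable(meta, min_block):
    """Accept indexed data only if the Subgraph reports no indexing errors
    and has processed at least `min_block`."""
    return (not meta["hasIndexingErrors"]) and meta["block"]["number"] >= min_block

# Shape mirrors a `_meta` response; values are made up for illustration.
meta = {
    "deployment": "QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED",
    "hasIndexingErrors": False,
    "block": {"number": 19000000, "hash": "0xabc", "timestamp": 1700000000},
}

assert data_is_usable(meta, min_block=18999999)
assert not data_is_usable(meta, min_block=19000001)
```

Polling `_meta` alongside your real queries is a cheap way to detect reorg-related lag or a broken deployment before showing users stale data.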
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that inde

- hash: the hash of the block
- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)

-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block.
diff --git a/website/src/pages/pl/subgraphs/querying/introduction.mdx b/website/src/pages/pl/subgraphs/querying/introduction.mdx
index e66fe896db2d..fc96956cda46 100644
--- a/website/src/pages/pl/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/pl/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex

## Overview

-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.

## Specifics

-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/pl/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
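Once created, a key is simply interpolated into the gateway query URL. A minimal sketch (placeholder values; keep real keys out of client-side code and use the security settings described in this section to restrict them):

```python
def query_url(api_key, subgraph_id):
    """Build the per-key gateway query URL for a published Subgraph."""
    return f"https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

# Placeholder values, not a real key or Subgraph ID.
url = query_url("my-api-key", "my-subgraph-id")
```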
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/pl/subgraphs/querying/python.mdx b/website/src/pages/pl/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/pl/subgraphs/querying/python.mdx +++ b/website/src/pages/pl/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out Subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).

```python
from subgrounds import Subgrounds

sg = Subgrounds()

-# Load the subgraph
+# Load the Subgraph
aave_v2 = sg.load_subgraph(
"https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")

diff --git a/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..17258dd13ea1 100644
--- a/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/pl/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID
---

-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.

-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
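The two IDs therefore produce two different endpoint forms. A sketch of both (the `/subgraphs/id/` form matches the example endpoints on this page; the `/deployments/id/` path for Deployment-ID queries is an assumption here, so verify it against the current gateway documentation):

```python
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def endpoint_for_deployment(api_key, deployment_id):
    """Pin queries to one immutable version; the Deployment ID is an IPFS
    hash ('Qm...'). Path is assumed, not confirmed by this page."""
    return f"{GATEWAY}/{api_key}/deployments/id/{deployment_id}"

def endpoint_for_subgraph(api_key, subgraph_id):
    """Track whatever version of the Subgraph is currently published."""
    return f"{GATEWAY}/{api_key}/subgraphs/id/{subgraph_id}"
```

Pinning to a Deployment ID trades convenience for control: queries never change behavior underneath you, but every new version requires a deliberate endpoint update.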
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this means the query code must be updated manually every time a new version of the Subgraph is published.

Example endpoint that uses Deployment ID:

@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:

## Subgraph ID

-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.

-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.

Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/pl/subgraphs/quick-start.mdx b/website/src/pages/pl/subgraphs/quick-start.mdx
index 6db0b1437e5e..412b09bade3a 100644
--- a/website/src/pages/pl/subgraphs/quick-start.mdx
+++ b/website/src/pages/pl/subgraphs/quick-start.mdx
@@ -1,8 +1,8 @@
---
-title: ' Na start'
+title: " Na start"
---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-See the following screenshot for an example for what to expect when initializing your subgraph:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

![Subgraph command](/img/CLI-Example.png)

-### 4. Edit your subgraph
+### 4. Edit your Subgraph

-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.

-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:

-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.

-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).

-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph

> Remember, deploying is not the same as publishing.

-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/pl/substreams/developing/dev-container.mdx b/website/src/pages/pl/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/pl/substreams/developing/dev-container.mdx +++ b/website/src/pages/pl/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/pl/substreams/developing/sinks.mdx b/website/src/pages/pl/substreams/developing/sinks.mdx index 5f6f9de21326..45e5471f0d09 100644 --- a/website/src/pages/pl/substreams/developing/sinks.mdx +++ b/website/src/pages/pl/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/pl/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/pl/substreams/developing/solana/account-changes.mdx index b31eafd2b064..20a3fe7373e5 100644 --- a/website/src/pages/pl/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/pl/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/pl/substreams/developing/solana/transactions.mdx b/website/src/pages/pl/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/pl/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/pl/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/pl/substreams/introduction.mdx b/website/src/pages/pl/substreams/introduction.mdx index 84fe81909fc8..3f22bea5db7a 100644 --- a/website/src/pages/pl/substreams/introduction.mdx +++ b/website/src/pages/pl/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/pl/substreams/publishing.mdx b/website/src/pages/pl/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/pl/substreams/publishing.mdx +++ b/website/src/pages/pl/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/pl/substreams/quick-start.mdx b/website/src/pages/pl/substreams/quick-start.mdx index 96846fccaf05..93c2bd0afaad 100644 --- a/website/src/pages/pl/substreams/quick-start.mdx +++ b/website/src/pages/pl/substreams/quick-start.mdx @@ -1,6 +1,6 @@ --- title: Substreams Quick Start -sidebarTitle: ' Na start' +sidebarTitle: " Na start" --- Discover how to utilize ready-to-use substream packages or develop your own. diff --git a/website/src/pages/pl/supported-networks.mdx b/website/src/pages/pl/supported-networks.mdx index e00bc0f33477..c49e9c3853b2 100644 --- a/website/src/pages/pl/supported-networks.mdx +++ b/website/src/pages/pl/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Wspierane sieci hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. 
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/pl/token-api/_meta-titles.json b/website/src/pages/pl/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/pl/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/pl/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/pl/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/pl/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. 
The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. diff --git a/website/src/pages/pl/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/pl/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/pl/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. diff --git a/website/src/pages/pl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/pl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/pl/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV Prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/pl/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/pl/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/pl/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain.
Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/pl/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/pl/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/pl/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/pl/token-api/faq.mdx b/website/src/pages/pl/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/pl/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer <token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
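As a quick sketch of the advice above (the token value and wallet address below are placeholders, not real credentials), the header can be assembled and checked like this:

```shell
# Hypothetical Access Token for illustration only — generate your own on The Graph Market.
ACCESS_TOKEN="eyJhbGciOiJFUzI1NiJ9.example.payload"

# The literal word "Bearer" plus a space must precede the token;
# omitting it, or sending the API key instead of the JWT, yields 401/403.
AUTH_HEADER="Authorization: Bearer ${ACCESS_TOKEN}"
echo "$AUTH_HEADER"

# A request would then take this shape (not executed here):
# curl -s "https://token-api.thegraph.com/balances/evm/<wallet-address>" \
#   -H "$AUTH_HEADER" -H "Accept: application/json"
```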
+ +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service?
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/pl/token-api/mcp/claude.mdx b/website/src/pages/pl/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..12a036b6fc24 --- /dev/null +++ b/website/src/pages/pl/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Configuration + +Create or edit your `claude_desktop_config.json` file. 
+ +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/pl/token-api/mcp/cline.mdx b/website/src/pages/pl/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..ef98e45939fe --- /dev/null +++ b/website/src/pages/pl/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Prerequisites + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. 
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Configuration + +Create or edit your `cline_mcp_settings.json` file. + +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/pl/token-api/mcp/cursor.mdx b/website/src/pages/pl/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..658108d1337b --- /dev/null +++ b/website/src/pages/pl/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Prerequisites + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). 
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Configuration + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable. 
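+All three editor integrations above read the same JSON shape, so a quick offline sanity check of the config file can save a restart cycle. The sketch below is illustrative only: `validateMcpConfig` is a hypothetical helper written for this guide, not part of `@pinax/mcp`.

```js
// Illustrative sanity check for an MCP server config file. The same JSON
// shape is used by claude_desktop_config.json, cline_mcp_settings.json,
// and ~/.cursor/mcp.json. validateMcpConfig is a hypothetical helper,
// not part of @pinax/mcp.
function validateMcpConfig(config) {
  const problems = []
  const servers = config.mcpServers || {}
  if (Object.keys(servers).length === 0) problems.push('no mcpServers defined')
  for (const [name, server] of Object.entries(servers)) {
    if (!server.command) problems.push(`${name}: missing "command" (npx/bunx, ideally a full path)`)
    if (!(server.args || []).includes('--sse-url')) problems.push(`${name}: missing "--sse-url" argument`)
    if (!server.env || !server.env.ACCESS_TOKEN) problems.push(`${name}: ACCESS_TOKEN is empty or missing`)
  }
  return problems
}

// In practice you would read the file from disk, e.g.:
// const config = JSON.parse(require('fs').readFileSync(configPath, 'utf8'))
const problems = validateMcpConfig({
  mcpServers: {
    'token-api': {
      command: 'npx',
      args: ['@pinax/mcp', '--sse-url', 'https://token-api.thegraph.com/sse'],
      env: { ACCESS_TOKEN: '' },
    },
  },
})
console.log(problems) // flags the empty ACCESS_TOKEN
```

+If the list comes back empty, the remaining failure modes are usually the ones covered in the troubleshooting sections above: a stale Node version, an unreachable SSE endpoint, or an invalid token.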
diff --git a/website/src/pages/pl/token-api/monitoring/get-health.mdx b/website/src/pages/pl/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/pl/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/pl/token-api/monitoring/get-networks.mdx b/website/src/pages/pl/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/pl/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/pl/token-api/monitoring/get-version.mdx b/website/src/pages/pl/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/pl/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/pl/token-api/quick-start.mdx b/website/src/pages/pl/token-api/quick-start.mdx new file mode 100644 index 000000000000..71b7ea7548cb --- /dev/null +++ b/website/src/pages/pl/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: " Na start" +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/pt/about.mdx b/website/src/pages/pt/about.mdx index 6603713efd91..22d7582d014d 100644 --- a/website/src/pages/pt/about.mdx +++ b/website/src/pages/pt/about.mdx @@ -30,25 +30,25 @@ Propriedades de blockchain, como finalidade, reorganizações de chain, ou bloco ## The Graph Providencia uma Solução -O The Graph resolve este desafio com um protocolo descentralizado que indexa e permite queries eficientes e de alto desempenho de dados de blockchain. Estas APIs ("subgraphs" indexados) podem então ser consultados num query com uma API GraphQL padrão. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Hoje, há um protocolo descentralizado apoiado pela implementação de código aberto do [Graph Node](https://github.com/graphprotocol/graph-node) que facilita este processo. ### Como o The Graph Funciona -Indexar dados em blockchain é um processo difícil, mas facilitado pelo The Graph. O The Graph aprende como indexar dados no Ethereum com o uso de subgraphs. 
Subgraphs são APIs personalizadas construídas com dados de blockchain, que extraem, processam e armazenam dados de uma blockchain para poderem ser consultadas suavemente via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Especificações -- O The Graph usa descrições de subgraph, conhecidas como "manifests de subgraph" dentro do subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- A descrição do subgraph contorna os contratos inteligentes de interesse para o mesmo, os eventos dentro destes contratos para focar, e como mapear dados de evento para dados que o The Graph armazenará no seu banco de dados. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- Ao criar um subgraph, primeiro é necessário escrever um manifest de subgraph. +- When creating a Subgraph, you need to write a Subgraph manifest. -- Após escrever o `subgraph manifest`, é possível usar o Graph CLI para armazenar a definição no IPFS e instruir o Indexador para começar a indexar dados para o subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -O diagrama abaixo dá informações mais detalhadas sobre o fluxo de dados quando um manifest de subgraph for lançado com transações no Ethereum. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. 
![Um gráfico que explica como o The Graph utiliza Graph Nodes para servir queries para consumidores de dados](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ O fluxo segue estes passos: 1. Um dApp adiciona dados à Ethereum através de uma transação em contrato inteligente. 2. O contrato inteligente emite um ou mais eventos enquanto processa a transação. -3. O Graph Node escaneia continuamente a Ethereum por novos blocos e os dados que podem conter para o seu subgraph. -4. O Graph Node encontra eventos na Ethereum para o seu subgraph nestes blocos e executa os handlers de mapeamento que forneceu. O mapeamento é um módulo WASM que cria ou atualiza as entidades de dados que o Graph Node armazena em resposta a eventos na Ethereum. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. O dApp consulta o Graph Node para dados indexados da blockchain, através do [endpoint GraphQL](https://graphql.org/learn/) do node. O Graph Node, por sua vez, traduz os queries GraphQL em queries para o seu armazenamento subjacente de dados para poder retirar estes dados, com o uso das capacidades de indexação do armazenamento. O dApp exibe estes dados em uma interface rica para utilizadores finais, que eles usam para emitir novas transações na Ethereum. E o ciclo se repete. ## Próximos Passos -As seguintes secções providenciam um olhar mais íntimo nos subgraphs, na sua publicação e no query de dados. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Antes de escrever o seu próprio subgraph, é recomendado explorar o [Graph Explorer](https://thegraph.com/explorer) e revir alguns dos subgraphs já publicados. 
A página de todo subgraph inclui um ambiente de teste em GraphQL que lhe permite consultar os dados dele. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx index 0c1ba5b192ef..7932ad2508bd 100644 --- a/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/pt/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Herdar segurança do Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. A comunidade do The Graph prosseguiu com o Arbitrum no ano passado, após o resultado da discussão [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). 
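+Step 5 of the about.mdx data flow above, where a dapp queries Graph Node's GraphQL endpoint, can be sketched concretely. This is a minimal illustration: the endpoint URL and the `tokens` entity are hypothetical placeholders, not taken from any specific Subgraph.

```js
// Sketch of a dapp querying a Graph Node GraphQL endpoint (step 5 above).
// The endpoint URL and the `tokens` entity are hypothetical placeholders;
// substitute your own Subgraph's endpoint and schema.
const endpoint = 'https://api.thegraph.com/subgraphs/name/<owner>/<subgraph>' // hypothetical

const query = `{
  tokens(first: 5) {
    id
    symbol
  }
}`

// GraphQL over HTTP sends the query document as a JSON body under the "query" key.
const body = JSON.stringify({ query })

// A dapp would POST it like this (commented out, since it needs network access):
// fetch(endpoint, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body,
// })
//   .then((res) => res.json())
//   .then((res) => console.log(res.data.tokens))

console.log(JSON.parse(body).query === query) // true
```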
@@ -39,7 +39,7 @@ Para aproveitar o The Graph na L2, use este switcher de dropdown para alternar e ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Como um programador, consumidor de dados, Indexador, Curador ou Delegante, o que devo fazer agora? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Todos os contratos inteligentes já foram devidamente [auditados](https://github Tudo foi testado exaustivamente, e já está pronto um plano de contingência para garantir uma transição segura e suave. Mais detalhes [aqui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx index d542d643adc4..a821b0e0b588 100644 --- a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ A exceção é com carteiras de contrato inteligente como multisigs: estas são As Ferramentas de Transferência para L2 usam o mecanismo nativo do Arbitrum para enviar mensagens da L1 à L2. 
Este mecanismo é chamado de "retryable ticket" (bilhete retentável) e é usado por todos os bridges de tokens nativos, incluindo o bridge de GRT do Arbitrum. Leia mais na [documentação do Arbitrum](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Ao transferir os seus ativos (subgraph, stake, delegação ou curadoria) à L2, é enviada uma mensagem através do bridge de GRT do Arbitrum, que cria um retryable ticket na L2. A ferramenta de transferência inclui um valor de ETH na transação, que é usado para pagar 1) pela criação do ticket e 2) pelo gas da execução do ticket na L2. Porém, devido à possível variação dos preços de gas no tempo até a execução do ticket na L2, esta tentativa de execução automática pode falhar. Se isto acontecer, o bridge do Arbitrum tentará manter o retryable ticket ativo por até 7 dias; assim, qualquer pessoa pode tentar novamente o "resgate" do ticket (que requer uma carteira com algum ETH em bridge ao Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Este é o passo de "Confirmação" em todas as ferramentas de transferência. Ele será executado automaticamente e com êxito na maioria dos casos, mas é importante verificar que ele foi executado. 
Se não tiver êxito na primeira execução e nem em quaisquer das novas tentativas dentro de 7 dias, o bridge do Arbitrum descartará o ticket, e os seus ativos (subgraph, stake, delegação ou curadoria) serão perdidos sem volta. Os programadores-núcleo do The Graph têm um sistema de monitoria para detectar estas situações e tentar resgatar os tickets antes que seja tarde, mas no final, a responsabilidade é sua de que a sua transferência complete a tempo. Caso haja problemas ao confirmar a sua transação, contacte-nos com [este formulário](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) e o núcleo tentará lhe ajudar. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Eu comecei a transferir a minha delegação/meu stake/minha curadoria e não tenho certeza se ela chegou à L2, como posso ter certeza de que a mesma foi transferida corretamente? @@ -36,43 +36,43 @@ Se tiver o hash de transação da L1 (confira as transações recentes na sua ca ## Transferência de Subgraph -### Como transfiro o meu subgraph? +### How do I transfer my Subgraph? 
-Para transferir o seu subgraph, complete os seguintes passos: +To transfer your Subgraph, you will need to complete the following steps: 1. Inicie a transferência na mainnet Ethereum 2. Espere 20 minutos pela confirmação -3. Confirme a transferência do subgraph no Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Termine de editar o subgraph no Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Atualize o URL de Query (recomendado) -\*Você deve confirmar a transferência dentro de 7 dias, ou o seu subgraph poderá ser perdido. Na maioria dos casos, este passo será executado automaticamente, mas pode ser necessário confirmar manualmente caso haja um surto no preço de gas no Arbitrum. Caso haja quaisquer dificuldades neste processo, contacte o suporte em support@thegraph.com ou no [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### De onde devo iniciar a minha transferência? -Do [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) ou de qualquer página de detalhes de subgraph. Clique no botão "Transfer Subgraph" (Transferir Subgraph) na página de detalhes de subgraph para começar a transferência. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Quanto tempo devo esperar até que o meu subgraph seja transferido +### How long do I need to wait until my Subgraph is transferred A transferência leva cerca de 20 minutos. 
O bridge do Arbitrum trabalha em segundo plano para completar a transferência automaticamente. Às vezes, os custos de gas podem subir demais e a transação deverá ser confirmada novamente. -### O meu subgraph ainda poderá ser descoberto após ser transferido para a L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -O seu subgraph só será descobrível na rede em qual foi editado. Por exemplo, se o seu subgraph estiver no Arbitrum One, então só poderá encontrá-lo no Explorer do Arbitrum One e não no Ethereum. Garanta que o Arbitrum One está selecionado no seletor de rede no topo da página para garantir que está na rede correta.  Após a transferência, o subgraph na L1 aparecerá como depreciado. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### O meu subgraph precisa ser editado para poder ser transferido? +### Does my Subgraph need to be published to transfer it? -Para aproveitar a ferramenta de transferência de subgraph, o seu subgraph já deve estar editado na mainnet Ethereum e deve ter algum sinal de curadoria em posse da carteira titular do subgraph. Se o seu subgraph não estiver editado, edite-o diretamente no Arbitrum One - as taxas de gas associadas serão bem menores. Se quiser transferir um subgraph editado, mas a conta titular não curou qualquer sinal nele, você pode sinalizar uma quantidade pequena (por ex. 1 GRT) daquela conta; escolha o sinal "migração automática". +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. 
If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### O que acontece com a versão da mainnet Ethereum do meu subgraph após eu transferi-lo ao Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Após transferir o seu subgraph ao Arbitrum, a versão na mainnet Ethereum será depreciada. Recomendamos que atualize o seu URL de query em dentro de 28 horas. Porém, há um período que mantém o seu URL na mainnet em funcionamento, para que qualquer apoio de dapp de terceiros seja atualizado. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Após a transferência, preciso reeditar no Arbitrum? @@ -80,21 +80,21 @@ Após a janela de transferência de 20 minutos, confirme a transferência com um ### O meu endpoint estará fora do ar durante a reedição? -É improvável, mas é possível passar por um breve desligamento a depender de quais Indexadores apoiam o subgraph na L1, e de se eles continuarão a indexá-lo até o subgraph ter apoio total na L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Editar e versionar na L2 funcionam da mesma forma que na mainnet Ethereum? -Sim. Selcione o Arbitrum One como a sua rede editada ao editar no Subgraph Studio. 
No Studio, o último endpoint disponível apontará à versão atualizada mais recente do subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### A curadoria do meu subgraph se mudará com o meu subgraph? +### Will my Subgraph's curation move with my Subgraph? -Caso tenha escolhido o sinal automigratório, 100% da sua própria curadoria se mudará ao Arbitrum One junto com o seu subgraph. Todo o sinal de curadoria do subgraph será convertido em GRT na hora da transferência, e o GRT correspondente ao seu sinal de curadoria será usado para mintar sinais no subgraph na L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Outros Curadores podem escolher se querem sacar a sua fração de GRT, ou também transferi-la à L2 para mintar sinais no mesmo subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Posso devolver o meu subgraph à mainnet Ethereum após a transferência? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Após a transferência, a versão da mainnet Ethereum deste subgraph será depreciada. Se quiser devolvê-lo à mainnet, será necessário relançá-lo e editá-lo de volta à mainnet. Porém, transferir de volta à mainnet do Ethereum é muito arriscado, já que as recompensas de indexação logo serão distribuidas apenas no Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. 
However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Por que preciso de ETH em bridge para completar a minha transferência? @@ -206,19 +206,19 @@ Para transferir a sua curadoria, complete os seguintes passos: \*Se necessário - por ex. se você usar um endereço de contrato. -### Como saberei se o subgraph que eu curei foi transferido para a L2? +### How will I know if the Subgraph I curated has moved to L2? -Ao visualizar a página de detalhes do subgraph, um banner notificará-lhe que este subgraph foi transferido. Siga o prompt para transferir a sua curadoria. Esta informação também aparece na página de detalhes de qualquer subgraph transferido. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### E se eu não quiser mudar a minha curadoria para a L2? -Quando um subgraph é depreciado, há a opção de retirar o seu sinal. Desta forma, se um subgraph for movido à L2, dá para escolher retirar o seu sinal na mainnet Ethereum ou enviar o sinal à L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Como sei se a minha curadoria foi transferida com êxito? Os detalhes do sinal serão acessíveis através do Explorer cerca de 20 minutos após a ativação da ferramenta de transferência à L2. -### Posso transferir a minha curadoria em vários subgraphs de uma vez? +### Can I transfer my curation on more than one Subgraph at a time? Não há opção de transferências em conjunto no momento. 
@@ -266,7 +266,7 @@ A ferramenta de transferência à L2 finalizará a transferência do seu stake e ### Devo indexar no Arbitrum antes de transferir o meu stake? -Você pode transferir o seu stake antes de preparar a indexação, mas não terá como resgatar recompensas na L2 até alocar para subgraphs na L2, indexá-los, e apresentar POIs. +You can transfer your stake before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Os Delegadores podem mudar a sua delegação antes que eu mude o meu stake de indexação? diff --git a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx index a6a744aeeb19..320c947532a4 100644 --- a/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/pt/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ O The Graph facilitou muito o processo de se mudar para a L2 no Arbitrum One. Pa Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Como transferir o seu subgraph ao Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefícios de transferir os seus subgraphs +## Benefits of transferring your Subgraphs A comunidade e os programadores centrais do The Graph andaram [preparando](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) as suas mudanças ao Arbitrum ao longo do último ano. O Arbitrum, uma blockchain layer 2, ou "L2", herda a segurança do Ethereum, mas providencia taxas de gas muito menores. -Ao publicar ou atualizar o seu subgraph na Graph Network, você interaje com contratos inteligentes no protocolo, e isto exige o pagamento de gas usando ETH.
Ao mover os seus subgraphs ao Arbitrum, quaisquer atualizações futuras ao seu subgraph exigirão taxas de gas muito menores. As taxas menores, e o fato de que bonding curves de curadoria na L2 são planas, também facilitarão a curadoria no seu subgraph para outros Curadores, a fim de aumentar as recompensas para Indexadores no seu subgraph. Este ambiente de custo reduzido também barateia a indexação e o serviço de Indexadores no seu subgraph. As recompensas de indexação também aumentarão no Arbitrum e decairão na mainnet do Ethereum nos próximos meses, então mais e mais Indexadores transferirão o seu stake e preparando as suas operações na L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferir um subgraph ao Arbitrum usa a bridge de GRT do Arbitrum, que por sua vez usa a bridge nativa do Arbitrum, para enviar o subgraph à L2. A "transferência" depreciará o subgraph na mainnet e enviará a informação para recriar o subgraph na L2 com o uso da bridge. Ele também incluirá o GRT sinalizado do dono do subgraph, que deve ser maior que zero para que a bridge aceite a transferência. 
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Ao escolher transferir o subgraph, isto converterá todo o sinal de curadoria do subgraph em GRT. Isto é equivalente à "depreciação" do subgraph na mainnet. O GRT correspondente à sua curadoria será enviado à L2 junto com o subgraph, onde ele será usado para mintar sinais em seu nome. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Outros Curadores podem escolher retirar a sua fração de GRT, ou também transferi-la à L2 para mintar sinais no mesmo subgraph. Se um dono de subgraph não transferir o seu subgraph à L2 e depreciá-lo manualmente através de uma chamada de contrato, os Curadores serão notificados, e poderão retirar a sua curadoria. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Assim que o subgraph for transferido, como toda curadoria é convertida em GRT, Indexadores não receberão mais recompensas por indexar o subgraph. Porém, haverão Indexadores que 1) continuarão a servir subgraphs transferidos por 24 horas, e 2) começarão imediatamente a indexar o subgraph na L2. 
Como estes Indexadores já têm o subgraph indexado, não deve haver necessidade de esperar que o subgraph se sincronize, e será possível consultar o subgraph na L2 quase que imediatamente. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries no subgraph na L2 deverão ser feitas para uma URL diferente (or 'arbitrum-gateway.thegraph'), mas a URL na L1 continuará a trabalhar por no mínimo 48 horas. Após isto, o gateway na L1 encaminhará queries ao gateway na L2 (por um certo tempo), mas isto adicionará latência, então é recomendado trocar todas as suas queries para a nova URL o mais rápido possível. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Como escolher a sua carteira na L2 -Ao publicar o seu subgraph na mainnet, você usou uma carteira conectada para criar o subgraph, e esta carteira é dona do NFT que representa este subgraph e lhe permite publicar atualizações. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -Ao transferir o subgraph ao Arbitrum, você pode escolher uma carteira diferente que será dona deste NFT de subgraph na L2. 
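To illustrate the query-URL switch described above: a request to the new gateway is an ordinary GraphQL POST. This is only a sketch — the API key and L2 Subgraph ID below are placeholders, not real identifiers, and only the host `arbitrum-gateway.thegraph.com` comes from the docs.

```python
# Sketch: constructing the new L2 gateway query URL and a GraphQL payload.
# API_KEY and L2_SUBGRAPH_ID are invented placeholders; substitute your own.
API_KEY = "YOUR_API_KEY"
L2_SUBGRAPH_ID = "YOUR_L2_SUBGRAPH_ID"  # note: differs from the L1 Subgraph ID

url = f"https://arbitrum-gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{L2_SUBGRAPH_ID}"

# A standard GraphQL query, sent as a JSON body in a POST request,
# e.g. requests.post(url, json=payload).json() with the `requests` library.
payload = {"query": "{ _meta { block { number } } }"}

print(url)
```

The old L1 URL keeps working for a short window, but switching clients to the new URL early avoids the added forwarding latency.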
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Se você usar uma carteira "regular" como o MetaMask (uma Conta de Titularidade Externa, ou EOA, por ex. uma carteira que não é um contrato inteligente), então isto é opcional, e é recomendado manter o mesmo endereço titular que o da L1. -Se você usar uma carteira de contrato inteligente, como uma multisig (por ex. uma Safe), então escolher um endereço de carteira diferente na L2 é obrigatório, pois as chances são altas desta conta só existir na mainnet, e você não poderá fazer transações no Arbitrum enquanto usar esta carteira. Se quiser continuar a usar uma carteira de contrato inteligente ou multisig, crie uma nova carteira no Arbitrum e use o seu endereço lá como o dono do seu subgraph na L2. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**É muito importante usar um endereço de carteira que você controle, e possa fazer transações no Arbitrum. Caso contrário, o subgraph será perdido e não poderá ser recuperado.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparações para a transferência: bridging de ETH -Transferir o subgraph envolve o envio de uma transação através da bridge, e depois, a execução de outra transação no Arbitrum. A primeira transação usa ETH na mainnet, e inclui um pouco de ETH para pagar por gas quando a mensagem for recebida na L2. 
Porém, se este gas for insuficiente, você deverá tentar executar a transação novamente e pagar o gas diretamente na L2 (este é o terceiro passo: "Confirmação da transação" abaixo). Este passo **deve ser executado até 7 dias depois do início da transação**. Além disto, a segunda transação ("4º passo: Finalização da transferência na L2") será feita diretamente no Arbitrum. Por estas razões, você precisará de um pouco de ETH em uma carteira Arbitrum. Se usar uma multisig ou uma conta de contrato inteligente, o ETH deverá estar na carteira regular (EOA) que você usar para executar as transações, e não na própria carteira multisig. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Você pode comprar ETH em algumas exchanges e retirá-la diretamente no Arbitrum, ou você pode usar a bridge do Arbitrum para enviar ETH de uma carteira na mainnet para a L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Como as taxas de gas no Arbitrum são menores, você só deve precisar de uma quantidade pequena. É recomendado começar em um limite baixo (por ex. 0.01 ETH) para que a sua transação seja aprovada. 
-## Como encontrar a Ferramenta de Transferência de Subgraphs +## Finding the Subgraph Transfer Tool -A Ferramenta de Transferência para L2 pode ser encontrada ao olhar a página do seu subgraph no Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![ferramenta de transferência](/img/L2-transfer-tool1.png) -Ela também está disponível no Explorer se você se conectar com a carteira dona de um subgraph, e na página daquele subgraph no Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferência para L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicar no botão Transfer to L2 (Transferir para L2) abrirá a ferramenta de tra ## 1º Passo: Como começar a transferência -Antes de começar a transferência, decida qual endereço será dono do subgraph na L2 (ver "Como escolher a sua carteira na L2" acima), e é altamente recomendado ter um pouco de ETH para o gas já em bridge no Arbitrum (ver "Preparações para a transferência: bridging de ETH" acima). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Note também que transferir o subgraph exige ter uma quantidade de sinal no subgraph maior que zero, com a mesma conta dona do subgraph; se você não tiver sinalizado no subgraph, você deverá adicionar um pouco de curadoria (uma adição pequena, como 1 GRT, seria o suficiente). +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Após abrir a Ferramenta de Transferências, você poderá colocar o endereço da carteira na L2 no campo "Receiving wallet address" (endereço da carteira destinatária) - **certifique-se que inseriu o endereço correto**. Clicar em Transfer Subgraph (transferir subgraph) resultará em um pedido para executar a transação na sua carteira (note que um valor em ETH é incluído para pagar pelo gas na L2); isto iniciará a transferência e depreciará o seu subgraph na L1 (veja "Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query" acima para mais detalhes sobre o que acontece nos bastidores). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Ao executar este passo, **garanta que executará o 3º passo em menos de 7 dias, ou o subgraph e o seu GRT de sinalização serão perdidos.** Isto se deve à maneira de como as mensagens L1-L2 funcionam no Arbitrum: mensagens enviadas através da bridge são "bilhetes de tentativas extras" que devem ser executadas dentro de 7 dias, e a execução inicial pode exigir outra tentativa se houver um surto no preço de gas no Arbitrum. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signaled GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
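As a rough aid (not part of the official tooling), the 7-day window for the bridge ticket described above can be tracked like this; the start timestamp is invented for illustration:

```python
from datetime import datetime, timedelta

# Arbitrum bridge messages are tickets that expire 7 days after creation;
# if step 3 has not executed by then, the Subgraph and its signaled GRT are lost.
TICKET_WINDOW = timedelta(days=7)

def ticket_deadline(started_at: datetime) -> datetime:
    """Latest time by which the L2 ticket must be executed (step 3)."""
    return started_at + TICKET_WINDOW

# Example with an invented start time:
start = datetime(2023, 9, 1, 12, 0)
print(ticket_deadline(start))  # → 2023-09-08 12:00:00
```

In practice you would use the timestamp of the L1 transaction that started the transfer.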
![Comece a transferência à L2](/img/startTransferL2.png) -## 2º Passo: A espera do caminho do subgraph até a L2 +## Step 2: Waiting for the Subgraph to get to L2 -Após iniciar a transferência, a mensagem que envia o seu subgraph da L1 para a L2 deve propagar pela bridge do Arbitrum. Isto leva cerca de 20 minutos (a bridge espera que o bloco da mainnet que contém a transação esteja "seguro" de reorganizações potenciais da chain). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Quando esta espera acabar, o Arbitrum tentará executar a transferência automaticamente nos contratos na L2. @@ -80,7 +80,7 @@ Quando esta espera acabar, o Arbitrum tentará executar a transferência automat ## 3º Passo: Como confirmar a transferência -Geralmente, este passo será executado automaticamente, já que o gas na L2 incluído no primeiro passo deverá ser suficiente para executar a transação que recebe o subgraph nos contratos do Arbitrum. Porém, em alguns casos, é possível que um surto nos preços de gas do Arbitrum faça com que esta execução automática falhe. Neste caso, o "bilhete" que envia o seu subgraph à L2 estará pendente e exigirá outra tentativa dentro de 7 dias. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. 
Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um pouco de ETH no Arbitrum, trocar a rede da sua carteira para Arbitrum, e clicar em "Confirmar Transferência" para tentar a transação novamente. @@ -88,33 +88,33 @@ Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um p ## 4º Passo: A finalização da transferência à L2 -Até aqui, o seu subgraph e GRT já foram recebidos no Arbitrum, mas o subgraph ainda não foi publicado. Você deverá se conectar com a carteira L2 que escolheu como a carteira destinatária, trocar a rede da carteira para Arbitrum, e clicar em "Publish Subgraph" (Publicar Subgraph). +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publicação do subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Espera para a publicação do subgraph](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Isto publicará o subgraph de forma que Indexadores operantes no Arbitrum comecem a servi-lo. Ele também mintará sinais de curadoria com o GRT que foi transferido da L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## 5º passo: Atualização da URL de query -Parabéns, o seu subgraph foi transferido ao Arbitrum com êxito! Para consultar o subgraph, a nova URL será: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note que a ID do subgraph no Arbitrum será diferente daquela que você tinha na mainnet, mas você pode sempre encontrá-la no Explorer ou no Studio. Como mencionado acima (ver "Como entender o que acontece com o sinal, o seu subgraph na L1 e URLs de query"), a URL antiga na L1 será apoiada por um período curto, mas você deve trocar as suas queries para o novo endereço assim que o subgraph for sincronizado na L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Como transferir a sua curadoria ao Arbitrum (L2) -## Como entender o que acontece com a curadoria ao transferir um subgraph à L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Quando o dono de um subgraph transfere um subgraph ao Arbitrum, todo o sinal do subgraph é convertido em GRT ao mesmo tempo. Isto se aplica a sinais "migrados automaticamente", por ex. sinais que não forem específicos a uma versão de um subgraph ou publicação, mas que segue a versão mais recente de um subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Esta conversão do sinal ao GRT é a mesma que aconteceria se o dono de um subgraph depreciasse o subgraph na L1.
Quando o subgraph é depreciado ou transferido, todo o sinal de curadoria é "queimado" em simultâneo (com o uso da bonding curve de curadoria) e o GRT resultante fica em posse do contrato inteligente GNS (sendo o contrato que cuida de atualizações de subgraph e sinais migrados automaticamente). Cada Curador naquele subgraph então tem um direito àquele GRT, proporcional à quantidade de ações que tinham no subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph on L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Uma fração deste GRT correspondente ao dono do subgraph é enviado à L2 junto com o subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Neste ponto, o GRT curado não acumulará mais taxas de query, então Curadores podem escolher sacar o seu GRT ou transferi-lo ao mesmo subgraph na L2, onde ele pode ser usado para mintar novos sinais de curadoria. Não há pressa para fazer isto, já que o GRT pode ser possuído por tempo indeterminado, e todos conseguem uma quantidade proporcional às suas ações, irrespectivo de quando a fizerem. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this, as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
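To make the proportional-claim rule above concrete, here is a minimal sketch with invented numbers: after the signal is burned, the GRT held by the GNS contract is split pro rata by curation shares.

```python
# Hypothetical numbers: when a Subgraph is deprecated or transferred, all
# curation signal is burned and each Curator can claim GRT in proportion
# to the shares they held on that Subgraph.
def curator_claim(total_grt: float, curator_shares: float, total_shares: float) -> float:
    """GRT claimable by one Curator after the Subgraph's signal is burned."""
    return total_grt * curator_shares / total_shares

# Example: burning yields 10,000 GRT; a Curator holds 250 of 1,000 shares.
print(curator_claim(10_000, 250, 1_000))  # → 2500.0
```

The claim is the same whenever it is exercised, which is why the docs note there is no rush to withdraw or transfer.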
## Como escolher a sua carteira na L2 @@ -130,9 +130,9 @@ Se você usar uma carteira de contrato inteligente, como uma multisig (por ex. u Antes de iniciar a transferência, você deve decidir qual endereço será titular da curadoria na L2 (ver "Como escolher a sua carteira na L2" acima), e é recomendado ter um pouco de ETH para o gas já em bridge no Arbitrum, caso seja necessário tentar a execução da mensagem na L2 novamente. Você pode comprar ETH em algumas exchanges e retirá-lo diretamente no Arbitrum, ou você pode usar a bridge do Arbitrum para enviar ETH de uma carteira na mainnet à L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - como as taxas de gas no Arbitrum são menores, você só deve precisar de uma quantidade pequena; por ex. 0.01 ETH deve ser mais que o suficiente. -Se um subgraph para o qual você cura já foi transferido para a L2, você verá uma mensagem no Explorer lhe dizendo que você curará para um subgraph transferido. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -Ao olhar a página do subgraph, você pode escolher retirar ou transferir a curadoria. Clicar em "Transfer Signal to Arbitrum" (transferir sinal ao Arbitrum) abrirá a ferramenta de transferência. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transferir sinall](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Se este for o caso, você deverá se conectar com uma carteira L2 que tenha um p ## Como retirar a sua curadoria na L1 -Se preferir não enviar o seu GRT à L2, ou preferir fazer um bridge do GRT de forma manual, você pode retirar o seu GRT curado na L1. No banner da página do subgraph, escolha "Withdraw Signal" (Retirar Sinal) e confirme a transação; o GRT será enviado ao seu endereço de Curador. 
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/pt/archived/sunrise.mdx b/website/src/pages/pt/archived/sunrise.mdx index f7e7a0faf5f5..280639c4a9e5 100644 --- a/website/src/pages/pt/archived/sunrise.mdx +++ b/website/src/pages/pt/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## O Que Foi o Nascer do Sol dos Dados Descentralizados? -O Nascer do Sol dos Dados Descentralizados foi uma iniciativa liderada pela Edge & Node, com a meta de garantir que os programadores de subgraphs fizessem uma atualização suave para a rede descentralizada do The Graph. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -Este plano teve base em desenvolvimentos anteriores do ecossistema do The Graph, e incluiu um Indexador de atualização para servir queries em subgraphs recém-editados. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### O que aconteceu com o serviço hospedado? -Os endpoints de query do serviço hospedado não estão mais disponíveis, e programadores não podem mais editar subgraphs novos no serviço hospedado. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -Durante o processo de atualização, donos de subgraphs no serviço hospedado puderam atualizar os seus subgraphs até a Graph Network. Além disto, programadores podiam resgatar subgraphs atualizados automaticamente. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. 
Additionally, developers were able to claim auto-upgraded Subgraphs. ### O Subgraph Studio foi atingido por esta atualização? Não, o Subgraph Studio não foi impactado pelo Nascer do Sol. Os subgraphs estavam disponíveis imediatamente para queries, movidos pelo Indexador de atualização, que usa a mesma infraestrutura do serviço hospedado. -### Por que subgraphs eram publicados ao Arbitrum, eles começaram a indexar uma rede diferente? +### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). ## Sobre o Indexador de Atualização > O Indexador de Atualização está atualmente ativo. -O Indexador de atualização foi construído para melhorar a experiência de atualizar subgraphs do serviço hospedado à Graph Network e apoiar novas versões de subgraphs existentes que ainda não foram indexados. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### O que o Indexador de atualização faz?
-- Ele inicializa chains que ainda não tenham recompensas de indexação na Graph Network, e garante que um Indexador esteja disponível para servir queries o mais rápido possível após a publicação de um subgraph. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexadores que operam um Indexador de atualização o fazem como um serviço público, para apoiar novos subgraphs e chains adicionais que não tenham recompensas de indexação antes da aprovação do Graph Council. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Porque a Edge & Node executa o Indexador de atualização? -A Edge & Node operou historicamente o serviço hospedado, e como resultado, já sincronizou os dados de subgraphs do serviço hospedado. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### O que o Indexador de atualização significa para Indexadores existentes? Chains que antes só eram apoiadas no serviço hospedado foram disponibilizadas para programadores na Graph Network, inicialmente, sem recompensas de indexação. -Porém, esta ação liberou taxas de query para qualquer Indexador interessado e aumentou o número de subgraphs publicados na Graph Network. Como resultado, Indexadores têm mais oportunidades para indexar e servir estes subgraphs em troca de taxas de query, antes mesmo da ativação de recompensas de indexação para uma chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. 
As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -O Indexador de atualização também fornece à comunidade de Indexadores informações sobre a demanda em potencial para subgraphs e novas chains na Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### O que isto significa para Delegantes? -O Indexador de atualização oferece uma forte oportunidade para Delegantes. Como ele permitiu que mais subgraphs fossem atualizados do serviço hospedado até a Graph Network, os Delegantes podem se beneficiar do aumento na atividade da rede. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### O Indexador de atualização concorreu com Indexadores existentes para recompensas? -Não, o Indexador de atualização só aloca a quantidade mínima por subgraph e não coleta recompensas de indexação. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -Ele opera numa base de "necessidade" e serve como uma reserva até que uma cota de qualidade de serviço seja alcançada por, no mínimo, três outros Indexadores na rede para chains e subgraphs respetivos. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### Como isto afeta os programadores de subgraph? +### How does this affect Subgraph developers? 
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### Como o Indexador de atualizações beneficia consumidores de dados? @@ -71,10 +71,10 @@ O Indexador de atualização ativa, na rede, chains que antes só tinham apoio n O Indexador de atualização precifica queries no preço do mercado, para não influenciar o mercado de taxas de queries. -### Quando o Indexador de atualização parará de apoiar um subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -O Indexador de atualização apoia um subgraph até que, no mínimo, 3 outros indexadores sirvam queries feitas nele com êxito e consistência. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Além disto, o Indexador de atualização para de apoiar um subgraph se ele não tiver sido consultado nos últimos 30 dias. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Outros Indexadores são incentivados a apoiar subgraphs com o volume de query atual. O volume de query ao Indexador de atualização deve se aproximar de zero, já que ele tem um tamanho de alocação pequeno e outros Indexadores devem ser escolhidos por queries antes disso. 
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/pt/contracts.json b/website/src/pages/pt/contracts.json index 134799f3dd0f..b660b0df679c 100644 --- a/website/src/pages/pt/contracts.json +++ b/website/src/pages/pt/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Contrato", "address": "Address" } diff --git a/website/src/pages/pt/global.json b/website/src/pages/pt/global.json index dfa39b21d79b..4521a1053837 100644 --- a/website/src/pages/pt/global.json +++ b/website/src/pages/pt/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Navegação principal", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "Exibir navegação", + "hide": "Ocultar navegação", "subgraphs": "Subgraphs", "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", + "sps": "Subgraphs movidos por Substreams", + "tokenApi": "Token API", + "indexing": "Indexação", "resources": "Recursos", - "archived": "Archived" + "archived": "Arquivados" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Última atualização", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Tempo de leitura", + "minutes": "minutos" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Página anterior", + "next": "Próxima página", + "edit": "Editar no GitHub", + "onThisPage": "Nesta página", + "tableOfContents": "Índice", + "linkToThisSection": "Link para esta secção" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", +
"caution": "Caution" + }, + "video": "Vídeo" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Parâmetros de Query", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Descrição", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Descrição", + "liveResponse": "Live Response", + "example": "Exemplo" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ops! 
Esta página foi pro espaço...", + "subtitle": "Confira se o endereço está certo, ou clique o atalho abaixo para explorar o nosso sítio.", + "back": "Página Inicial" } } diff --git a/website/src/pages/pt/index.json b/website/src/pages/pt/index.json index 5b3df70bebad..0fe9ac551a34 100644 --- a/website/src/pages/pt/index.json +++ b/website/src/pages/pt/index.json @@ -1,99 +1,175 @@ { "title": "Início", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", + "title": "Documentação do The Graph", + "description": "Comece o seu projeto web3 com as ferramentas para extrair, transformar e carregar os dados da blockchain.", + "cta1": "Como o The Graph funciona", "cta2": "Construa o seu primeiro subgraph" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Escolha uma solução adequada às suas necessidades — interaja com os dados da blockchain da sua maneira.", "subgraphs": { "title": "Subgraphs", - "description": "Extract, process, and query blockchain data with open APIs.", - "cta": "Develop a subgraph" + "description": "Extraia, processe, e solicite queries de dados da blockchain com APIs abertas.", + "cta": "Programe um subgraph" }, "substreams": { "title": "Substreams", - "description": "Fetch and consume blockchain data with parallel execution.", - "cta": "Develop with Substreams" + "description": "Solicite e consuma dados de blockchain com execução paralela.", + "cta": "Programe com Substreams" }, "sps": { - "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", - "cta": "Set up a Substreams-powered subgraph" + "title": "Subgraphs movidos por Substreams", + "description": "Boost your subgraph's efficiency and 
scalability by using Substreams.", + "cta": "Monte um subgraph movido pelo Substreams" }, "graphNode": { "title": "Graph Node", - "description": "Index blockchain data and serve it via GraphQL queries.", - "cta": "Set up a local Graph Node" + "description": "Indexe dados de blockchain e sirva via queries da GraphQL.", + "cta": "Monte um Graph Node local" }, "firehose": { "title": "Firehose", - "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", - "cta": "Get started with Firehose" + "description": "Extraia dados de blockchain em arquivos simples para melhorar tempos de sincronização e capacidades de streaming de dados.", + "cta": "Comece com o Firehose" } }, "supportedNetworks": { "title": "Redes Apoiadas", + "details": "Network Details", + "services": "Services", + "type": "Tipo", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Documentação", + "shortName": "Short Name", + "guides": "Guias", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph tem apoio a {0}. 
Para adicionar uma nova rede, {1}", + "networks": "redes", + "completeThisForm": "complete este formulário" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Nome", + "id": "ID", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Substreams", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Cobranças", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { "title": "Guias", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Encontre Dados no Graph Explorer", + "description": "Aproveite centenas de subgraphs públicos para obter dados existentes de blockchain." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Edite um Subgraph", + "description": "Adicione o seu subgraph à rede descentralizada." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Edite Substreams", + "description": "Implante o seu pacote do Substreams ao Registo do Substreams." }, "queryingBestPractices": { "title": "Etiqueta de Query", - "description": "Optimize your subgraph queries for faster, better results." + "description": "Otimize os seus queries de subgraph para obter resultados melhores e mais rápidos." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Séries de Tempo e Agregações Otimizadas", + "description": "Simplifique o seu subgraph para aumentar a sua eficiência." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." + "title": "Gestão de chaves de API", + "description": "Crie, administre, e proteja chaves de API para os seus subgraphs com facilidade." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Faça uma transferência para o The Graph", + "description": "Migre o seu subgraph suavemente de qualquer plataforma para o The Graph." 
} }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Tutoriais de Vídeo", + "watchOnYouTube": "Assista no YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "The Graph Explicado em 1 Minuto", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "O Que É Delegar?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Como Indexar na Solana com um Subgraph Movido por Substreams", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Tempo de leitura", + "duration": "Duração", "minutes": "min (\"Mínimo\")" } } diff --git a/website/src/pages/pt/indexing/_meta-titles.json b/website/src/pages/pt/indexing/_meta-titles.json index 42f4de188fd4..cd4243ace5e6 100644 --- a/website/src/pages/pt/indexing/_meta-titles.json +++ b/website/src/pages/pt/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Ferramentas do Indexador" } diff --git a/website/src/pages/pt/indexing/new-chain-integration.mdx b/website/src/pages/pt/indexing/new-chain-integration.mdx index 388561fac3d7..b22602fad027 100644 --- a/website/src/pages/pt/indexing/new-chain-integration.mdx +++ b/website/src/pages/pt/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Integração de Chains Novas --- -Chains podem trazer apoio a subgraphs para os seus ecossistemas ao iniciar uma nova integração de `graph-node`. Subgraphs são ferramentas poderosas de indexação que abrem infinitas possibilidades a programadores. O Graph Node já indexa dados das chains listadas aqui. Caso tenha interesse numa nova integração, há 2 estratégias para ela: +Chains podem trazer apoio a subgraphs para os seus ecossistemas, ao iniciar uma nova integração de `graph-node`. Subgraphs são ferramentas poderosas de indexação que abrem infinitas possibilidades a programadores. O Graph Node já indexa dados das chains listadas aqui. Caso tenha interesse numa nova integração, há 2 estratégias para ela: 1. **EVM JSON-RPC** 2. **Firehose**: Todas as soluções de integração do Firehose incluem Substreams, um motor de transmissão de grande escala com base no Firehose com apoio nativo ao `graph-node`, o que permite transformações paralelizadas. 
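The new-chain-integration page above names EVM JSON-RPC as the first integration path, and the hunk that follows lists the methods an EVM node must expose for Graph Node, including `eth_getTransactionReceipt` served in a batched JSON-RPC request. As a hedged illustration of what that batched request looks like on the wire — the transaction hashes and the node URL mentioned in the comments are placeholders, not values from this PR — a minimal sketch per the JSON-RPC 2.0 spec:

```python
import json

# Hypothetical sketch (not part of this PR): the JSON-RPC 2.0 batch body shape
# used when fetching receipts via eth_getTransactionReceipt, one of the methods
# the integration checklist requires an EVM node to answer in batched form.
def build_receipt_batch(tx_hashes):
    """Build a JSON-RPC 2.0 batch body for eth_getTransactionReceipt calls."""
    return [
        {
            "jsonrpc": "2.0",
            "id": i,  # each sub-request needs a distinct id
            "method": "eth_getTransactionReceipt",
            "params": [tx_hash],  # one tx hash per sub-request
        }
        for i, tx_hash in enumerate(tx_hashes)
    ]

# Placeholder hashes; a real node (e.g. http://localhost:8545) would receive
# this body POSTed as application/json and reply with one receipt per entry.
batch = build_receipt_batch(["0x" + "ab" * 32, "0x" + "cd" * 32])
body = json.dumps(batch)
```

Sending the whole list in one POST instead of one request per hash is what the checklist means by a batched JSON-RPC call.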
@@ -25,7 +25,7 @@ Para que o Graph Node possa ingerir dados de uma chain EVM, o node RPC deve expo - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, em um pedido conjunto em JSON-RPC -- `trace_filter` *(tracing limitado, e opcionalmente necessário, para o Graph Node)* +- `trace_filter` _(tracing limitado, e opcionalmente necessário, para o Graph Node)_ ### 2. Integração do Firehose @@ -55,7 +55,7 @@ Enquanto ambos o JSON-RPC e o Firehose são próprios para subgraphs, um Firehos ## Como Configurar um Graph Node -Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Quando o seu ambiente local estiver pronto, será possível testar a integração com a edição local de um subgraph. +Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Quando o seu ambiente local estiver pronto, será possível testar a integração com a implantação local de um subgraph. 1. [Clone o Graph Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Qu ## Subgraphs movidos por Substreams -Para integrações do Substreams ou Firehose movidas ao StreamingFast, são inclusos: apoio básico a módulos do Substreams (por exemplo: transações, logs, e eventos de contrato inteligente decodificados); e ferramentas de geração de código do Substreams. Estas ferramentas permitem a habilidade de ativar [subgraphs movidos pelo Substreams](/substreams/sps/introduction/). Siga o [Passo-a-Passo](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) e execute `substreams codegen subgraph` para sentir um gostinho das ferramentas. +Para integrações do Substreams ou Firehose movidas pelo StreamingFast, são inclusos: apoio básico a módulos do Substreams (por exemplo: transações, logs, e eventos de contrato inteligente decodificados); e ferramentas de geração de código do Substreams. 
Estas ferramentas permitem a habilidade de ativar [subgraphs movidos pelo Substreams](/substreams/sps/introduction/). Siga o [Passo-a-Passo](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) e execute `substreams codegen subgraph` para sentir um gostinho das ferramentas. diff --git a/website/src/pages/pt/indexing/overview.mdx b/website/src/pages/pt/indexing/overview.mdx index adf55ea75a43..a97272ae9669 100644 --- a/website/src/pages/pt/indexing/overview.mdx +++ b/website/src/pages/pt/indexing/overview.mdx @@ -9,39 +9,39 @@ O GRT em staking no protocolo é sujeito a um período de degelo, e pode passar Indexadores selecionam subgraphs para indexar com base no sinal de curadoria do subgraph, onde Curadores depositam GRT em staking para indicar quais subgraphs são de qualidade alta e devem ser priorizados. Consumidores (por ex., aplicativos) também podem configurar parâmetros para os quais Indexadores processam queries para seus subgraphs, além de configurar preferências para o preço das taxas de query. -## FAQ +## Perguntas Frequentes -### What is the minimum stake required to be an Indexer on the network? +### Qual o stake mínimo exigido para ser um Indexador na rede? -The minimum stake for an Indexer is currently set to 100K GRT. +O stake mínimo atual para um Indexador é de 100 mil GRT. -### What are the revenue streams for an Indexer? +### Quais são as fontes de renda para um Indexador? -**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. +**Rebates de taxas de query** — Pagamentos por serviço de queries na rede. Estes pagamentos são mediados por canais de estado entre um Indexador e um gateway. 
Cada pedido de query de um gateway contém um pagamento e a resposta correspondente: uma prova de validade de resultado de query. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Recompensas de indexação** — são distribuídas a Indexadores que indexam lançamentos de subgraph para a rede. São geradas através de uma inflação de 3% para todo o protocolo. -### How are indexing rewards distributed? +### Como são distribuídas as recompensas de indexação? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +As recompensas de indexação vêm da inflação do protocolo, que é configurada em 3% da emissão anual. Elas são distribuídas em subgraphs, com base na proporção de todos os sinais de curadoria em cada um, e depois distribuídos proporcionalmente a Indexadores baseado no stake que alocaram naquele subgraph. **Para ter direito a recompensas, uma alocação deve ser fechada com uma prova de indexação válida (POI) que atende aos padrões determinados pela carta de arbitragem.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). 
Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +A comunidade criou várias ferramentas para calcular recompensas, organizadas na [coleção de guias da comunidade](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Há também uma lista atualizada de ferramentas nos canais #Delegators e #Indexers no [servidor do Discord](https://discord.gg/graphprotocol). No próximo link, temos um [otimizador de alocações recomendadas](https://github.com/graphprotocol/allocation-optimizer) integrado com o stack de software de indexador. -### What is a proof of indexing (POI)? +### O que é uma prova de indexação (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs (Provas de indexação) são usadas na rede para verificar que um Indexador está a indexar os subgraphs nos quais eles alocaram. Uma POI para o primeiro bloco da epoch atual deve ser enviada ao fechar uma alocação, para que aquela alocação seja elegível a recompensas de indexação. Uma POI para um bloco serve como resumo para todas as transações de armazenamento de entidade para uma implantação específica de subgraph, até, e incluindo, aquele bloco. -### When are indexing rewards distributed? +### Quando são distribuídas as recompensas de indexação? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. 
That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +As alocações acumulam recompensas continuamente, enquanto permanecerem ativas e alocadas dentro de 28 epochs. As recompensas são coletadas pelos Indexadores, e distribuídas sempre que suas alocações são fechadas. Isto acontece ou manualmente, quando o Indexador quer fechá-las à força; ou após 28 epochs, quando um Delegante pode fechar a alocação para o Indexador, mas isto não rende recompensas. A vida máxima de uma alocação é de 28 epochs (no momento, um epoch dura cerca de 24 horas). -### Can pending indexing rewards be monitored? +### É possível monitorar recompensas de indexação pendentes? -The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +O contrato RewardsManager tem uma função de apenas-leitura — [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) — que pode ser usada para verificar as recompensas pendentes para uma alocação específica. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Muitos dos painéis feitos pela comunidade incluem valores pendentes de recompensas, que podem facilmente ser conferidos de forma manual ao seguir os seguintes passos: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Faça um query do [subgraph da mainnet](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) para buscar as IDs de todas as alocações ativas: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Use o Etherscan para chamar o `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Navegue, na [interface do Etherscan, para o contrato Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- Para chamar o `getRewards()`: + - Abra o dropdown **9. getRewards**. + - Preencha o campo da **allocationID**. + - Clique no botão **Query**. -### What are disputes and where can I view them? +### O que são disputas e onde posso visualizá-las? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +As consultas em query e alocações de Indexadores podem ser disputadas no The Graph durante o período de disputa. O período de disputa varia a depender do tipo de disputa. Consultas/atestações têm uma janela de disputa de 7 epochs, enquanto alocações duram até 56 epochs. 
Após o vencimento destes períodos, não se pode abrir disputas contra alocações ou consultas. Quando uma disputa é aberta, um depósito mínimo de 10.000 GRT é exigido pelos Pescadores, que será trancado até ser finalizada a disputa e servida uma resolução. Pescadores são quaisquer participantes de rede que abrem disputas. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Há **três** possíveis resultados para disputas, assim como para o depósito dos Pescadores. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Se a disputa for rejeitada, o GRT depositado pelo Pescador será queimado, e o Indexador disputado não será penalizado. +- Se a disputa terminar em empate, o depósito do Pescador será retornado, e o Indexador disputado não será penalizado. +- Se a disputa for aceite, o GRT depositado pelo Pescador será retornado, o Indexador disputado será penalizado, e o(s) Pescador(es) ganhará(ão) 50% do GRT cortado. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +As disputas podem ser visualizadas na interface na página de perfil de um Indexador, sob a aba `Disputes` (Disputas). -### What are query fee rebates and when are they distributed? +### O que são rebates de taxas de consulta e quando eles são distribuídos? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). 
The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +As taxas de query são coletadas pelo gateway e distribuídas aos Indexadores de acordo com a função de rebate exponencial (veja o GIP [aqui](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). A tal função é proposta como uma maneira de garantir que indexadores alcancem o melhor resultado ao servir queries fielmente. Ela funciona com o incentivo de Indexadores para alocarem uma grande quantia de stake (que pode ser cortada por errar ao servir um query) relativa à quantidade de taxas de query que possam coletar. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Quando uma alocação é fechada, os rebates podem ser reivindicados pelo Indexador. Após serem resgatados, os rebates de taxa de consulta são distribuídos ao Indexador e os seus Delegantes com base na porção de taxas de query e na função de rebate exponencial. -### What is query fee cut and indexing reward cut? +### O que são porção de taxa de query e porção de recompensa de indexação? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters.
+Os valores `queryFeeCut` e `indexingRewardCut` são parâmetros de delegação que o Indexador pode configurar junto com o `cooldownBlocks` para controlar a distribuição de GRT entre o Indexador e os seus Delegantes. Veja os últimos passos no [Staking no Protocolo](/indexing/overview/#stake-in-the-protocol) para instruções sobre como configurar os parâmetros de delegação. -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** — a % de rebates de taxas de query a ser distribuída ao Indexador. Se isto for configurado em 95%, o Indexador receberá 95% das taxas de query ganhas quando uma alocação for fechada, com os outros 5% destinados aos Delegantes. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** — a % de recompensas de indexação a ser distribuída ao Indexador. Se isto for configurado em 95%, o Indexador receberá 95% do pool de recompensas de indexação ao fechamento de uma alocação e os Delegantes dividirão os outros 5%. -### How do Indexers know which subgraphs to index? +### Como os Indexadores podem saber quais subgraphs indexar? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Os Indexadores podem se diferenciar ao aplicar técnicas avançadas para tomar decisões de indexação de subgraphs, mas para dar uma ideia geral, vamos discutir várias métricas importantes usadas para avaliar subgraphs na rede:

-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Sinal de curadoria** — A proporção do sinal de curadoria na rede aplicado a um subgraph particular mede bem o interesse nesse subgraph, especialmente durante a fase de inicialização, quando o volume de queries começa a subir.

-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Taxas de query coletadas** — Os dados históricos para o volume de taxas de query coletadas para um subgraph específico indicam bem a demanda futura.

-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Quantidade em staking** — Ao monitorar o comportamento de outros Indexadores ou inspecionar proporções do stake total alocado a subgraphs específicos, um Indexador pode monitorar o lado da oferta de queries de subgraph, para assim identificar subgraphs nos quais a rede mostra confiança ou subgraphs que podem necessitar de mais oferta.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs sem recompensas de indexação** — Alguns subgraphs não geram recompensas de indexação, principalmente porque usam recursos não apoiados, como o IPFS, ou porque consultam outra rede fora da mainnet. Se um subgraph não estiver gerando recompensas de indexação, uma mensagem aparecerá nele.

-### What are the hardware requirements?
+### Quais são os requisitos de hardware?

-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
-- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Pequeno** — O suficiente para começar a indexar vários subgraphs. Provavelmente precisará de expansões.
+- **Normal** — Setup padrão. Este é o usado nos exemplos de manifests de implantação de k8s/terraform.
+- **Médio** — Indexador de produção. Apoia 100 subgraphs e de 200 a 500 solicitações por segundo.
+- **Grande** — Preparado para indexar todos os subgraphs usados atualmente e servir solicitações para o tráfego relacionado.

-| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Configuração | Postgres
(CPUs) | Postgres
(memória em GBs) | Postgres
(disco em TBs) | VMs
(CPUs) | VMs
(memória em GBs) |
+| ------------ | :------------------: | :----------------------------: | :--------------------------: | :-------------: | :-----------------------: |
+| Pequeno | 4 | 8 | 1 | 4 | 16 |
+| Normal | 8 | 30 | 1 | 12 | 48 |
+| Médio | 16 | 64 | 2 | 32 | 64 |
+| Grande | 72 | 468 | 3.5 | 48 | 184 |

-### What are some basic security precautions an Indexer should take?
+### Há alguma precaução básica de segurança que um Indexador deve tomar?

-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Carteira de operador** — Configurar uma carteira de operador é importante, pois permite a um Indexador manter a separação entre as suas chaves que controlam o stake e aquelas no controle das operações diárias. Mais informações em [Staking no Protocolo](/indexing/overview/#stake-in-the-protocol).

-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** — Apenas o serviço de Indexador precisa ser exposto publicamente, e deve-se prestar atenção especial ao bloqueio das portas de admin e do acesso ao banco de dados: o endpoint JSON-RPC do Graph Node (porta padrão: 8030), o endpoint da API de gestão do Indexador (porta padrão: 18000) e o endpoint do banco de dados Postgres (porta padrão: 5432) não devem ser expostos.
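Como complemento ao item de firewall acima, um esboço ilustrativo de bloqueio de portas — as suposições aqui são um host Linux com `ufw` e a porta 7600 para o serviço de Indexador; ajuste às portas reais do seu setup:

```shell
# Política padrão: negar todo tráfego de entrada; liberar apenas o serviço público
ufw default deny incoming
ufw allow 7600/tcp    # serviço de Indexador (único componente exposto publicamente)
ufw deny 8030/tcp     # JSON-RPC do Graph Node — manter fechado
ufw deny 18000/tcp    # API de gestão do Indexador — manter fechado
ufw deny 5432/tcp     # banco de dados Postgres — manter fechado
ufw enable
```

Com a política padrão de negar entrada, as regras `deny` explícitas são redundantes, mas documentam a intenção de nunca expor essas portas.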
-## Infrastructure
+## Infraestrutura

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+O núcleo da infraestrutura de um Indexador é o Graph Node, que monitora as redes indexadas, extrai e carrega dados conforme a definição de um subgraph, e os serve como uma [API GraphQL](/about/#how-the-graph-works). O Graph Node deve estar conectado a um endpoint que expõe dados de cada rede indexada; a um node IPFS para a obtenção de dados; a um banco de dados PostgreSQL para o seu armazenamento; e a componentes de Indexador que facilitam as suas interações com a rede.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **Banco de dados PostgreSQL** — O armazenamento principal para o Graph Node, onde dados de subgraph são armazenados. O serviço e o agente indexador também usam o banco de dados para armazenar dados de canal de estado, modelos de custo, regras de indexação, e ações de alocação.

-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Endpoint de dados** — Para redes compatíveis com EVMs, o Graph Node deve estar conectado a um endpoint que expõe uma API JSON-RPC compatível com EVMs. Isto pode ser um único cliente, ou um setup mais complexo que faz balanceamento de carga entre vários clientes. É importante saber que certos subgraphs exigirão capacidades particulares de clientes, como o modo de arquivo e/ou a API de rastreamento do Parity.

-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **Node IPFS (versão abaixo de 5)** — Os metadados de implantação de subgraph são armazenados na rede IPFS. O Graph Node acessa primariamente o node IPFS durante a implantação do subgraph, para buscar o manifest e todos os arquivos ligados. Indexadores de rede não precisam hospedar seu próprio node IPFS, pois já há um hospedado para a rede em https://ipfs.network.thegraph.com.

-- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
+- **Serviço de Indexador** — Cuida de todas as comunicações externas necessárias com a rede. Compartilha modelos de custo e estados de indexação, repassa pedidos de query de gateways para um Graph Node, e administra os pagamentos de query através de canais de estado com o gateway.

-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Agente Indexador** — Facilita as interações de Indexadores on-chain, que incluem cadastros na rede, gestão de implantações de subgraph ao(s) seu(s) Graph Node(s), e gestão de alocações.

-- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
+- **Servidor de métricas Prometheus** — O Graph Node e os componentes de Indexador registram as suas métricas no servidor de métricas.

-Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.
+Observe: Para apoiar o escalamento ágil, recomendamos que assuntos de query e de indexação sejam separados entre conjuntos diferentes de nodes: nodes de query e nodes de indexação.

-### Ports overview
+### Visão geral das portas

-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **Importante:** Cuidado ao expor portas publicamente — as **portas de administração** devem ser mantidas trancadas. Isto inclui o endpoint JSON-RPC do Graph Node e os endpoints de gestão de Indexador detalhados abaixo.

#### Graph Node

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ----------------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | Servidor HTTP GraphQL
(para queries de subgraph) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | WS GraphQL
(para inscrições a subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(para gerir implantações) | / | \--admin-port | - | +| 8030 | API de estado de indexação do subgraph | /graphql | \--index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | \--metrics-port | - | -#### Indexer Service +#### Serviço Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ----------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | Servidor HTTP GraphQL
(para queries pagos de subgraph) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas Prometheus | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Agente Indexador -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | -------------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | API de gestão de Indexador | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Como preparar uma infraestrutura de servidor com o Terraform no Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Nota: Como alternativa, os Indexadores podem usar o AWS, Microsoft Azure, ou Alibaba. -#### Install prerequisites +#### Pré-requisitos para a instalação - Google Cloud SDK -- Kubectl command line tool +- Ferramenta de linha de comando Kubectl - Terraform -#### Create a Google Cloud Project +#### Como criar um projeto no Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Clone ou navegue ao [repositório de Indexador](https://github.com/graphprotocol/indexer). -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Navegue ao diretório `./terraform`, é aqui onde todos os comandos devem ser executados. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Autentique com o Google Cloud e crie um projeto novo. 
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Use a página de cobrança do Google Cloud Console para ativar cobranças para o novo projeto. -- Create a Google Cloud configuration. +- Crie uma configuração no Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Ative as APIs necessárias do Google Cloud. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Crie uma conta de serviço. ```sh svc_name= @@ -225,7 +225,7 @@ gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Pegue o email da conta de serviço da lista svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Ative o peering entre o banco de dados e o cluster Kubernetes, que será criado no próximo passo. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,35 +249,35 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Crie o arquivo de configuração mínimo no terraform (atualize quando necessário). 
```sh indexer= cat > terraform.tfvars < \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Agente indexador docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Execute os componentes ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**NOTA**: Após iniciar os containers, o serviço Indexador deve ser acessível no [http://localhost:7600](http://localhost:7600) e o agente indexador deve expor a API de gestão de Indexador no [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### Usando K8s e Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Veja a seção sobre [preparar infraestruturas de servidor com o Terraform no Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Uso -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). +> **NOTA**: Todas as variáveis de configuração de runtime (tempo de execução) podem ser aplicadas como parâmetros ao comando na inicialização, ou usando variáveis de ambiente do formato `COMPONENT_NAME_VARIABLE_NAME`(por ex. `INDEXER_AGENT_ETHEREUM`). 
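A convenção `COMPONENT_NAME_VARIABLE_NAME` citada na nota acima pode ser derivada mecanicamente — um esboço em shell puro, onde os nomes `indexer-agent` e `ethereum` são apenas ilustrativos:

```shell
# Converte o nome do componente + a flag CLI no nome da variável de ambiente correspondente
component="indexer-agent"   # nome do componente
flag="ethereum"             # flag CLI (sem o prefixo --)
var="$(echo "${component}_${flag}" | tr 'a-z' 'A-Z' | tr '-' '_')"
echo "$var"                 # imprime INDEXER_AGENT_ETHEREUM
```

Ou seja, passar `--ethereum` ao `indexer-agent` equivale a definir a variável de ambiente `INDEXER_AGENT_ETHEREUM`, exatamente como no exemplo da nota.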
-#### Indexer agent +#### Agente Indexador ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Serviço Indexador ```sh SERVER_HOST=localhost \ @@ -516,56 +516,56 @@ graph-indexer-service start \ #### Indexer CLI -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +O Indexer CLI é um plugin para o [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), acessível no terminal em `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Gestão de Indexador com o Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +O programa recomendado para interagir com a **API de Gestão de Indexador** é o **Indexer CLI**, uma extensão ao **Graph CLI**. O agente precisa de comandos de um Indexador para poder interagir de forma autônoma com a rede em nome do Indexador. 
Os mecanismos que definem o comportamento de um agente indexador são o modo de **gestão de alocações** e as **regras de indexação**. No modo automático, um Indexador pode usar **regras de indexação** para aplicar estratégias específicas para a escolha de subgraphs a indexar e servir queries. As regras são administradas através de uma API GraphQL servida pelo agente, conhecida como a API de Gestão de Indexador. No modo manual, um Indexador pode criar ações de alocação usando a **fila de ações**, além de aprová-las explicitamente antes de serem executadas. No modo de supervisão, as **regras de indexação** são usadas para popular a **fila de ações** e também exigem aprovação explícita para a execução.

-#### Usage
+#### Uso

-The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here.
+O **Indexer CLI** se conecta ao agente indexador, normalmente através do redirecionamento de portas, para que a CLI não precise ser executada no mesmo servidor ou cluster. Para facilitar o seu começo, e para fins de contexto, a CLI será descrita brevemente aqui.

-- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`)
+- `graph indexer connect ` — Conecta à API de gestão de Indexador. Tipicamente, a conexão ao servidor é aberta através do redirecionamento de portas, para que a CLI possa ser operada remotamente com facilidade. (Exemplo: `kubectl port-forward pod/ 8000:8000`)

-- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults.
An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.
+- `graph indexer rules get [options] [ ...]` — Mostra uma ou mais regras de indexação usando `all` como o `` para mostrar todas as regras, ou `global` para exibir os padrões globais. Um argumento adicional `--merged` pode ser usado para especificar que as regras específicas à implantação são fundidas com a regra global. É assim que elas são aplicadas no agente de Indexador.

-- `graph indexer rules set [options] ...` - Set one or more indexing rules.
+- `graph indexer rules set [options] ...` — Configura uma ou mais regras de indexação.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` — Começa a indexar uma implantação de subgraph, se disponível, e configura a sua `decisionBasis` para `always`, para que o agente indexador sempre escolha indexá-la. Caso a regra global seja configurada para `always`, todos os subgraphs disponíveis na rede serão indexados.

-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
+- `graph indexer rules stop [options] ` — Para de indexar uma implantação e configura a sua `decisionBasis` em `never`, a fim de pular esta implantação ao decidir quais implantações indexar.

-- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment.
+- `graph indexer rules maybe [options] ` — Configura a `decisionBasis` de uma implantação para `rules`, para que o agente Indexador use regras de indexação para decidir se esta implantação será ou não indexada.

-- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status.
+- `graph indexer actions get [options] ` — Busca uma ou mais ações usando `all`, ou deixe o `action-id` vazio para obter todas as ações. Um argumento adicional, `--status`, pode ser usado para imprimir todas as ações de um certo estado.

-- `graph indexer action queue allocate ` - Queue allocation action
+- `graph indexer action queue allocate ` — Enfileira uma ação de alocação

-- `graph indexer action queue reallocate ` - Queue reallocate action
+- `graph indexer action queue reallocate ` — Enfileira uma ação de realocação

-- `graph indexer action queue unallocate ` - Queue unallocate action
+- `graph indexer action queue unallocate ` — Enfileira uma ação de desalocação

-- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator
+- `graph indexer actions cancel [ ...]` — Cancela todas as ações na fila se a id não for especificada; caso contrário, cancela o conjunto de ids informado, separado por espaços

-- `graph indexer actions approve [ ...]` - Approve multiple actions for execution
+- `graph indexer actions approve [ ...]` — Aprova múltiplas ações para execução

-- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately
+- `graph indexer actions execute approve` — Força a execução imediata das ações aprovadas

-All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Todos os comandos que exibem regras na saída podem escolher entre os formatos de saída aceitos (`table`, `yaml` e `json`) com o argumento `-output`.

-#### Indexing rules
+#### Regras de indexação

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+As regras de indexação podem ser aplicadas como padrões globais ou para implantações específicas de subgraph com o uso das suas IDs. Os campos `deployment` e `decisionBasis` são obrigatórios, enquanto todos os outros campos são opcionais. Quando uma regra de indexação tem `rules` como a `decisionBasis`, o agente de Indexador comparará valores de limiar não-nulos naquela regra com valores obtidos da rede para a implantação correspondente. Se a implantação do subgraph tiver valores acima (ou abaixo) de qualquer um dos limiares, ela será escolhida para a indexação.

-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+Por exemplo: se a regra global tem um `minStake` de **5** (GRT), qualquer implantação de subgraph que tiver mais de 5 (GRT) de stake alocado nela será indexada. Regras de limiar incluem `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, e `minAverageQueryFees`.
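O exemplo de `minStake` acima pode ser aplicado via Indexer CLI — um esboço ilustrativo (os valores são hipotéticos; confirme a sintaxe exata com `graph indexer rules --help`):

```shell
# Define o limiar global: implantações com mais de 5 GRT de stake alocado serão indexadas
graph indexer rules set global minStake 5 decisionBasis rules

# Confere a regra global resultante
graph indexer rules get global
```

Os comandos exigem um agente de Indexador em execução e uma conexão prévia via `graph indexer connect`.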
-Data model:
+Modelo de dados:

```graphql
type IndexingRule {
@@ -599,7 +599,7 @@ IndexingDecisionBasis {
}
```

-Example usage of indexing rule:
+Exemplo de uso de regra de indexação:

```
graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
@@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
```

-#### Actions queue CLI
+#### CLI de fila de ações

-The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+O indexer-cli fornece um módulo `actions` para trabalhar manualmente com a fila de ações. Ele interage com a fila de ações através da **API GraphQL** hospedada pelo servidor de gestão de indexador.

-The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like:
+O programa de execução de ações só retirará itens da fila para execução se esses tiverem o `ActionStatus = approved`. No fluxo recomendado, as ações são adicionadas à fila com `ActionStatus = queued`; depois, devem ser aprovadas para serem executadas on-chain. O fluxo geral ficará assim:

-- Action added to the queue by the 3rd party optimizer tool or indexer-cli user
-- Indexer can use the `indexer-cli` to view all queued actions
-- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input.
-- The execution worker regularly polls the queue for approved actions.
It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`.
-- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode.
-- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.
+- Ação adicionada à fila por uma ferramenta de otimização de terceiros ou por um utilizador do indexer-cli
+- O Indexador pode usar o `indexer-cli` para visualizar todas as ações enfileiradas
+- O Indexador (ou outro software) pode aprovar ou cancelar ações na fila usando o `indexer-cli`. Os comandos de aprovação e cancelamento aceitam um arranjo de ids de ação como entrada.
+- O programa de execução consulta a fila regularmente em busca de ações aprovadas. Ele tomará as ações `approved` da fila, tentará executá-las, e atualizará os valores no banco de dados, a depender do estado da execução, para `success` ou `failed`.
+- Se uma ação tiver êxito, o programa garantirá a presença de uma regra de indexação que diz ao agente como administrar a alocação dali em diante. Isto é útil ao executar ações manuais enquanto o agente está no modo `auto` ou `oversight`.
+- O indexador pode monitorar a fila de ações para ver um histórico de execuções de ação e, se necessário, aprovar novamente e atualizar itens de ação caso a sua execução falhe. A fila de ações provê um histórico de todas as ações enfileiradas e tomadas.
-Data model: +Modelo de dados: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Exemplo de uso da fonte: ```bash graph indexer actions get all @@ -677,141 +677,142 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Observe que os tipos apoiados de ações para gestão de alocação têm requisitos diferentes de entrada: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` — aloca stakes a uma implantação de subgraph específica - - required action params: + - parâmetros de ação exigidos: - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` — fecha uma alocação, que libera o stake para ser redistribuído em outro lugar - - required action params: + - parâmetros de ação exigidos: - allocationID - deploymentID - - optional action params: + - parâmetros de ação opcionais: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (força o uso do POI providenciado, mesmo se ele não corresponder ao providenciado pelo graph-node) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` — fecha a alocação automaticamente e abre uma alocação nova para a mesma implantação de subgraph - - required action params: + - parâmetros de ação exigidos: - allocationID - deploymentID - amount - - optional action params: + - parâmetros de ação opcionais: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (força o uso do POI providenciado, mesmo se ele não corresponder ao providenciado pelo graph-node) -#### Cost models +#### Modelos de custo -Cost models provide dynamic pricing for queries based on market and query attributes. 
The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Modelos de custo fornecem preços dinâmicos para queries, com base em atributos de mercado e query. O Serviço de Indexador compartilha um modelo de custo com os gateways para cada subgraph ao qual ele pretende responder a queries. Os gateways, por sua vez, usam o modelo de custo para decidir seleções de Indexador por query e para negociar pagamentos com Indexadores escolhidos. #### Agora -The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. +A linguagem Agora providencia um formato flexível para a declaração de modelos de custo para queries. Um modelo de preço do Agora é uma sequência de declarações, executadas em ordem, para cada query de alto nível em um query GraphQL. Para cada query de alto nível, a primeira declaração correspondente determina o preço para esse query. -A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
Valores globais também podem ser definidos e substituídos no lugar de espaços reservados (placeholders) em uma expressão. -Example cost model: +Exemplo de modelo de custo: ``` -# This statement captures the skip value, -# uses a boolean expression in the predicate to match specific queries that use `skip` -# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +# Esta declaração captura o valor de 'skip', +# usa uma expressão boolean no predicado para corresponder a consultas específicas que usam 'skip' +# e uma expressão de custo para calcular o custo baseado no valor 'skip' e no global SYSTEM_LOAD query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; -# This default will match any GraphQL expression. -# It uses a Global substituted into the expression to calculate cost +# Este padrão corresponderá a qualquer expressão GraphQL. +# Ele usa um Global substituído na expressão para calcular o custo default => 0.1 * $SYSTEM_LOAD; ``` -Example query costing using the above model: +Exemplo de custo de query usando o modelo acima: -| Query | Price | +| Query | Preço | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | -#### Applying the cost model +#### Aplicação do modelo de custo -Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. +Os modelos de custo são aplicados através do Indexer CLI, que os repassa à API de Gestão do agente de Indexador para armazenamento no banco de dados. O Serviço de Indexador depois irá localizar e servir os modelos de custo para gateways, sempre que eles forem requisitados.
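Antes de aplicar um modelo, pode ajudar conferir a aritmética do exemplo de modelo de custo mostrado acima. Segue um esboço hipotético em Python que reproduz apenas a lógica daquele exemplo (não é o mecanismo real do Agora), assumindo `SYSTEM_LOAD = 1.0`, como na tabela de preços:

```python
# Esboço ilustrativo: reproduz o modelo de exemplo do Agora acima.
SYSTEM_LOAD = 1.0  # suposição; na prática vem de `indexer cost set variables`

def price_top_level_query(name, skip):
    # As declarações são avaliadas em ordem; a primeira que
    # corresponder determina o preço do query de alto nível.
    if name == "pairs" and skip is not None and skip > 2000:
        return 0.0001 * skip * SYSTEM_LOAD
    # Declaração `default`: corresponde a qualquer query.
    return 0.1 * SYSTEM_LOAD

def price_query(top_level_queries):
    # O preço de um query GraphQL é a soma dos preços de cada
    # query de alto nível que ele contém.
    return sum(price_top_level_query(n, s) for n, s in top_level_queries)

print(price_query([("pairs", 5000)]))                    # 0.5 GRT
print(price_query([("tokens", None)]))                   # 0.1 GRT
print(price_query([("pairs", 5000), ("tokens", None)]))  # 0.6 GRT
```

Os três resultados correspondem às linhas da tabela acima: o preço do query combinado é a soma dos preços dos dois queries de alto nível.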
```sh indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' indexer cost set model my_model.agora ``` -## Interacting with the network +## Interações com a rede -### Stake in the protocol +### Stake no protocolo -The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. +Os primeiros passos para participar na rede como Indexador consistem em aprovar o protocolo, fazer staking de fundos, e (opcionalmente) preparar um endereço de operador para interações ordinárias do protocolo. -> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). +> Nota: Para os propósitos destas instruções, o Remix será usado para interação com contratos, mas é possível escolher a sua própria ferramenta ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) e [MyCrypto](https://www.mycrypto.com/account) são algumas outras ferramentas conhecidas). -Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. +Quando um Indexador faz stake de GRT no protocolo, será possível iniciar os seus [componentes](/indexing/overview/#indexer-components) e começar as suas interações com a rede. -#### Approve tokens +#### Aprovação de tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Abra o [app Remix](https://remix.ethereum.org/) em um navegador -2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). +2. 
No `File Explorer`, crie um arquivo chamado **GraphToken.abi** com a [Token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). -3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Com `GraphToken.abi` selecionado e aberto no editor, abra a seção `Deploy and Run Transactions` (Implantar e Executar Transações) na interface do Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Na opção **Environment** (ambiente), selecione `Injected Web3`, e sob `Account` (conta), selecione o seu endereço de Indexador. -5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Configure o endereço de contrato de GraphToken — cole o endereço de contrato do GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) próximo ao `At Address` e clique no botão `At address` para aplicar. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Chame a função `approve(spender, amount)` para aprovar o contrato de Staking. Preencha `spender` com o endereço do contrato de Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) e `amount` com a quantidade de tokens a depositar em stake (em wei). -#### Stake tokens +#### Staking de tokens -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Abra o [app Remix](https://remix.ethereum.org/) em um navegador -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. No `File Explorer`, crie um arquivo chamado **Staking.abi** com a ABI de staking. -3.
With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. Com o `Staking.abi` selecionado e aberto no editor, entre na seção com `Deploy and Run Transactions` na interface do Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. Na opção **Environment** (ambiente), selecione `Injected Web3`, e sob `Account` (conta), selecione o seu endereço de Indexador. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. Configure o endereço de contrato de Staking — cole o endereço de contrato do Staking (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) próximo ao `At Address` e clique no botão `At address` para aplicar. -6. Call `stake()` to stake GRT in the protocol. +6. Chame o `stake()` para fazer stake de GRT no protocolo. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Opcional) Os Indexadores podem aprovar outro endereço para operar sua infraestrutura de Indexador, a fim de poder separar as chaves que controlam os fundos daquelas que realizam ações rotineiras, como alocar em subgraphs e servir queries (pagos). Para configurar o operador, chame o `setOperator()` com o endereço do operador. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. 
The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Opcional) Para controlar a distribuição de recompensas e atrair Delegantes estrategicamente, os Indexadores podem atualizar os seus parâmetros de delegação ao atualizar o seu indexingRewardCut (partes por milhão); queryFeeCut (partes por milhão); e cooldownBlocks (número de blocos). Para fazer isto, chame o `setDelegationParameters()`. O seguinte exemplo configura o queryFeeCut para distribuir 95% de rebates de query ao Indexador e 5% aos Delegantes; configura o indexingRewardCut para distribuir 60% de recompensas de indexação ao Indexador e 40% aos Delegantes; e configura o período do `cooldownBlocks` para 500 blocos. ``` setDelegationParameters(950000, 600000, 500) ``` -### Setting delegation parameters +### Configuração de parâmetros de delegação -The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. +A função `setDelegationParameters()` no [contrato de staking](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) é essencial para Indexadores; esta permite configurar parâmetros que definem as suas interações com Delegantes, o que influencia a sua capacidade de delegação e divisão de recompensas. -### How to set delegation parameters +### Como configurar parâmetros de delegação -To set the delegation parameters using Graph Explorer interface, follow these steps: -1.
Navigate to [Graph Explorer](https://thegraph.com/explorer/). -2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. -3. Connect the wallet you have as a signer. -4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Navegue para o [Graph Explorer](https://thegraph.com/explorer/). +2. Conecte a sua carteira. Escolha a multisig (por ex., Gnosis Safe), e depois, a mainnet. Observe que será necessário repetir este processo para o Arbitrum One. +3. Conecte a carteira que você possui como signatária. +4. Navegue até a seção 'Settings' (Configurações) e selecione 'Delegation Parameters' (Parâmetros de Delegação). Estes parâmetros devem ser configurados para alcançar uma parte efetiva dentro do intervalo desejado. Após preencher os campos com valores, a interface calculará automaticamente a parte efetiva. Ajuste estes valores como necessário para obter a percentagem de parte efetiva desejada. +5. Envie a transação à rede. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Nota: Esta transação deverá ser confirmada pelos signatários da carteira multisig. -### The life of an allocation +### A vida de uma alocação -After being created by an Indexer a healthy allocation goes through two states. +Após criada por um Indexador, uma alocação sadia passa por dois estados. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**.
A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Ativa** - Quando uma alocação é criada on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), ela é considerada **ativa**. Uma porção do stake próprio e/ou delegado do Indexador é alocada a uma implantação de subgraph, que lhe permite resgatar recompensas de indexação e servir queries para aquela implantação de subgraph. O agente indexador cria alocações baseadas nas regras do Indexador. -- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). +- **Fechada** - Um Indexador pode fechar uma alocação após a passagem de um epoch ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)), ou o seu agente indexador a fechará automaticamente após o **maxAllocationEpochs** (atualmente, 28 dias). Quando uma alocação é fechada com uma prova de indexação válida (POI), as suas recompensas de indexação são distribuídas ao Indexador e aos seus Delegantes ([aprenda mais](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. 
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +É ideal que os Indexadores utilizem a funcionalidade de sincronização off-chain para sincronizar implantações de subgraph à chainhead antes de criar a alocação on-chain. Esta ferramenta é mais útil para subgraphs que demorem mais de 28 epochs para sincronizar, ou que têm chances de falhar não-deterministicamente. diff --git a/website/src/pages/pt/indexing/supported-network-requirements.mdx b/website/src/pages/pt/indexing/supported-network-requirements.mdx index d678f0534f01..c1bd4433f1d7 100644 --- a/website/src/pages/pt/indexing/supported-network-requirements.mdx +++ b/website/src/pages/pt/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Requisitos de Redes Apoiadas --- -| Rede | Guias | Requisitos de sistema | Recompensas de Indexação | -| --- | --- | --- | :-: | -| Arbitrum | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Guia Docker](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | CPU de 4+ núcleos
Ubuntu 22.04
16GB+ RAM
>= SSD NVMe com mais de 8 TiB
_última atualização em agosto de 2023_ | ✅ | -| Avalanche | [Guia Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 5 TiB
_última atualização em agosto de 2023_ | ✅ | -| Base | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guia GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guia GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | CPU de 8+ núcleos
Debian 12/Ubuntu 22.04
16 GB RAM
mais 4.5TB (NVMe preferido)
_última atualização em 14 de maio de 2024_ | ✅ | -| Binance | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | CPU de 8 núcleos e 16 threads
Ubuntu 22.04
16GB+ RAM
NVMe SSD com mais de 14 TiB
_última atualização em 22 de junho de 2024_ | ✅ | -| Celo | [Guia Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
>= SSD NVMe com mais de 2 TiB
_última atualização em agosto de 2023_ | ✅ | -| Ethereum | [Guia Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Frequência de clock maior que número de núcleos
Ubuntu 22.04
16GB+ RAM
Mais de 3TB (NVMe recomendado)
_última atualização em agosto de 2023_ | ✅ | -| Fantom | [Guia Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 13 TiB
_última atualização em agosto de 2023_ | ✅ | -| Gnosis | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | CPU de 6 núcleos e 12 threads
Ubuntu 22.04
16GB+ RAM
NVMe SSD com mais de 3 TiB
_última atualização em agosto de 2023_ | ✅ | -| Linea | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | CPU de 4+ núcleos
Ubuntu 22.04
16GB+ RAM
>= SSD NVMe com mais de 1 TiB
_última atualização em 2 de abril de 2024_ | ✅ | -| Optimism | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Guia GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Guia GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 8 TiB
_última atualização em agosto de 2023_ | ✅ | -| Polygon | [Guia Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | CPU de 16 núcleos
Ubuntu 22.04
32GB+ RAM
>= SSD NVMe com mais de 10 TiB
_última atualização em agosto de 2023_ | ✅ | -| Scroll | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Guia Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | CPU de 4 núcleos e 8 threads
Debian 12
16GB+ RAM
SSD NVMe com mais de 1 TiB
_última atualização em 3 de abril de 2024_ | ✅ | +| Rede | Guias | Requisitos de sistema | Recompensas de Indexação | +| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------: | +| Arbitrum | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Guia Docker](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | CPU de 4+ núcleos
Ubuntu 22.04
16GB+ RAM
SSD NVMe de 8 TiB ou mais<br />
_última atualização em agosto de 2023_ | ✅ | +| Avalanche | [Guia Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 5 TiB
_última atualização em agosto de 2023_ | ✅ | +| Base | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Guia GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Guia GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | CPU de 8+ núcleos
Debian 12/Ubuntu 22.04
16 GB RAM
mais de 4.5TB (NVMe preferido)<br />
_última atualização em 14 de maio de 2024_ | ✅ | +| Binance | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | CPU de 8 núcleos e 16 threads
Ubuntu 22.04
16GB+ RAM
NVMe SSD com mais de 14 TiB
_última atualização em 22 de junho de 2024_ | ✅ | +| Celo | [Guia Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe de 2 TiB ou mais<br />
_última atualização em agosto de 2023_ | ✅ | +| Ethereum | [Guia Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Frequência de clock maior que número de núcleos
Ubuntu 22.04
16GB+ RAM
Mais de 3TB (NVMe recomendado)
_última atualização em agosto de 2023_ | ✅ | +| Fantom | [Guia Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 13 TiB
_última atualização em agosto de 2023_ | ✅ | +| Gnosis | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | CPU de 6 núcleos e 12 threads
Ubuntu 22.04
16GB+ RAM
NVMe SSD com mais de 3 TiB
_última atualização em agosto de 2023_ | ✅ | +| Linea | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | CPU de 4+ núcleos
Ubuntu 22.04
16GB+ RAM
SSD NVMe de 1 TiB ou mais<br />
_última atualização em 2 de abril de 2024_ | ✅ | +| Optimism | [Guia Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Guia GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Guia GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | CPU de 4 núcleos e 8 threads
Ubuntu 22.04
16GB+ RAM
SSD NVMe com mais de 8 TiB
_última atualização em agosto de 2023_ | ✅ | +| Polygon | [Guia Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | CPU de 16 núcleos
Ubuntu 22.04
32GB+ RAM
SSD NVMe de 10 TiB ou mais<br />
_última atualização em agosto de 2023_ | ✅ | +| Scroll | [Guia Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Guia Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | CPU de 4 núcleos e 8 threads
Debian 12
16GB+ RAM
SSD NVMe com mais de 1 TiB
_última atualização em 3 de abril de 2024_ | ✅ | diff --git a/website/src/pages/pt/indexing/tap.mdx b/website/src/pages/pt/indexing/tap.mdx index 33f6583ea3c6..79de9a57e6ae 100644 --- a/website/src/pages/pt/indexing/tap.mdx +++ b/website/src/pages/pt/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Como migrar para o TAP +title: GraphTally Guide --- -Conheça o novo sistema de pagamentos do The Graph: **TAP — Timeline Aggregation Protocol** ("Protocolo de Agregação de Histórico"): um sistema de microtransações rápidas e eficientes, livre de confiança. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Visão geral -O [TAP](https://docs.rs/tap_core/latest/tap_core/index.html) é um programa modular que substituirá o sistema de pagamento Scalar atualmente em uso. Os recursos do TAP incluem: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Processamento eficiente de micropagamentos. - Uma camada de consolidações para transações e custos na chain. - Controle total de recibos e pagamentos para Indexadores, garantindo pagamentos por queries. - Pontes de ligação descentralizadas e livres de confiança, melhorando o desempenho do `indexer-service` para grupos de remetentes. -## Especificações +### Especificações -O TAP permite que um remetente faça múltiplos pagamentos a um destinatário — os **TAP Receipts** ("Recibos do TAP") — que agrega os pagamentos em um, o **RAV — Receipt Aggregate Voucher** (Prova de Recibos Agregados). Este pagamento agregado pode ser verificado na blockchain, reduzindo o número de transações e simplificando o processo de pagamento. 
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Para cada query, a ponte de ligação enviará um `signed receipt` ("recibo assinado") para armazenar na sua base de dados. Estes queries serão então agregados por um `tap-agent` através de uma solicitação. Depois, você receberá um RAV. Para atualizar um RAV, envie-o com novos recibos para gerar um novo RAV com valor maior. @@ -45,28 +45,28 @@ Tudo será executado automaticamente enquanto `tap-agent` e `indexer-agent` fore ### Contratos -| Contrato | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) | -| ------------------- | -------------------------------------------- | -------------------------------------------- | -| Verificador do TAP | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | -| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | -| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | +| Contrato | Mainnet Arbitrum (42161) | Arbitrum Sepolia (421614) | +| ------------------------- | -------------------------------------------- | -------------------------------------------- | +| Verificador do TAP | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Porta de Ligação -| Componente | Mainnet Edge and Note (Mainnet Arbitrum) | Testnet do Edge and Node (Arbitrum Sepolia) | -| ----------- | 
--------------------------------------------- | --------------------------------------------- | -| Remetente | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Signatários | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Agregador | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| Componente | Mainnet Edge and Node (Mainnet Arbitrum) | Testnet do Edge and Node (Arbitrum Sepolia) | +| -------------- | --------------------------------------------- | ------------------------------------------------ | +| Remetente | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signatários | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Agregador | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Requisitos +### Pré-requisitos -Além dos requisitos típicos para executar um indexador, é necessário um endpoint `tap-escrow-subgraph` para fazer queries de atualizações do TAP. É possível usar o The Graph Network para fazer queries ou se hospedar no seu `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. - [Subgraph do TAP do The Graph — Arbitrum Sepolia (para a testnet do The Graph)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) - [Subgraph do TAP do The Graph (para a mainnet do The Graph)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Nota: o `indexer-agent` atualmente não executa o indexamento deste subgraph como faz com o lançamento de subgraphs da rede. Portanto, ele deve ser anexado manualmente.
+> Nota: o `indexer-agent` atualmente não executa a indexação deste subgraph como faz com a implantação de subgraphs da rede. Portanto, ela deve ser anexada manualmente. ## Guia de migração @@ -79,7 +79,7 @@ O software necessário está [aqui](https://github.com/graphprotocol/indexer/blo 1. **Agente Indexador** - Siga o [mesmo processo](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Insira o novo argumento `--tap-subgraph-endpoint` para ativar os novos caminhos de código e ativar o resgate de RAVs do TAP. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Serviço Indexador** @@ -99,14 +99,14 @@ O software necessário está [aqui](https://github.com/graphprotocol/indexer/blo Para o mínimo de configuração, veja o exemplo abaixo: ```bash -# Você deve mudar *todos* os valores abaixo para mudar sua configuração. +# Mude *todos* os valores abaixo para combinar com a sua configuração. # -# O abaixo inclui valores globais da Graph Network, como visto aqui: +# A config abaixo inclui valores globais da graph network, conforme aqui: # # -# Fica a dica: se precisar carregar alguns variáveis do ambiente nesta configuração, você -# pode substituí-los com variáveis do ambiente. Por exemplo: pode-se substituir -# o abaixo por [PREFIX]_DATABASE_POSTGRESURL, onde PREFIX pode ser `INDEXER_SERVICE` ou `TAP_AGENT`: +# Fica a dica: se precisar carregar alguns valores do ambiente nesta config, você +# pode reescrever com variáveis de ambiente. 
Por exemplo, dá para trocar o seguinte +# com [PREFIX]_DATABASE_POSTGRESURL, onde PREFIX pode ser `INDEXER_SERVICE` ou `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" @@ -116,56 +116,56 @@ indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# A URL da base de dados Postgres usada para os componentes do indexador. -# A mesma base de dados usada pelo `indexer-agent`. Espera-se que o `indexer-agent` -# criará as tabelas necessárias. +# A URL do banco de dados Postgres usada para os componentes do indexador; o mesmo +# banco usado pelo `indexer-agent`. Espera-se que o `indexer-agent` crie +# as tabelas necessárias. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL to your graph-node's query endpoint +# URL para o endpoint de queries do seu graph-node query_url = "" -# URL to your graph-node's status endpoint +# URL para o endpoint de estado do seu graph-node status_url = "" [subgraphs.network] -# URL de query pro subgraph do Graph Network. +# URL de Query para o Subgraph da Graph Network query_url = "" -# Opcional, procure o lançamento no `graph-node` local, se localmente indexado. -# Vale a pena indexar o subgraph localmente. -# NOTA: Usar apenas `query_url` ou `deployment_id` +# Opcional, implantação para buscar no `graph-node` local, se indexada localmente. +# Recomenda-se indexar o Subgraph localmente. +# IMPORTANTE: Só use `query_url` ou `deployment_id` deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# URL de Query para o Subgraph da Escrow query_url = "" -# Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended.
-# NOTE: Use `query_url` or `deployment_id` only
+# Opcional, implantação para buscar no `graph-node` local, se indexada localmente.
+# Recomenda-se indexar o Subgraph localmente.
+# IMPORTANTE: Só use `query_url` ou `deployment_id`
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[blockchain]
-# ID de chain da rede que está a executar o Graph Network
+# A ID de chain da rede a executar o Graph Network
chain_id = 1337
-# Endereço de contrato do verificador de prova de agregação de recibos do TAP.
+# Endereço de contrato do verificador de RAV (Prova de Recibos Agregados) do TAP.
receipts_verifier_address = "0x2222222222222222222222222222222222222222"

########################################
-# Configurações específicas para o tap-agent #
+# Configurações específicas ao tap-agent #
########################################
[tap]
-# Esta é a quantia de taxas que você está disposto a arriscar. Por exemplo:
-# se o remetente parar de enviar RAVs por tempo suficiente e as taxas passarem
-# desta quantia, o indexer-service não aceitará mais queries deste remetente
-# até que as taxas sejam agregadas.
-# NOTA: Use strings para valores decimais, para evitar erros de arredondamento
-# Por exemplo:
-# max_amount_willing_to_lose_grt = "0,1"
+# Esta é a quantia de taxas que você pode arriscar a qualquer momento. Por exemplo,
+# se o remetente parar de fornecer RAVs por tempo suficiente e as taxas excederem
+# essa quantia, o serviço indexador vai parar de aceitar queries do remetente
+# até as taxas serem agregadas.
+# IMPORTANTE: Use strings de valores decimais, para evitar erros de arredondamento
+# por ex.:
+# max_amount_willing_to_lose_grt = "0.1"
max_amount_willing_to_lose_grt = 20

[tap.sender_aggregator_endpoints]
# Valor-Chave de todos os remetentes e seus endpoints agregadores
-# Por exemplo, o abaixo é para a ponte de ligação do testnet Edge & Node.
-0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://t +# Por exemplo, abaixo está o valor para o gateway da testnet do Edge & Node. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` Notas: diff --git a/website/src/pages/pt/indexing/tooling/graph-node.mdx b/website/src/pages/pt/indexing/tooling/graph-node.mdx index 370538b94e34..e8bfe3f3d612 100644 --- a/website/src/pages/pt/indexing/tooling/graph-node.mdx +++ b/website/src/pages/pt/indexing/tooling/graph-node.mdx @@ -2,7 +2,7 @@ title: Graph Node --- -O Node do The Graph (Graph Node) é o componente que indexa subgraphs e disponibiliza os dados resultantes a queries (consultas de dados) através de uma API GraphQL. Assim, ele é central ao stack dos indexers, e é crucial fazer operações corretas com um node Graph para executar um indexer com êxito. +O Graph Node é o componente que indexa subgraphs e disponibiliza os dados resultantes a queries (consultas de dados) através de uma API GraphQL. Assim, ele é central ao stack dos indexers, e é crucial fazer operações corretas com um Graph Node para executar um indexador com êxito. Isto fornece um resumo contextual do Graph Node e algumas das opções mais avançadas disponíveis para indexadores. Para mais instruções e documentação, veja o [repositório do Graph Node](https://github.com/graphprotocol/graph-node). @@ -26,15 +26,15 @@ Enquanto alguns subgraphs exigem apenas um node completo, alguns podem ter recur ### Nodes IPFS -Os metadados de lançamento de subgraph são armazenados na rede IPFS. O Graph Node acessa primariamente o node IPFS durante o lançamento do subgraph, para retirar o manifest e todos os arquivos ligados. Os indexadores de rede não precisam hospedar seu próprio node IPFS. Um node IPFS para a rede é hospedado em https://ipfs.network.thegraph.com. +Os metadados de implantação de subgraph são armazenados na rede IPFS. 
O Graph Node acessa primariamente o node IPFS durante a implantação do subgraph, para retirar o manifest e todos os arquivos ligados. Os indexadores de rede não precisam hospedar seu próprio node IPFS. Um node IPFS para a rede é hospedado em https://ipfs.network.thegraph.com. ### Servidor de métricas Prometheus O Graph Node pode, opcionalmente, logar métricas a um servidor de métricas Prometheus para permitir funções de relatórios e monitorado. -### Getting started from source +### Começando da fonte -#### Install prerequisites +#### Pré-requisitos para a instalação - **Rust** @@ -42,15 +42,15 @@ O Graph Node pode, opcionalmente, logar métricas a um servidor de métricas Pro - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Requisitos Adicionais para utilizadores de Ubuntu** — A execução de um Graph Node no Ubuntu pode exigir pacotes adicionais. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Configuração -1. Start a PostgreSQL database server +1. Inicie um servidor de banco de dados PostgreSQL ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Clone o repositório do [Graph Node](https://github.com/graphprotocol/graph-node) e execute `cargo build` para construir a fonte -3. Now that all the dependencies are setup, start the Graph Node: +3. Agora que todas as dependências estão configuradas, inicialize o Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -77,19 +77,19 @@ Veja um exemplo completo de configuração do Kubernetes no [repositório do ind Durante a execução, o Graph Node expõe as seguintes portas: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ----------------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | Servidor HTTP GraphQL
(para queries de subgraph) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | WS GraphQL
(para inscrições a subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(para gerir implantações) | / | \--admin-port | - | +| 8030 | API de estado de indexação do subgraph | /graphql | \--index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | \--metrics-port | - | > **Importante**: Cuidado ao expor portas publicamente — **portas de administração** devem ser trancadas a sete chaves. Isto inclui o endpoint JSON-RPC do Graph Node. ## Configurações avançadas do Graph Node -Basicamente, o Graph Node pode ser operado com uma única instância de Graph Node, um único banco de dados PostgreSQP, e os clientes de rede como exigidos pelos subgraphs a serem indexados. +Basicamente, o Graph Node pode ser operado com uma única instância de Graph Node, um único banco de dados PostgreSQL, e os clientes de rede conforme exigidos pelos subgraphs a serem indexados. Este setup pode ser escalado horizontalmente, com a adição de vários Graph Nodes e bancos de dados para apoiá-los. Utilizadores mais avançados podem tomar vantagem de algumas das capacidades de escala horizontal do Graph Node, assim como algumas das opções de configuração mais avançadas, através do arquivo `config.toml` e as variáveis de ambiente do Graph Node. @@ -114,13 +114,13 @@ A documentação completa do `config.toml` pode ser encontrada nos [documentos d #### Múltiplos Graph Nodes -A indexação de Graph Nodes pode ser escalada horizontalmente, com a execução de várias instâncias de Graph Node para separar indexação de queries em nodes diferentes. Isto é possível só com a execução de Graph Nodes, configurados com um `node_id` diferente na inicialização (por ex. no arquivo Docker Compose), que pode então ser usado no arquivo `config.toml` para especificar [nodes dedicados de query](#dedicated-query-nodes), [ingestores de blocos](#dedicated-block-ingestion") e separar subgraphs entre nódulos com [regras de lançamento](#deployment-rules). 
+A indexação de Graph Nodes pode ser escalada horizontalmente, com a execução de várias instâncias de Graph Node para separar a indexação dos queries em nodes diferentes. Para isto, basta executar Graph Nodes configurados com um `node_id` diferente na inicialização (por ex. no arquivo Docker Compose), que pode então ser usado no arquivo `config.toml` para especificar [nodes dedicados de query](#dedicated-query-nodes), [ingestores de blocos](#dedicated-block-ingestion) e separar subgraphs entre nodes com [regras de implantação](#deployment-rules).

> Note que vários Graph Nodes podem ser configurados para usar o mesmo banco de dados — que, por conta própria, pode ser escalado horizontalmente através do sharding.

#### Regras de lançamento

-Levando em conta vários Graph Nodes, é necessário gerir o lançamento de novos subgraphs para que o mesmo subgraph não seja indexado por dois nodes diferentes, o que levaria a colisões. Isto é possível regras de lançamento, que também podem especificar em qual `shard` os dados de um subgraph devem ser armazenados, caso seja usado o sharding de bancos de dados. As regras de lançamento podem combinar com o nome do subgraph e com a rede que o lançamento indexa para fazer uma decisão.
+Levando em conta vários Graph Nodes, é necessário gerir a implantação de novos subgraphs para que o mesmo subgraph não seja indexado por dois nodes diferentes, o que levaria a colisões. Isto é possível com regras de implantação, que também podem especificar em qual `shard` os dados de um subgraph devem ser armazenados, caso seja usado o sharding de bancos de dados. As regras de implantação podem combinar com o nome do subgraph e com a rede que a implantação indexa para tomar uma decisão.
Exemplo de configuração de regra de lançamento: @@ -132,13 +132,13 @@ shard = "vip" indexers = [ "index_node_vip_0", "index_node_vip_1" ] [[deployment.rule]] match = { network = "kovan" } -# No shard, so we use the default shard called 'primary' +# Sem shard, então usamos o shard padrão chamado 'primary' indexers = [ "index_node_kovan_0" ] [[deployment.rule]] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# Não tem 'match', então qualquer subgraph combina shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -167,7 +167,7 @@ Qualquer node cujo --node-id combina com a expressão regular será programado p Para a maioria dos casos de uso, um único banco de dados Postgres é suficiente para apoiar uma instância de graph-node. Quando uma instância de graph-node cresce mais que um único banco Postgres, é possível dividir o armazenamento dos dados do graph-node entre múltiplos bancos Postgres. Todos os bancos de dados, juntos, formam o armazenamento da instância do graph-node. Cada banco de dados individual é chamado de shard. -Os shards servem para dividir lançamentos de subgraph em múltiplos bancos de dados, e podem também ser configurados para usar réplicas a fim de dividir a carga de query entre bancos de dados. Isto inclui a configuração do número de conexões disponíveis do banco que cada `graph-node` deve manter em seu pool de conexão para cada banco, o que fica cada vez mais importante conforme são indexados mais subgraphs. +Os shards servem para dividir implantações de subgraph em múltiplos bancos de dados, e podem também ser configurados para usar réplicas a fim de dividir a carga de query entre bancos de dados. Isto inclui a configuração do número de conexões disponíveis do banco que cada `graph-node` deve manter em seu pool de conexão para cada banco, o que fica cada vez mais importante conforme são indexados mais subgraphs. 
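Como esboço mínimo de sharding (os nomes de shard, as URLs de conexão e os valores de `pool_size` abaixo são hipotéticos), a definição de dois shards no `config.toml` do Graph Node poderia ter esta forma:

```toml
# Esboço hipotético: um shard primário (o shard 'primary' é obrigatório)
# e um shard extra. URLs, credenciais e pool_size são apenas exemplos.
[store]
[store.primary]
connection = "postgresql://graph:${PGPASSWORD}@db-primary.example/graph"
pool_size = 10

[store.vip]
connection = "postgresql://graph:${PGPASSWORD}@db-vip.example/graph"
pool_size = 10
```

Uma regra de implantação pode então indicar `shard = "vip"` para armazenar os dados de certos subgraphs nesse shard em vez do `primary`.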
O sharding torna-se útil quando o seu banco de dados existente não aguenta o peso do Graph Node, e quando não é mais possível aumentar o tamanho do banco.

@@ -225,11 +225,11 @@ Os utilizadores a operar um setup de indexing escalado, com configurações avan

### Como gerir o Graph Node

-Dado um Graph Node (ou Nodes!) em execução, o desafio torna-se gerir subgraphs lançados entre estes nodes. O Graph Node tem uma gama de ferramentas para ajudar a direção de subgraphs.
+Com um Graph Node (ou Nodes!) em execução, o desafio torna-se gerir os subgraphs implantados entre estes nodes. O Graph Node oferece uma gama de ferramentas para ajudar na gestão de subgraphs.

#### Logging

-Os logs do Graph Node podem fornecer informações úteis, para debug e otimização — do Graph Node e de subgraphs específicos. O Graph Node apoia níveis diferentes de logs através da variável de ambiente `GRAPH_LOG`, com os seguintes níveis: `error`, `warn`, `info`, `debug` ou `trace`.
+Os registos do Graph Node podem fornecer informações úteis, para debug e otimização — do Graph Node e de subgraphs específicos. O Graph Node suporta níveis diferentes de logs através da variável de ambiente `GRAPH_LOG`, com os seguintes níveis: `error`, `warn`, `info`, `debug` ou `trace`.

Além disto, configurar o `GRAPH_LOG_QUERY_TIMING` para `gql` fornece mais detalhes sobre o processo de queries no GraphQL (porém, isto criará um grande volume de logs).

@@ -263,7 +263,7 @@ Há três partes separadas no processo de indexação:

- Processar eventos conforme os handlers apropriados (isto pode envolver chamar a chain para o estado, e retirar dados do armazenamento)
- Escrever os dados resultantes ao armazenamento

-Estes estágios são segmentados (por ex., podem ser executados em paralelo), mas são dependentes um no outro. Quando há demora em indexar, a causa depende do subgraph específico.
+Estes estágios são segmentados (por ex., podem ser executados em paralelo), porém dependentes um do outro.
Quando há demora em indexar, a causa depende do subgraph específico. Causas comuns de lentidão na indexação: @@ -276,18 +276,18 @@ Causas comuns de lentidão na indexação: - Atraso do próprio provedor em relação ao topo da chain - Atraso em retirar novos recibos do topo da chain do provedor -As métricas de indexação de subgraph podem ajudar a diagnosticar a causa raiz do atraso na indexação. Em alguns casos, o problema está no próprio subgraph, mas em outros, melhorar provedores de rede, reduzir a contenção no banco de dados, e outras melhorias na configuração podem aprimorar muito o desempenho da indexação. +As métricas de indexação de subgraph podem ajudar a diagnosticar a causa raiz do atraso na indexação. Em alguns casos, o problema está no próprio subgraph; mas em outros, melhorar provedores de rede, reduzir a contenção no banco de dados, e outras melhorias na configuração podem aprimorar muito o desempenho da indexação. #### Subgraphs falhos -É possível que subgraphs falhem durante a indexação, caso encontrem dados inesperados; algum componente não funcione como o esperado; ou se houver algum bug nos handlers de eventos ou na configuração. Geralmente, há dois tipos de falha: +É possível que subgraphs falhem durante a indexação, caso encontrem dados inesperados; algum componente não funcione como o esperado; ou se houver algum bug nos handlers de eventos ou na configuração. Geralmente, há dois tipos gerais de falha: - Falhas determinísticas: Falhas que não podem ser resolvidas com outras tentativas - Falhas não determinísticas: podem ser resumidas em problemas com o provedor ou algum erro inesperado no Graph Node. Quando ocorrer uma falha não determinística, o Graph Node reiniciará os handlers falhos e recuará gradualmente. Em alguns casos, uma falha pode ser resolvida pelo indexador (por ex. a indexação falhou por ter o tipo errado de provedor, e necessita do correto para continuar). Porém, em outros, é necessária uma alteração no código do subgraph. 
-> Falhas determinísticas são consideradas "finais", com uma Prova de Indexação (POI) gerada para o bloco falho; falhas não determinísticas não são finais, como há chances do subgraph superar a falha e continuar a indexar. Às vezes, o rótulo de "não determinístico" é incorreto e o subgraph não tem como melhorar do erro; estas falhas devem ser relatadas como problemas no repositório do Graph Node.
+> Falhas determinísticas são consideradas "finais", com uma Prova de Indexação (POI) gerada para o bloco falho; falhas não determinísticas não são finais, já que há chances de o subgraph superar a falha e continuar a indexar. Às vezes, o rótulo de "não determinístico" é incorreto e o subgraph não tem como se recuperar do erro; estas falhas devem ser relatadas como problemas no repositório do Graph Node.

#### Cache de blocos e chamadas

@@ -304,7 +304,7 @@ Caso haja uma suspeita de inconsistência no cache de blocos, como a falta de um

#### Erros e problemas de query

-Quando um subgraph for indexado, os indexadores podem esperar servir consultas através do endpoint dedicado de consultas do subgraph. Se o indexador espera servir volumes significantes de consultas, é recomendado um node dedicado a queries; e para volumes muito altos, podem querer configurar réplicas de shard para que os queries não impactem o processo de indexação.
+Depois que um subgraph for indexado, os indexadores podem esperar servir queries através do endpoint dedicado de queries do subgraph. Se o indexador espera servir volumes significativos de queries, é recomendado um node dedicado a queries; e para volumes muito altos de queries, vale a pena configurar réplicas de shard para que os queries não impactem o processo de indexação.

Porém, mesmo com um node dedicado a consultas e réplicas deste, certos queries podem demorar muito para executar; em alguns casos, aumentam o uso da memória e pioram o tempo de query para outros utilizadores.
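Para ilustrar os nodes dedicados a queries e as réplicas de shard, segue um esboço hipotético de `config.toml` (a regex de `node_id`, as URLs de conexão e os pesos são apenas exemplos, não uma configuração oficial):

```toml
# Esboço hipotético: nodes cujo node_id combina com a regex abaixo
# servem apenas queries (não indexam).
[general]
query = "query_node_.*"

# Shard primário com uma réplica de leitura; a carga de queries é
# distribuída entre a conexão principal e a réplica conforme o 'weight'.
[store.primary]
connection = "postgresql://graph:${PGPASSWORD}@db-primary.example/graph"
pool_size = 10

[store.primary.replicas.repl1]
connection = "postgresql://graph:${PGPASSWORD}@db-replica.example/graph"
pool_size = 10
weight = 1
```

Com isto, os nodes de indexação e os nodes de query podem compartilhar o mesmo banco enquanto a leitura pesada de queries recai sobre a réplica.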
@@ -342,4 +342,4 @@ Para subgraphs parecidos com o Uniswap, as tábuas `pair` e `token` são ótimas > Esta é uma funcionalidade nova, que estará disponível no Graph Node 0.29.x -Em certo ponto, o indexador pode querer remover um subgraph. É só usar o `graphman drop`, que apaga um lançamento e todos os seus dados indexados. O lançamento pode ser especificado como o nome de um subgraph, um hash IPFS `Qm..`, ou o namespace de banco de dados `sgdNNN`. Mais documentos sobre o processo [aqui](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +Em certo ponto, o indexador pode querer remover um subgraph. É só usar o `graphman drop`, que apaga uma implantação e todos os seus dados indexados. A implantação pode ser especificada como o nome de um subgraph, um hash IPFS `Qm..`, ou o namespace de banco de dados `sgdNNN`. Mais documentos sobre o processo [aqui](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). diff --git a/website/src/pages/pt/indexing/tooling/graphcast.mdx b/website/src/pages/pt/indexing/tooling/graphcast.mdx index e57b6b206900..84aa40b24cd5 100644 --- a/website/src/pages/pt/indexing/tooling/graphcast.mdx +++ b/website/src/pages/pt/indexing/tooling/graphcast.mdx @@ -11,7 +11,7 @@ Atualmente, o custo de transmitir informações para outros participantes de red O SDK (Kit de Programação de Software) do Graphcast permite aos programadores construir Rádios, que são aplicativos movidos a mexericos, que os Indexers podem executar por um certo propósito. Nós também pretendemos criar alguns Rádios (ou oferecer apoio para outros programadores/outras equipas que desejam construir Rádios) para os seguintes casos de uso: - Verificação em tempo real de integridade dos dados de um subgraph ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Condução de leilões e coordenação para a sincronização de subgraphs, substreams e dados do Firehose de outros Indexers. 
+- Condução de leilões e coordenação para a sincronização de subgraphs, substreams, e dados do Firehose de outros Indexadores. - Autorrelatos em analíticas ativas de queries, inclusive volumes de pedidos de subgraphs, volumes de taxas, etc. - Autorrelatos em analíticas de indexação, como tempo de indexação de subgraphs, custos de gas de handlers, erros encontrados, etc. - Autorrelatos em informações de stack, incluindo versão do graph-node, versão do Postgres, versão do cliente Ethereum, etc. diff --git a/website/src/pages/pt/resources/_meta-titles.json b/website/src/pages/pt/resources/_meta-titles.json index f5971e95a8f6..f6b3ef905da1 100644 --- a/website/src/pages/pt/resources/_meta-titles.json +++ b/website/src/pages/pt/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "Funções Adicionais", + "migration-guides": "Guias de Migração" } diff --git a/website/src/pages/pt/resources/benefits.mdx b/website/src/pages/pt/resources/benefits.mdx index 536f02bd4a05..5b5e565e381b 100644 --- a/website/src/pages/pt/resources/benefits.mdx +++ b/website/src/pages/pt/resources/benefits.mdx @@ -27,58 +27,57 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da ## Utilizador de Baixo Volume (menos de 100 mil queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $350 por mês | $0 | -| Custos de query | $0+ | $0 por mês | -| Tempo de engenharia | $400 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | 100 mil (Plano Grátis) | -| Custo por query | $0 | $0 | -| Infrastructure | Centralizada | Descentralizada | -| Redundância geográfica | $750+ por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $750+ | $0 | +| Comparação de Custos | Auto-hospedagem | The Graph 
Network | +| :-----------------------------: | :-------------------------------------: | :---------------------------------------------------------------: | +| Custo mensal de servidor\* | $350 por mês | $0 | +| Custos de query | $0+ | $0 por mês | +| Tempo de engenharia | $400 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | 100 mil (Plano Grátis) | +| Custo por query | $0 | $0 | +| Infraestrutura | Centralizada | Descentralizada | +| Redundância geográfica | $750+ por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $750+ | $0 | ## Utilizador de Volume Médio (cerca de 3 milhões de queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $350 por mês | $0 | -| Custos de query | $500 por mês | $120 por mês | -| Tempo de engenharia | $800 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | ~3 milhões | -| Custo por query | $0 | $0.00004 | -| Infrastructure | Centralizada | Descentralizada | -| Custo de engenharia | $200 por hora | Incluída | -| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $1.650+ | $120 | +| Comparação de Custos | Auto-hospedagem | The Graph Network | +| :-----------------------------: | :----------------------------------------: | :---------------------------------------------------------------: | +| Custo mensal de servidor\* | $350 por mês | $0 | +| Custos de query | $500 por mês | $120 por mês | +| Tempo de engenharia | $800 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | ~3 milhões | +| Custo por query | $0 | $0.00004 | +| Infraestrutura | 
Centralizada | Descentralizada | +| Custo de engenharia | $200 por hora | Incluída | +| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $1.650+ | $120 | ## Utilizador de Volume Alto (cerca de 30 milhões de queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $1.100 por mês, por node | $0 | -| Custos de query | $4.000 | $1,200 por mês | -| Número de nodes necessário | 10 | Não se aplica | -| Tempo de engenharia | $6.000 ou mais por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | Cerca de 30 milhões | -| Custo por query | $0 | $0.00004 | -| Infrastructure | Centralizada | Descentralizada | -| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $11.000+ | $1.200 | +| Comparação de Custos | Auto-hospedagem | The Graph Network | +| :-----------------------------: | :-----------------------------------------: | :---------------------------------------------------------------: | +| Custo mensal de servidor\* | $1.100 por mês, por node | $0 | +| Custos de query | $4.000 | $1,200 por mês | +| Número de nodes necessário | 10 | Não se aplica | +| Tempo de engenharia | $6.000 ou mais por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | Cerca de 30 milhões | +| Custo por query | $0 | $0.00004 | +| Infraestrutura | Centralizada | Descentralizada | +| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $11.000+ | $1.200 | \*com custos de backup incluídos: $50-$100 por mês Tempo de engenharia baseado numa hipótese de $200 
por hora -Reflete o custo ao consumidor de dados. Taxas de query ainda são pagas a Indexadores por queries do Plano -Grátis. +Reflete o custo ao consumidor de dados. Taxas de query ainda são pagas a Indexadores por queries do Plano Grátis. -Os custos estimados são apenas para subgraphs na Mainnet do Ethereum — os custos são maiores ao auto-hospedar um graph-node em outras redes. Alguns utilizadores devem atualizar o seu subgraph a uma versão mais recente. Até o fechamento deste texto, devido às taxas de gas do Ethereum, uma atualização custa cerca de 50 dólares. Note que as taxas de gás no [Arbitrum](/archived/arbitrum/arbitrum-faq/) são muito menores que as da mainnet do Ethereum. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curar um sinal em um subgraph é um custo opcional, único, e zero-líquido (por ex., $1 mil em um subgraph pode ser curado em um subgraph, e depois retirado — com potencial para ganhar retornos no processo). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## Zero Custos de Preparação e Mais Eficiência Operacional @@ -90,4 +89,4 @@ A rede descentralizada do The Graph permite que os utilizadores acessem redundâ Enfim: A Graph Network é mais barata e fácil de usar, e produz resultados melhores comparados à execução local de um graph-node. -Comece a usar a Graph Network hoje, e aprenda como [editar o seu subgraph na rede descentralizada do The Graph](/subgraphs/quick-start/). 
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/pt/resources/glossary.mdx b/website/src/pages/pt/resources/glossary.mdx index 4660c4d00ecf..d075e63e2c25 100644 --- a/website/src/pages/pt/resources/glossary.mdx +++ b/website/src/pages/pt/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossário - **The Graph:** Um protocolo descentralizado para indexação e query de dados. -- **Query:** Uma solicitação de dados. No The Graph, um query é uma solicitação por dados de um subgraph que será respondida por um Indexador. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL:** Uma linguagem de queries para APIs e um runtime (programa de execução) para realizar esses queries com os dados existentes. O The Graph usa a GraphQL para fazer queries de subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: Um URL que pode ser usado para fazer queries. O ponto final de execução para o Subgraph Studio é `https://api.studio.thegraph.com/query///`, e o do Graph Explorer é `https://gateway.thegraph.com/api//subgraphs/id/`. O ponto final do Graph Explorer é usado para fazer queries de subgraphs na rede descentralizada do The Graph. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph:** Uma API aberta que extrai, processa, e guarda dados de uma blockchain para facilitar queries via a GraphQL. 
Os programadores podem construir, lançar, e editar subgraphs na The Graph Network. Indexado, o subgraph está sujeito a queries por quem quiser solicitar. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexador**: Um participante da rede que executa nodes de indexação para indexar dados de blockchains e servir queries da GraphQL. - **Fluxos de Receita de Indexadores:** Os Indexadores são recompensados em GRT com dois componentes: Rebates de taxa de query e recompensas de indexação. - 1. **Rebates de Taxa de Query**: Pagamentos de consumidores de subgraphs por servir queries na rede. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Recompensas de Indexação**: São recebidas por Indexadores por indexar subgraphs, e geradas via a emissão anual de 3% de GRT. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - \*\*Auto-Stake (Stake Próprio) do Indexador: A quantia de GRT que os Indexadores usam para participar na rede descentralizada. A quantia mínima é 100.000 GRT, e não há limite máximo. - **Capacidade de Delegação**: A quantia máxima de GRT que um Indexador pode aceitar dos Delegantes. Os Indexadores só podem aceitar até 16 vezes o seu Auto-Stake, e mais delegações resultam em recompensas diluídas. Por exemplo: se um Indexador tem um Auto-Stake de 1 milhão de GRT, a capacidade de delegação é 16 milhões. Porém, os Indexadores só podem aumentar a sua Capacidade de Delegação se aumentarem também o seu Auto-Stake. -- **Indexador de Atualizações**: Um Indexador de reserva para queries não servidos por outros Indexadores na rede. Este Indexador não compete com outros Indexadores. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegante:** Um participante da rede que possui GRT e delega uma quantia para Indexadores, permitindo que esses aumentem o seu stake em subgraphs. Em retorno, os Delegantes recebem uma porção das Recompensas de Indexação recebidas pelos Indexadores por processar subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Taxa de Delegação**: Uma taxa de 0,5% paga por Delegantes quando delegam GRT a Indexadores. O GRT usado para pagar a taxa é queimado. -- **Curador:** Um participante da rede que identifica subgraphs de qualidade e sinaliza GRT para eles em troca de ações de curadoria. Quando os Indexadores resgatam as taxas de query em um subgraph, 10% é distribuído para os Curadores desse subgraph. Há uma correlação positiva entre a quantia de GRT sinalizada e o número de Indexadores a indexar um subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- \*\*Taxa de Curadoria: Uma taxa de 1% paga pelos Curadores quando sinalizam GRT em subgraphs. O GRT usado para pagar a taxa é queimado. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- Consumidor de Dados: Qualquer aplicativo ou utilizador que faz queries para um subgraph. 
+- **Data Consumer**: Any application or user that queries a Subgraph. -- \*\*Programador de Subgraph: Um programador que constrói e lança um subgraph à rede descentralizada do The Graph. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Manifest de Subgraph:** Um arquivo YAML que descreve o schema, fontes de dados, e outros metadados de um subgraph. [Veja um exemplo](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml). +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch:** Uma unidade de tempo na rede. Um epoch atualmente dura 6.646 blocos, ou cerca de um dia. -- \*\*Alocação: Um Indexador pode alocar o seu stake total em GRT (incluindo o stake dos Delegantes) em subgraphs editados na rede descentralizada do The Graph. As alocações podem ter estados diferentes: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Ativa:** Uma alocação é considerada ativa quando é criada on-chain. Isto se chama abrir uma alocação, e indica à rede que o Indexador está a indexar e servir consultas ativamente para um subgraph particular. Alocações ativas acumulam recompensas de indexação proporcionais ao sinal no subgraph, e à quantidade de GRT alocada. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. 
**Fechada**: Um Indexador pode resgatar as recompensas acumuladas em um subgraph selecionado ao enviar uma Prova de Indexação (POI) recente e válida. Isto se chama "fechar uma alocação". Uma alocação deve ter ficado aberta por, no mínimo, um epoch antes que possa ser fechada. O período máximo de alocação é de 28 epochs; se um indexador deixar uma alocação aberta por mais que isso, ela se torna uma alocação obsoleta. Quando uma alocação está **Fechada**, um Pescador ainda pode abrir uma disputa contra um Indexador por servir dados falsos.
+ 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent and valid Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.

-- **Subgraph Studio**: um dApp (aplicativo descentralizado) poderoso para a construção, lançamento e edição de subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.

- **Pescadores**: Um papel na Graph Network cumprido por participantes que monitoram a precisão e integridade dos dados servidos pelos Indexadores. Quando um Pescador identifica uma resposta de query ou uma POI que acreditam ser incorreta, ele pode iniciar uma disputa contra o Indexador. Se a disputa der um veredito a favor do Pescador, o Indexador é cortado, ou seja, perderá 2.5% do seu auto-stake de GRT. Desta quantidade, 50% é dado ao Pescador como recompensa pela sua vigilância, e os 50% restantes são retirados da circulação (queimados).
Este mecanismo é desenhado para encorajar Pescadores a ajudar a manter a confiança na rede ao garantir que Indexadores sejam responsabilizados pelos dados que providenciam. @@ -56,28 +56,28 @@ title: Glossário - Corte: Os Indexadores podem tomar cortes no seu self-stake de GRT por fornecer uma prova de indexação (POI) incorreta ou servir dados imprecisos. A percentagem de corte é um parâmetro do protocolo, atualmente configurado em 2,5% do auto-stake de um Indexador. 50% do GRT cortado vai ao Pescador que disputou os dados ou POI incorretos. Os outros 50% são queimados. -- **Recompensas de Indexação**: As recompensas que os Indexadores recebem por indexar subgraphs, distribuídas em GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Recompensas de Delegação**: As recompensas que os Delegantes recebem por delegar GRT a Indexadores, distribuídas em GRT. - **GRT**: O token de utilidade do The Graph, que oferece incentivos económicos a participantes da rede por contribuir. -- **POI (Prova de Indexação)**: Quando um Indexador fecha a sua alocação e quer resgatar as suas recompensas de indexação acumuladas em um certo subgraph, ele deve apresentar uma Prova de Indexação (POI) válida e recente. Os Pescadores podem disputar a POI providenciada por um Indexador; disputas resolvidas a favor do Pescador causam um corte para o Indexador. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: O componente que indexa subgraphs e disponibiliza os dados resultantes abertos a queries através de uma API GraphQL. 
Assim, ele é essencial ao stack de indexadores, e operações corretas de um Graph Node são cruciais para executar um indexador com êxito. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agente de Indexador**: Parte do stack do indexador. Ele facilita as interações do Indexer on-chain, inclusive registos na rede, gestão de lançamentos de Subgraph ao(s) seu(s) Graph Node(s), e gestão de alocações. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: Uma biblioteca para construir dApps baseados em GraphQL de maneira descentralizada. -- **Graph Explorer**: Um dApp desenhado para que participantes da rede explorem subgraphs e interajam com o protocolo. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: Uma ferramenta de interface de comando de linha para construções e lançamentos no The Graph. - **Período de Recarga**: O tempo restante até que um Indexador que mudou os seus parâmetros de delegação possa fazê-lo novamente. -- Ferramentas de Transferência para L2: Contratos inteligentes e interfaces que permitem que os participantes na rede transfiram ativos relacionados à rede da mainnet da Ethereum ao Arbitrum One. Os participantes podem transferir GRT delegado, subgraphs, ações de curadoria, e o Auto-Stake do Indexador. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. 
Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Atualização de um subgraph**: O processo de lançar uma nova versão de subgraph com atualizações ao manifest, schema e mapeamentos do subgraph. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migração**: O processo de movimentar ações de curadoria da versão antiga de um subgraph à versão nova do mesmo (por ex., quando a v.0.0.1 é atualizada à v.0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx index 165055c46822..436e74de6f60 100644 --- a/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/pt/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: Guia de Migração do AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Isto permitirá que os programadores de subgraph usem recursos mais novos da linguagem AS e da sua biblioteca normal. +That will enable Subgraph developers to use newer features of the AS language and standard library. 
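As a taste of the stricter null-safety the newer compiler pushes you toward, here is a minimal TypeScript-style sketch; the `Entity` class and `load()` helper are hypothetical stand-ins for generated Subgraph bindings, not `graph-ts` APIs:

```typescript
// Hypothetical stand-in for a generated entity with a store lookup that may miss.
class Entity {
  count: number = 0;
}

function load(id: string): Entity | null {
  // A real mapping would read from the store; this sketch only finds "known".
  return id === "known" ? new Entity() : null;
}

function handleEvent(id: string): number {
  const entity = load(id);
  // Safe version: early return instead of a runtime crash on null.
  if (entity === null) {
    return 0;
  }
  entity.count += 1;
  return entity.count;
}
```

Under the older compiler, calling a method on `load(id)` without the null check would slip through to a runtime crash; the newer version rejects it at compile time.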
This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already on a version higher than (or equal to) that, you've already been using version `0.19.10` of AssemblyScript 🙂

-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.

## Recursos

@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `

## Como atualizar?

-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:

```yaml
...
dataSources:
  ...
  mapping:
    ...
-    apiVersion: 0.0.6
+    apiVersion: 0.0.9
    ...
```

@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null

maybeValue.aMethod()
```

-Se não tiver certeza de qual escolher, é sempre bom usar a versão segura. Se o valor não existir, pode fazer uma declaração if precoce com um retorno no seu handler de subgraph.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early if statement with a return in your Subgraph handler.

### Sombreamento Variável

@@ -132,7 +132,7 @@ Renomeie as suas variáveis duplicadas, se tinha o sombreamento variável.

### Comparações de Nulos

-Ao fazer a atualização no seu subgraph, às vezes aparecem erros como este:
+When upgrading your Subgraph, you might sometimes get errors like these:

```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)

wrapper.n = wrapper.n + x // não dá erros de tempo de compilação como deveria
```

-Nós abrimos um problema no compilador AssemblyScript para isto, mas por enquanto, se fizer estes tipos de operações nos seus mapeamentos de subgraph, vale mudá-las para fazer uma checagem de anulação antes delas.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.

```typescript
let wrapper = new Wrapper(y)

@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```

-Ele fará a compilação, mas quebrará no tempo de execução porque o valor não foi inicializado. Tenha certeza de que o seu subgraph inicializou os seus valores, como assim:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this:

```typescript
var value = new Type() // initialized

diff --git a/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx
index 7b94db58a11d..d0d8c2a80204 100644
--- a/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/pt/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,5 +1,5 @@
---
-title: Guia de migração de Validações GraphQL
+title: GraphQL Validations Migration Guide
---

Em breve, o `graph-node` apoiará a cobertura total da [especificação de Validações GraphQL](https://spec.graphql.org/June2018/#sec-Validation).

@@ -20,7 +20,7 @@ Para cumprir tais validações, por favor siga o guia de migração.

Pode usar a ferramenta de migração em CLI para encontrar e consertar quaisquer problemas nas suas operações no GraphQL.
De outra forma, pode atualizar o endpoint do seu cliente GraphQL para usar o endpoint `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testar os seus queries perante este endpoint ajudará-lhe a encontrar os problemas neles presentes.

-> Nem todos os Subgraphs precisam ser migrados; se usar o [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) ou o [Gerador de Código GraphQL](https://the-guild.dev/graphql/codegen), eles já garantirão que os seus queries sejam válidos.
+> Not all Subgraphs will need to be migrated. If you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.

## Ferramenta CLI de migração

@@ -256,7 +256,8 @@ query {
}
```

-**Conflicting fields with arguments (#OverlappingFieldsCanBeMergedRule)** (Campos em conflito com argumentos)
+**Conflicting fields with arguments (#OverlappingFieldsCanBeMergedRule)**
+(Campos em conflito com argumentos)

```graphql
# Argumentos diferentes podem levar a dados diferentes,
@@ -465,10 +466,10 @@ Estas referências desconhecidas devem ser consertadas:

- caso contrário, remova

### Fragment: invalid spread or definition

- (Fragment: espalhamento ou definição inválidos)
-**Invalid Fragment spread (#PossibleFragmentSpreadsRule)** (Espalhamento de fragment inválido)
+**Invalid Fragment spread (#PossibleFragmentSpreadsRule)**
+(Espalhamento de fragment inválido)

Um Fragment não pode ser espalhado em um tipo não aplicável.

@@ -508,7 +509,8 @@ fragment inlineFragOnScalar on Dog {

### Uso de Diretivas

-**Directive cannot be used at this location (#KnownDirectivesRule)** (A diretiva não pode ser usada neste local)
+**Directive cannot be used at this location (#KnownDirectivesRule)**
+(A diretiva não pode ser usada neste local)

Apenas diretivas GraphQL (`@...`) apoiadas pela API do The Graph podem ser usadas.
@@ -525,7 +527,8 @@ query {

_Nota: `@stream`, `@live`, e `@defer` não têm apoio._

-**Directive can only be used once at this location (#UniqueDirectivesPerLocationRule)** (A diretiva só pode ser usada neste local uma vez)
+**Directive can only be used once at this location (#UniqueDirectivesPerLocationRule)**
+(A diretiva só pode ser usada neste local uma vez)

As diretivas apoiadas pelo The Graph só podem ser usadas uma vez por local.

diff --git a/website/src/pages/pt/resources/roles/curating.mdx b/website/src/pages/pt/resources/roles/curating.mdx
index 582a7926b9ee..0bdc3248b7be 100644
--- a/website/src/pages/pt/resources/roles/curating.mdx
+++ b/website/src/pages/pt/resources/roles/curating.mdx
@@ -2,37 +2,37 @@ title: Curadorias
---

-Curadores são importantes para a economia descentralizada do The Graph. Eles utilizam o seu conhecimento do ecossistema web3 para avaliar e sinalizar nos subgraphs que devem ser indexados pela Graph Network. Através do Graph Explorer, Curadores visualizam dados de rede para tomar decisões sobre sinalizações. Em troca, a Graph Network recompensa Curadores que sinalizam em subgraphs de alta qualidade com uma parte das taxas de query geradas por estes subgraphs. A quantidade de GRT sinalizada é uma das considerações mais importantes para Indexadores ao determinar quais subgraphs indexar.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index.

## O que a Sinalização Significa para a Graph Network?

-Antes que consumidores possam indexar um subgraph, ele deve ser indexado.
É aqui que entra a curadoria. Para que Indexadores ganhem taxas de query substanciais em subgraphs de qualidade, eles devem saber quais subgraphs indexar. Quando Curadores sinalizam um subgraph, isto diz aos Indexadores que um subgraph está em demanda e tem qualidade suficiente para ser indexado. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Os Curadores trazem eficiência à Graph Network, e a [sinalização](#how-to-signal) é o processo que curadores usam para avisar aos Indexadores que um subgraph é bom para indexar. Os Indexadores podem confiar no sinal de um Curador, porque ao sinalizar, os Curadores mintam uma ação de curadoria para o subgraph, o que concede aos Curadores uma porção das futuras taxas de query movidas pelo subgraph. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Sinais de curador são representados como tokens ERC20 chamados de Ações de Curadoria do Graph (GCS). Quem quiser ganhar mais taxas de query devem sinalizar o seu GRT a subgraphs que apostam que gerará um fluxo forte de taxas á rede. Curadores não podem ser cortados por mau comportamento, mas há uma taxa de depósito em Curadores para desincentivar más decisões que possam ferir a integridade da rede. Curadores também ganharão menos taxas de query se curarem um subgraph de baixa qualidade, já que haverão menos queries a processar ou menos Indexadores para processá-las. 
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.

-O [Indexador de Atualização do Nascer do Sol](/sunrise/#what-is-the-upgrade-indexer) garante a indexação de todos os subgraphs; sinalizar GRT em um subgraph específico atrairá mais Indexadores a ele. Este incentivo para Indexadores através da curadoria visa melhorar a qualidade do serviço de queries através da redução de latência e do aprimoramento da disponibilidade de rede.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-Ao sinalizar, Curadores podem decidir entre sinalizar numa versão específica do subgraph ou sinalizar com a automigração. Caso sinalizem com a automigração, as ações de um curador sempre serão atualizadas à versão mais recente publicada pelo programador. Se decidirem sinalizar numa versão específica, as ações sempre permanecerão nesta versão específica.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer.
If they decide to signal on a specific version instead, shares will always stay on this specific version. -Se precisar de ajuda com a curadoria para melhorar a qualidade do serviço, peça ajuda à equipa da Edge Node em support@thegraph.zendesk.com e especifique os subgraphs com que precisa de assistência. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Os indexadores podem achar subgraphs para indexar com base em sinais de curadoria que veem no Graph Explorer (imagem abaixo). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Subgraphs do Explorer](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Como Sinalizar -Na aba "Curator" (Curador) do Graph Explorer, os curadores podem sinalizar e tirar sinal de certos subgraphs baseados nas estatísticas de rede. [Clique aqui](/subgraphs/explorer/) para um passo-a-passo deste processo no Graph Explorer. +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Um curador pode escolher sinalizar uma versão específica de subgraph, ou pode automaticamente migrar o seu sinal à versão mais recente desse subgraph. Ambas estratégias são válidas, e vêm com as suas próprias vantagens e desvantagens. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. -Sinalizar numa versão específica serve muito mais quando um subgraph é usado por vários dApps. 
Um dApp pode precisar atualizar o subgraph regularmente com novos recursos; outro dApp pode preferir usar uma versão mais antiga, porém melhor testada. Na curadoria inicial, é incorrida uma taxa de 1%.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.

Ter um sinal que migra automaticamente à build mais recente de um subgraph pode ser bom para garantir o acúmulo de taxas de consulta. Toda vez que cura, é incorrida uma taxa de 1% de curadoria. Também pagará uma taxa de 0.5% em toda migração. É recomendado que programadores de subgraphs evitem editar novas versões com frequência - eles devem pagar uma taxa de curadoria de 0.5% em todas as ações de curadoria auto-migradas.

-> \*\*Nota: O primeiro endereço a sinalizar um subgraph particular é considerado o primeiro curador e deverá realizar tarefas muito mais intensivas em gas do que o resto dos curadores seguintes — porque o primeiro curador inicializa os tokens de ação de curadoria, inicializa o bonding curve, e também transfere tokens no proxy do Graph.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.

## Como Sacar o Seu GRT

@@ -40,39 +40,39 @@ Curadores têm a opção de sacar o seu GRT sinalizado a qualquer momento.

Ao contrário do processo de delegação, se decidir sacar o seu GRT sinalizado, você não precisará esperar um período de recarga, e receberá a quantidade completa (menos a taxa de curadoria de 1%).
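The curation taxes cited above (1% on signaling, 0.5% on auto-migration) can be sketched numerically. This is a plain-arithmetic illustration with made-up amounts; it deliberately ignores bonding-curve share pricing:

```typescript
const CURATION_TAX = 0.01;      // 1% tax, burned when GRT is signaled on a Subgraph
const AUTO_MIGRATE_TAX = 0.005; // 0.5% tax when shares auto-migrate to a new version

function afterSignalTax(grt: number): number {
  // GRT backing curation shares after the 1% signal tax is burned.
  return grt * (1 - CURATION_TAX);
}

function afterAutoMigrateTax(grt: number): number {
  // GRT remaining after the 0.5% auto-migration tax is burned.
  return grt * (1 - AUTO_MIGRATE_TAX);
}

// Signaling 10,000 GRT burns 100 GRT; an auto-migration then burns ~49.5 more.
const signaled = afterSignalTax(10_000);        // ≈ 9,900 GRT
const migrated = afterAutoMigrateTax(signaled); // ≈ 9,850.5 GRT
console.log(signaled, migrated);
```

This is also why frequent version publishes are discouraged: every auto-migration of curator shares burns another 0.5%.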
-Quando um curador retira o seu sinal, Indexadores podem escolher continuar a indexar o subgraph, mesmo se não houver no momento nenhum GRT sinalizado.
+Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.

-Porém, é recomendado que curadores deixem o seu GRT no lugar, não apenas para receber uma porção das taxas de query, mas também para garantir a confiança e disponibilidade do subgraph.
+However, it is recommended that curators leave their signaled GRT in place, not only to receive a portion of the query fees, but also to ensure the reliability and uptime of the Subgraph.

## Riscos

1. O mercado de consulta é jovem por natureza no The Graph, e há sempre o risco do seu rendimento anual ser menor que o esperado devido às dinâmicas nascentes do mercado.
-2. Taxa de Curadoria - Quando um curador sinaliza GRT em um subgraph, ele incorre uma taxa de curadoria de 1%, que é queimada.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. Um subgraph pode falhar devido a um erro de código. Um subgraph falho não acumula taxas de consulta. Portanto, espere até o programador consertar o erro e lançar uma nova versão.
-   - Caso se inscreva à versão mais recente de um subgraph, suas ações migrarão automaticamente a esta versão nova. Isto incorrerá uma taxa de curadoria de 0.5%.
-   - Se sinalizou em um subgraph específico e ele falhou, deverá queimar as suas ações de curadoria manualmente.
Será então possível sinalizar na nova versão do subgraph, o que incorre uma taxa de curadoria de 1%. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Perguntas Frequentes sobre Curadoria ### 1. Qual a % das taxas de query que os Curadores ganham? -Ao sinalizar em um subgraph, ganhará parte de todas as taxas de query geradas pelo subgraph. 10% de todas as taxas de curadoria vão aos Curadores, pro-rata às suas ações de curadoria. Estes 10% são sujeitos à governança. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Como decidir quais subgraphs são de qualidade alta para sinalizar? +### 2. How do I decide which Subgraphs are high quality to signal on? 
-Achar subgraphs de alta qualidade é uma tarefa complexa, mas o processo pode ser abordado de várias formas diferentes. Como Curador, procure subgraphs confiáveis que movem volumes de query. Um subgraph confiável pode ser valioso se for completo, preciso, e apoiar as necessidades de dados de um dApp. Um subgraph mal arquitetado pode precisar de revisões ou reedições, além de correr risco de falhar. É importante que os Curadores verifiquem a arquitetura ou código de um subgraph, para averiguar se ele é valioso. Portanto:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:

-- Os curadores podem usar o seu conhecimento de uma rede para tentar adivinhar como um subgraph individual pode gerar um volume maior ou menor de queries no futuro
-- Os curadores também devem entender as métricas disponíveis através do Graph Explorer. Métricas como o volume de queries passados e a identidade do programador do subgraph podem ajudar a determinar se um subgraph vale ou não o sinal.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future.
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signaling on.

-### 3. Qual o custo de atualizar um subgraph?
+### 3. What’s the cost of updating a Subgraph?
-Migrar as suas ações de curadoria a uma nova versão de subgraph incorre uma taxa de curadoria de 1%. Os curadores podem escolher se inscrever na versão mais nova de um subgraph. Quando ações de curadores são automigradas a uma nova versão, os Curadores também pagarão metade da taxa de curadoria, por ex., 0.5%, porque a atualização de subgraphs é uma ação on-chain que custa gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.

-### 4. Com que frequência posso atualizar o meu subgraph?
+### 4. How often can I update my Subgraph?

-Não atualize os seus subgraphs com frequência excessiva. Veja a questão acima para mais detalhes.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.

### 5. Posso vender as minhas ações de curadoria?
+> Para evitar isso no futuro, recomendamos que tenha cuidado ao escolher um Indexador. Para aprender como selecionar um indexador, confira a seção Delegar no Graph Explorer. -## How to Withdraw Using Graph Explorer +## Como Retirar uma Delegação com o Graph Explorer ### Passo a Passo -1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. Visite o [Graph Explorer](https://thegraph.com/explorer). Certifique-se que está no Explorer, e **não** no Subgraph Studio. -2. Click on your profile. You can find it on the top right corner of the page. +2. Clique no seu perfil, no canto superior direito da página. + - Verifique se a sua carteira está conectada. Se não estiver, o botão "connect" (conectar) aparecerá no lugar. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. +3. Já no seu perfil, clique na aba "Delegating" (Delegação). Nessa aba, é possível visualizar a lista de Indexadores para os quais já delegou. -3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +4. Clique no indexador do qual deseja retirar os seus tokens. + - Indique o Indexador específico, pois ele terá que ser encontrado novamente para fazer a retirada. -4. Click on the Indexer from which you wish to withdraw your tokens. +5. Selecione a opção "Undelegate" (Retirar Delegação) nos três pontos ao lado do Indexador, ao lado direito. Conforme a imagem abaixo: - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + ![Botão de Retirar Delegação](/img/undelegate-button.png) -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +6. 
Após cerca de [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 dias), volte à seção "Delegate" (delegar) e localize o indexador específico do qual retirou a sua delegação. - ![Undelegate button](/img/undelegate-button.png) +7. Após encontrar o Indexador, clique nos três pontos ao lado dele e retire todos os seus tokens. -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +## Como Retirar uma Delegação com o Arbiscan -7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. - -## How to Withdraw Using Arbiscan - -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Esse processo é primariamente útil se estiver com problemas na interface do Graph Explorer. ### Passo a Passo -1. Find your delegation transaction on Arbiscan. - - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) - -2. Navigate to "Transaction Action" where you can find the staking extension contract: +1. Encontre a sua transação de delegação no Arbiscan. + - Aqui está um [exemplo de transação pelo Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) +2. Navegue até "Transaction Action" (Ação de Transação), onde poderá encontrar o contrato da extensão de staking: + - [Este é o contrato de extensão de staking do exemplo listado acima](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Em seguida, clique em "Contract" (Contrato). 
![Aba de contrato no Arbiscan, entre NFT Transfers (Transferências de NFT) e Events (Eventos)](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. +4. Role até o final e copie a ABI do Contrato. Deve haver um pequeno botão próximo a ela que permite copiar tudo. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. Clique no seu botão de perfil, no canto superior direito da página. Se ainda não criou uma conta, faça isso logo. -6. Once you're in your profile, click on "Custom ABI”. +6. Já no seu perfil, clique em "Custom ABI" (Personalizar ABI). -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Cole a ABI personalizada que copiou do contrato da extensão de staking e adicione a ABI personalizada para o endereço: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**endereço de amostra**) -8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Volte para o [contrato de extensão de staking](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Agora, chame a função `unstake` na [aba "Write as Proxy" (Escrever como Proxy)](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), que foi adicionada graças à ABI personalizada, com o número de tokens que você delegou. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. 
You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Se não souber quantos tokens delegou, chame `getDelegation` na aba "Read Custom" (Ler Personalização). Será necessário colar tanto o seu endereço (endereço de delegante) quanto o do indexador para o qual você delegou, conforme na seguinte imagem: - ![Both of the addresses needed](/img/get-delegate.png) + ![Ambos os endereços necessários](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - Isto retornará três números. O primeiro número é a quantidade de staking que você pode retirar. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. Após chamar `unstake`, você pode retirar o stake após, em média, 28 epochs (28 dias) com a função `withdraw`. -11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. É possível ver o quanto terá disponível para retirar, ao chamar `getWithdrawableDelegatedTokens` no "Read Custom" e repassar a sua tupla de delegação. Veja a imagem abaixo: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Chame \`getWithdrawableDelegatedTokens\` para ver a quantia de tokens que pode ser retirada](/img/withdraw-available.png) ## Outros Recursos -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. +Para delegar com êxito, consulte a [documentação de delegação](/resources/roles/delegating/delegating/) e confira a seção de delegação no Graph Explorer. 
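Both withdrawal flows above hinge on the same thawing window. A small sketch of the timeline (this approximates an epoch as one day, per the "28 epochs (28 days)" figure above; actual epoch boundaries are determined onchain):

```python
from datetime import datetime, timedelta

# Sketch of the undelegation timeline described above: after undelegating,
# tokens thaw for roughly 28 epochs before `withdraw` can succeed.
# Epoch length is approximated as one day here; treat the result as an estimate.
THAWING_EPOCHS = 28
EPOCH_LENGTH = timedelta(days=1)

def withdrawable_after(undelegated_at: datetime) -> datetime:
    """Earliest estimated time the undelegated GRT can be withdrawn."""
    return undelegated_at + THAWING_EPOCHS * EPOCH_LENGTH

print(withdrawable_after(datetime(2024, 1, 1)))  # 2024-01-29 00:00:00
```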
diff --git a/website/src/pages/pt/resources/subgraph-studio-faq.mdx b/website/src/pages/pt/resources/subgraph-studio-faq.mdx index 57c66e49c2e0..161340865f69 100644 --- a/website/src/pages/pt/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/pt/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Perguntas Frequentes do Subgraph Studio ## 1. O que é o Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Como criar uma Chave de API? @@ -12,20 +12,20 @@ Para criar uma API, navegue até o Subgraph Studio e conecte a sua carteira. Log ## 3. Posso criar várias Chaves de API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +Sim! Pode criar mais de uma Chave de API para usar em projetos diferentes. Confira [aqui](https://thegraph.com/studio/apikeys/). ## 4. Como restringir um domínio para uma Chave de API? Após criar uma Chave de API, na seção de Segurança (Security), pode definir os domínios que podem consultar uma Chave de API específica. -## 5. Posso transferir meu subgraph para outro dono? +## 5. Can I transfer my Subgraph to another owner? -Sim. Subgraphs editados no Arbitrum One podem ser transferidos para uma nova carteira ou uma Multisig. Para isto, clique nos três pontos próximos ao botão 'Publish' (Publicar) na página de detalhes do subgraph e selecione 'Transfer ownership' (Transferir titularidade). +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note que após a transferência, não poderá mais ver ou alterar o subgraph no Studio. 
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Se eu não for o programador do subgraph que quero usar, como encontro URLs de query para subgraphs? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Lembre-se que, mesmo se construir um subgraph por conta própria, ainda poderá criar uma chave de API e consultar qualquer subgraph publicado na rede. Estes queries através da nova chave API são pagos, como quaisquer outros na rede. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. diff --git a/website/src/pages/pt/resources/tokenomics.mdx b/website/src/pages/pt/resources/tokenomics.mdx index f5994ac88795..5126fa077fec 100644 --- a/website/src/pages/pt/resources/tokenomics.mdx +++ b/website/src/pages/pt/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: A Graph Network é incentivada por uma tokenomia (economia de token ## Visão geral -O The Graph é um protocolo descentralizado que permite acesso fácil a dados de blockchain. 
Ele indexa dados de blockchain da mesma forma que o Google indexa a web; se já usou um dApp (aplicativo descentralizado) que resgata dados de um subgraph, você provavelmente já interagiu com o The Graph. Hoje, milhares de [dApps populares](https://thegraph.com/explorer) no ecossistema da Web3 usam o The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Especificações @@ -24,9 +24,9 @@ Há quatro participantes primários na rede: 1. Delegantes — Delegam GRT aos Indexadores e protegem a rede -2. Curadores — Encontram os melhores subgraphs para Indexadores +2. Curators - Find the best Subgraphs for Indexers -3. Programadores — Constroem e consultam subgraphs em queries +3. Developers - Build & query Subgraphs 4. Indexadores — Rede de transporte de dados em blockchain @@ -36,7 +36,7 @@ Pescadores e Árbitros também são integrais ao êxito da rede através de outr ## Delegantes (Ganham GRT passivamente) -Os Delegantes delegam GRT a Indexadores, aumentando o stake do Indexador em subgraphs na rede. Em troca, os Delegantes ganham uma porcentagem de todas as taxas de query e recompensas de indexação do Indexador. Cada Indexador determina a porção que será recompensada aos Delegantes de forma independente, criando competição entre Indexadores para atrair Delegantes. Muitos Indexadores oferecem entre 9 e 12% ao ano. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. 
Most Indexers offer between 9-12% annually. Por exemplo, se um Delegante delegasse 15.000 GRT a um Indexador que oferecesse 10%, o Delegante receberia cerca de 1.500 GRT em recompensas por ano. @@ -46,25 +46,25 @@ Quem ler isto pode tornar-se um Delegante agora mesmo na [página de participant ## Curadores (Ganham GRT) -Os Curadores identificam subgraphs de alta qualidade e os "curam" (por ex., sinalizam GRT neles) para ganhar ações de curadoria, que garantem uma porção de todas as taxas de query futuras geradas pelo subgraph. Enquanto qualquer participante independente da rede pode ser um Curador, os programadores de subgraphs tendem a ser os primeiros Curadores dos seus próprios subgraphs, pois querem garantir que o seu subgraph seja indexado. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Desde 11 de abril de 2024, os programadores de subgraphs podem curar o seu subgraph com, no mínimo, 3.000 GRT. Porém, este número pode ser impactado pela atividade na rede e participação na comunidade. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Os Curadores pagam uma taxa de curadoria de 1% ao curar um subgraph novo. Esta taxa de curadoria é queimada, de modo a reduzir a reserva de GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Programadores -Os programadores constroem e fazem queries em subgraphs para retirar dados da blockchain. 
Como os subgraphs têm o código aberto, os programadores podem carregar dados da blockchain em seus dApps com queries nos subgraphs existentes. Os programadores pagam por queries feitos em GRT, que é distribuído aos participantes da rede. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Como criar um Subgraph +### Creating a Subgraph -Para indexar dados na blockchain, os programadores podem [criar um subgraph](]/developing/creating-a-subgraph/) — um conjunto de instruções para Indexadores sobre quais dados devem ser servidos aos consumidores. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Depois que os programadores tiverem criado e testado o seu subgraph, eles poderão [editá-lo](/subgraphs/developing/publishing/publishing-a-subgraph/) na rede descentralizada do The Graph. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Como fazer queries um Subgraph existente +### Querying an existing Subgraph -Depois que um subgraph for [editado](/subgraphs/developing/publishing/publishing-a-subgraph/) na rede descentralizada do The Graph, qualquer um poderá criar uma chave API, depositar GRT no seu saldo de cobrança, e consultar o subgraph em um query. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
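Concretely, querying a published Subgraph is an HTTP POST of a GraphQL document to a gateway URL that embeds your API key. A minimal sketch (the URL shape follows the currently documented gateway format; the key and Subgraph ID below are placeholders, not real values):

```python
import json

# Hypothetical sketch of querying a published Subgraph through the gateway.
# API_KEY and SUBGRAPH_ID are placeholders — copy the exact query URL from
# Graph Explorer or Subgraph Studio rather than constructing it by hand.
API_KEY = "YOUR_API_KEY"      # created in Subgraph Studio
SUBGRAPH_ID = "SUBGRAPH_ID"   # from the Subgraph's details page

url = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"
payload = json.dumps({"query": "{ _meta { block { number } } }"})

# e.g. requests.post(url, data=payload,
#                    headers={"Content-Type": "application/json"})
print(url)
```

Posting `payload` to `url` with any HTTP client returns the query result as JSON; the `_meta` block-number query is a common connectivity check.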
Os Subgraphs [recebem queries pelo GraphQL](/subgraphs/querying/introduction/), e as taxas de query são pagas em GRT no [Subgraph Studio](https://thegraph.com/studio/). As taxas de query são distribuídas a participantes da rede com base nas suas contribuições ao protocolo. @@ -72,27 +72,27 @@ Os Subgraphs [recebem queries pelo GraphQL](/subgraphs/querying/introduction/), ## Indexadores (Ganham GRT) -Os Indexadores são o núcleo do The Graph: operam o equipamento e o software independentes que movem a rede descentralizada do The Graph. Eles servem dados a consumidores baseado em instruções de subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Os Indexadores podem ganhar recompensas em GRT de duas maneiras: -1. **Taxas de query**: GRT pago, por programadores ou utilizadores, para queries de dados de subgraph. Taxas de query são distribuídas diretamente a Indexadores conforme a função de rebate exponencial (veja o GIP [aqui](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Recompensas de indexação**: a emissão anual de 3% é distribuída aos Indexadores com base no número de subgraphs que indexam. Estas recompensas os incentivam a indexar subgraphs, às vezes antes das taxas de query começarem, de modo a acumular e enviar Provas de Indexação (POIs) que verificam que indexaram dados corretamente. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. 
These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Cada subgraph recebe uma porção da emissão total do token na rede, com base na quantia do sinal de curadoria do subgraph. Essa quantia é então recompensada aos Indexadores com base no seu stake alocado no subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. Para executar um node de indexação, os Indexadores devem fazer um stake de 100.000 GRT ou mais com a rede. Os mesmos são incentivados a fazer um stake de GRT, proporcional à quantidade de queries que servem. -Os Indexadores podem aumentar suas alocações de GRT nos subgraphs ao aceitar delegações de GRT de Delegantes; também podem aceitar até 16 vezes a quantia do seu stake inicial. Se um Indexador se tornar "excessivamente delegado" (por ex., com seu stake inicial multiplicado mais de 16 vezes), ele não poderá usar o GRT adicional dos Delegantes até aumentar o seu próprio stake na rede. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. A quantidade de recompensas recebidas por um Indexador pode variar com base no seu auto-stake, delegação aceita, qualidade de serviço, e muito mais fatores. ## Reserva de Tokens: Queima e Emissão -A reserva inicial de tokens é de 10 bilhões de GRT, com um alvo de emissão de 3% novos ao ano para recompensar os Indexadores por alocar stake em subgraphs. 
Portanto, a reserva total de tokens GRT aumentará por 3% a cada ano à medida que tokens novos são emitidos para Indexadores, pela sua contribuição à rede. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -O The Graph é projetado com vários mecanismos de queima para compensar pela emissão de novos tokens. Aproximadamente 1% da reserva de GRT é queimado todo ano, através de várias atividades na rede, e este número só aumenta conforme a atividade na rede cresce. Estas atividades de queima incluem: uma taxa de delegação de 0,5% sempre que um Delegante delega GRT a um Indexador; uma taxa de curadoria de 1% quando Curadores sinalizam em um subgraph; e 1% de taxas de query por dados de blockchain. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. [Total de GRT Queimado](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/pt/sps/introduction.mdx b/website/src/pages/pt/sps/introduction.mdx index 88ae1cd29f54..c355e80d015a 100644 --- a/website/src/pages/pt/sps/introduction.mdx +++ b/website/src/pages/pt/sps/introduction.mdx @@ -3,28 +3,29 @@ title: Introudução a Subgraphs Movidos pelo Substreams sidebarTitle: Introdução --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. 
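The tokenomics figures above — the 15,000 GRT delegation example, the 16× delegation cap on self-stake, and the 3% issuance versus roughly 1% burn — can be sanity-checked with quick arithmetic (Python; these are illustrative document figures, not live protocol values):

```python
# Quick arithmetic over the tokenomics figures quoted above.
# All constants are illustrative document figures, not live protocol values.

def delegator_rewards(delegated_grt: float, effective_rate: float) -> float:
    """Annual GRT rewards for a Delegator at an Indexer's effective rate."""
    return delegated_grt * effective_rate

def max_delegation(self_stake_grt: float) -> float:
    """An Indexer can use at most 16x their self-stake in delegated GRT."""
    return 16 * self_stake_grt

def supply_after(years: int, supply: float = 10e9,
                 issuance: float = 0.03, burn: float = 0.01) -> float:
    """Rough net supply path: ~3% issued and ~1% burned per year."""
    for _ in range(years):
        supply *= 1 + issuance - burn
    return supply

print(delegator_rewards(15_000, 0.10))  # 1500.0 — the example quoted above
print(max_delegation(100_000))          # an Indexer at the 100,000 GRT minimum
print(supply_after(1))                  # roughly 10.2 billion after one year
```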
+Melhore a eficiência e a escalabilidade do seu subgraph com o [Substreams](/substreams/introduction/) para transmitir dados pré-indexados de blockchain. ## Visão geral -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use um pacote Substreams (`.spkg`) como fonte de dados para que o seu subgraph ganhe acesso a um fluxo de dados de blockchain pré-indexados. Isto resulta num tratamento de dados mais eficiente e escalável, especialmente com redes de blockchain grandes ou complexas. ### Especificações Há dois metodos de ativar esta tecnologia: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Usar [gatilhos](/sps/triggers/)**: isto importa o modelo do Protobuf via um handler de subgraph, permitindo que o utilizador consuma de qualquer módulo do Substreams e mude toda a sua lógica para um subgraph. Este método cria as entidades diretamente no subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **[Mudanças de Entidade](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Ao inserir mais da lógica no Substreams, pode-se alimentar o rendimento do módulo diretamente no [graph-node](/indexing/tooling/graph-node/). No graph-node, os dados do Substreams podem ser usados para criar as entidades do seu subgraph. 
-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +É possível escolher onde colocar a sua lógica, seja no subgraph ou no Substreams. Porém, considere o que supre as suas necessidades de dados; o Substreams tem um modelo paralelizado, e os gatilhos são consumidos de forma linear no graph-node. ### Outros Recursos -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Visite os seguintes links para ver guias passo-a-passo sobre ferramentas de geração de código, para construir o seu primeiro projeto de ponta a ponta rapidamente: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/pt/sps/sps-faq.mdx b/website/src/pages/pt/sps/sps-faq.mdx index 2991b30adbe3..9c78ae2c3162 100644 --- a/website/src/pages/pt/sps/sps-faq.mdx +++ b/website/src/pages/pt/sps/sps-faq.mdx @@ -1,31 +1,31 @@ --- -title: 'Perguntas Frequentes: Subgraphs Movidos pelo Substreams' -sidebarTitle: FAQ +title: "Perguntas Frequentes: Subgraphs Movidos pelo Substreams" +sidebarTitle: Perguntas Frequentes --- ## O que são Substreams? -Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. 
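The `.spkg`-as-data-source setup described in the introduction above is wired up in the Subgraph manifest. A minimal sketch (field names follow the shape currently documented for Substreams-powered Subgraphs and may drift between spec versions; every name and file path here is an example, not a real package):

```yaml
# Sketch of a Substreams-powered Subgraph manifest. Field names follow the
# currently documented shape and may change; all values here are examples.
specVersion: 1.0.0
description: Example Substreams-powered Subgraph
schema:
  file: ./schema.graphql
dataSources:
  - kind: substreams
    name: example
    network: mainnet
    source:
      package:
        moduleName: graph_out
        file: example-v0.1.0.spkg
    mapping:
      apiVersion: 0.0.7 # check the docs for the apiVersion your graph-node expects
      kind: substreams/graph-entities
```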
+O Substreams é um mecanismo de processamento excecionalmente poderoso, capaz de consumir ricos fluxos de dados de blockchain. Ele permite refinar e moldar dados de blockchain, para serem digeridos rápida e continuamente por aplicativos de utilizador final. Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. -Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. +O Substreams é programado pela [StreamingFast](https://www.streamingfast.io/). Para mais informações, visite a [Documentação do Substreams](/substreams/introduction/). -## O que são subgraphs movidos por substreams? +## O que são subgraphs movidos por Substreams? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Subgraphs movidos pelo Substreams](/sps/introduction/) combinam o poder do Substreams com as queries de subgraphs. Ao editar um subgraph movido pelo Substreams, os dados produzidos pelas transformações do Substreams podem [produzir mudanças de entidade](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) compatíveis com entidades de subgraph. 
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +Se já entende da programação de subgraphs, observe que subgraphs movidos a Substreams podem ser consultados do mesmo jeito que se tivessem sido produzidos pela camada de transformação em AssemblyScript; isso com todos os benefícios do Subgraph, o que inclui uma API GraphQL dinâmica e flexível. -## Como subgraphs movidos a Substreams são diferentes de subgraphs? +## Como subgraphs movidos a Substreams diferem de subgraphs? Os subgraphs são compostos de fontes de dados que especificam eventos on-chain, e como transformar estes eventos através de handlers escritos em AssemblyScript. Estes eventos são processados em sequência, com base na ordem em que acontecem na chain. -Por outro lado, subgraphs movidos a substreams têm uma única fonte de dados que referencia um pacote de substreams, processado pelo Graph Node. Substreams têm acesso a mais dados granulares on-chain em comparação a subgraphs convencionais, e também podem se beneficiar de um processamento paralelizado em massa, o que pode diminuir a espera do processamento. +Por outro lado, subgraphs movidos pelo Substreams têm uma única fonte de dados que referencia um pacote de substreams, processado pelo Graph Node. Substreams têm acesso a mais dados granulares on-chain em comparação a subgraphs convencionais, e também podem se beneficiar de um processamento paralelizado em massa, o que pode diminuir muito a espera do processamento. ## Quais os benefícios do uso de subgraphs movidos a Substreams? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. 
They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Subgraphs movidos a Substreams combinam todos os benefícios do Substreams com o potencial de query de subgraphs. Eles também trazem mais composabilidade e indexações de alto desempenho ao The Graph. Eles também resultam em novos casos de uso de dados; por exemplo, após construir o seu Subgraph movido a Substreams, é possível reutilizar os seus [módulos de Substreams](https://substreams.streamingfast.io/documentation/develop/manifest-modules) para usar [coletores de dados](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) diferentes, como PostgreSQL, MongoDB e Kafka. ## Quais os benefícios do Substreams? @@ -35,7 +35,7 @@ Usar o Substreams incorre muitos benefícios, que incluem: - Indexação de alto desempenho: Indexação muito mais rápida através de clusters de larga escala de operações paralelas (como o BigQuery). -- Mergulho em qualquer lugar: Mergulhe seus dados onde quiser: PostgreSQL, MongoDB, Kafka, subgraphs, arquivos planos, Google Sheets. +- Colete dados em qualquer lugar: Mergulhe os seus dados onde quiser: PostgreSQL, MongoDB, Kafka, subgraphs, arquivos planos, Google Sheets. - Programável: Use códigos para personalizar a extração, realizar agregações de tempo de transformação, e modelar o seu resultado para vários sinks. @@ -67,7 +67,7 @@ Há muitos benefícios do uso do Firehose, que incluem: Para aprender como construir módulos do Substreams, leia a [documentação do Substreams](/substreams/introduction/). -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. 
+Para aprender como empacotar subgraphs e implantá-los no The Graph, veja a [documentação sobre subgraphs movidos pelo Substreams](/sps/introduction/). A [ferramenta de Codegen no Substreams mais recente](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) permitirá ao programador inicializar um projeto no Substreams sem a necessidade de código. @@ -75,7 +75,7 @@ A [ferramenta de Codegen no Substreams mais recente](https://streamingfastio.med Módulos de Rust são o equivalente aos mapeadores em AssemblyScript em subgraphs. Eles são compilados em WASM de forma parecida, mas o modelo de programação permite execuções paralelas. Eles definem a categoria de transformações e agregações que você quer aplicar aos dados de blockchain crus. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Veja a [documentação dos módulos](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) para mais detalhes. ## O que faz o Substreams compostável? @@ -85,11 +85,11 @@ Como exemplo, Fulana pode construir um módulo de preço de DEX, Sicrano pode us ## Como construir e publicar um Subgraph movido a Substreams? -After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +Após [definir](/sps/introduction/) um subgraph movido pelo Substreams, é possível usar a Graph CLI para implantá-lo no [Subgraph Studio](https://thegraph.com/studio/). ## Onde posso encontrar exemplos de Substreams e subgraphs movidos a Substreams? -Você pode visitar [este repo do Github](https://github.com/pinax-network/awesome-substreams) para encontrar exemplos de Substreams e subgraphs movidos a Substreams. 
+Você pode visitar [este repositório do Github](https://github.com/pinax-network/awesome-substreams) para encontrar exemplos de Substreams e subgraphs movidos a Substreams. ## O que Substreams e subgraphs movidos a Substreams significam para a Graph Network? diff --git a/website/src/pages/pt/sps/triggers.mdx b/website/src/pages/pt/sps/triggers.mdx index 548bde4ca531..eafeca1e373f 100644 --- a/website/src/pages/pt/sps/triggers.mdx +++ b/website/src/pages/pt/sps/triggers.mdx @@ -2,17 +2,17 @@ title: Gatilhos do Substreams --- -Use Custom Triggers and enable the full use GraphQL. +Use Gatilhos Personalizados e ative o uso completo da GraphQL. ## Visão geral -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Com Gatilhos Personalizados, é possível enviar dados diretamente ao arquivo de mapeamento do seu subgraph e às suas entidades; sendo esses aspetos parecidos com tabelas e campos. Assim, é possível usar a camada da GraphQL livremente. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +Estes dados podem ser recebidos e processados no handler do seu subgraph ao importar as definições do Protobuf emitidas pelo seu módulo do Substreams. Assim, o tratamento de dados na estrutura do subgraph fica mais simples e eficiente. -### Defining `handleTransactions` +### Como definir `handleTransactions` -O código a seguir demonstra como definir uma função `handleTransactions` num handler de subgraph. Esta função recebe bytes brutos do Substreams como um parâmetro e os decodifica num objeto `Transactions`. Uma nova entidade de subgraph é criada para cada transação. +O código a seguir demonstra como definir uma função `handleTransactions` num handler de subgraph. 
Esta função recebe bytes brutos do Substreams como um parâmetro e os descodifica num objeto `Transactions`. Uma nova entidade de subgraph é criada para cada transação. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,14 +34,14 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Você verá isto no arquivo `mappings.ts`: 1. Os bytes contendo dados do Substreams são descodificados no objeto `Transactions` gerado; este é usado como qualquer outro objeto AssemblyScript 2. Um loop sobre as transações 3. Uma nova entidade de subgraph é criada para cada transação -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +Para ver um exemplo detalhado de um subgraph baseado em gatilhos, [clique aqui](/sps/tutorial/). ### Outros Recursos -To scaffold your first project in the Development Container, check out one of the [How-To Guide](/substreams/developing/dev-container/). +Para estruturar o seu primeiro projeto no Recipiente de Programação, confira [este guia](/substreams/developing/dev-container/). diff --git a/website/src/pages/pt/sps/tutorial.mdx b/website/src/pages/pt/sps/tutorial.mdx index deb7589c4cdd..3fb6838f7f28 100644 --- a/website/src/pages/pt/sps/tutorial.mdx +++ b/website/src/pages/pt/sps/tutorial.mdx @@ -1,15 +1,15 @@ --- -title: 'Tutorial: Como Montar um Subgraph Movido a Substreams na Solana' +title: "Tutorial: Como Montar um Subgraph Movido a Substreams na Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Configure um subgraph, movido pelo Substreams e baseado em gatilhos, para um token da SPL (Biblioteca de Protocolos da Solana) da Solana. 
## Como Começar -For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) +Para ver um tutorial em vídeo sobre o assunto, [clique aqui](/sps/tutorial/#video-tutorial) -### Prerequisites +### Pré-requisitos Antes de começar: @@ -52,10 +52,10 @@ dataSources: network: solana-mainnet-beta source: package: - moduleName: map_spl_transfers # Módulo definido em substreams.yaml + moduleName: map_spl_transfers # Módulo definido no substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -63,9 +63,9 @@ dataSources: ### Passo 3: Defina as Entidades em `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Para definir os campos a guardar nas suas entidades de subgraph, atualize o arquivo `schema.graphql`. -Here is an example: +Por exemplo: ```graphql type MyTransfer @entity { @@ -81,9 +81,9 @@ Este schema define uma entidade `MyTransfer` com campos como `id`, `amount`, `so ### Passo 4: Controle Dados do Substreams no `mappings.ts` -With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. +Com os objetos do Protobuf criados, agora você pode tratar os dados descodificados do Substreams no seu arquivo `mappings.ts` no diretório `./src`. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +O exemplo abaixo demonstra como extrair as transferências não derivadas associadas à id de conta do Orca para entidades de subgraph: ```ts import { Protobuf } from 'as-proto/assembly' @@ -122,15 +122,15 @@ Para gerar objetos do Protobuf no AssemblyScript, execute: npm run protogen ``` -Este comando converte as definições do Protobuf em AssemblyScript, permitindo o uso destas no handler do subgraph. +Este comando converte as definições do Protobuf em AssemblyScript, permitindo o seu uso no handler do subgraph. ### Conclusão -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Parabéns! Está montado um subgraph movido a Substreams, baseado em gatilhos, para um token da SPL da Solana. Agora dá para personalizar mais o seu schema, os seus mapeamentos, e os seus módulos de modo que combinem com o seu caso de uso específico. 
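The handler pattern this tutorial relies on — decode the raw Substreams bytes, loop over the decoded records, and save one entity per record — can be sketched in plain TypeScript. The types and the decode step below are illustrative stand-ins only; a real mapping uses the `as-proto` generated decoders and `@graphprotocol/graph-ts` entities shown above:

```typescript
// Illustrative sketch of the trigger-handler flow: decode bytes, loop,
// create one entity per record. The Transfer/Transactions types and the
// JSON-based decode here are stand-ins for Protobuf-generated code.
interface Transfer {
  txHash: string;
  amount: number;
}

interface Transactions {
  transfers: Transfer[];
}

// Stand-in for Protobuf.decode<Transactions>(bytes, Transactions.decode)
function decodeTransactions(bytes: Uint8Array): Transactions {
  const json = new TextDecoder().decode(bytes);
  return JSON.parse(json) as Transactions;
}

// Mirrors handleTransactions: one "entity" stored per decoded transfer,
// keyed by a deterministic id (here, the transaction hash)
function handleTransactions(bytes: Uint8Array): Map<string, Transfer> {
  const store = new Map<string, Transfer>();
  const decoded = decodeTransactions(bytes);
  for (const transfer of decoded.transfers) {
    store.set(transfer.txHash, transfer);
  }
  return store;
}

const payload = new TextEncoder().encode(
  JSON.stringify({
    transfers: [
      { txHash: "0xaa", amount: 1 },
      { txHash: "0xbb", amount: 2 },
    ],
  })
);
const entities = handleTransactions(payload);
console.log(entities.size); // 2
```

The essential property is that the handler is a pure function of the bytes the Substreams module emits, which is what makes trigger-based mappings deterministic and re-runnable.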
-### Video Tutorial +### Tutorial em vídeo - + ### Outros Recursos diff --git a/website/src/pages/pt/subgraphs/_meta-titles.json b/website/src/pages/pt/subgraphs/_meta-titles.json index 0556abfc236c..a72543795a1d 100644 --- a/website/src/pages/pt/subgraphs/_meta-titles.json +++ b/website/src/pages/pt/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "querying": "Queries", + "developing": "Programação", + "guides": "How-to Guides", + "best-practices": "Boas práticas" } diff --git a/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx index f8f0fc8dedab..4217065c4fe7 100644 --- a/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Melhores Práticas de Subgraph Parte 4 - Como Melhorar a Velocidade da Indexação ao Evitar eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` são chamadas feitas de um subgraph a um node no Ethereum. Estas chamadas levam um bom tempo para retornar dados, o que retarda a indexação. Se possível, construa contratos inteligentes para emitir todos os dados necessários, para que não seja necessário usar `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Por que Evitar `eth_calls` É uma Boa Prática -Subgraphs são otimizados para indexar dados de eventos emitidos de contratos inteligentes. 
Um subgraph também pode indexar os dados que vêm de uma `eth_call`, mas isto pode atrasar muito a indexação de um subgraph, já que `eth_calls` exigem a realização de chamadas externas para contratos inteligentes. A capacidade de respostas destas chamadas depende não apenas do subgraph, mas também da conectividade e das respostas do node do Ethereum a ser consultado. Ao minimizar ou eliminar `eth_calls` nos nossos subgraphs, podemos melhorar muito a nossa velocidade de indexação. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Como É Um `eth_call`? -`eth_calls` tendem a ser necessárias quando os dados requeridos por um subgraph não estão disponíveis via eventos emitidos. Por exemplo, vamos supor que um subgraph precisa identificar se tokens ERC20 são parte de um pool específico, mas o contrato só emite um evento `Transfer` básico e não emite um evento que contém os dados que precisamos: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Isto é funcional, mas não ideal, já que ele atrasa a indexação do nosso subgraph. 
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.

## Como Eliminar `eth_calls`

@@ -54,7 +54,7 @@ Idealmente, o contrato inteligente deve ser atualizado para emitir todos os dado

event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```

-Com esta atualização, o subgraph pode indexar directamente os dados exigidos sem chamadas externas:
+With this update, the Subgraph can directly index the required data without external calls:

```typescript
import { Address } from '@graphprotocol/graph-ts'

@@ -96,22 +96,22 @@ A porção destacada em amarelo é a declaração de chamada. A parte antes dos

O próprio handler acessa o resultado desta `eth_call` exatamente como na secção anterior ao atrelar ao contrato e fazer a chamada. O Graph Node coloca em cache os resultados de `eth_calls` na memória e a chamada do handler retirará o resultado disto no cache de memória em vez de fazer uma chamada de RPC real.

-Nota: `eth_calls` declaradas só podem ser feitas em subgraphs com specVersion maior que 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.

## Conclusão

-O desempenho da indexação pode melhorar muito ao minimizar ou eliminar `eth_calls` nos nossos subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.

## Melhores Práticas para um Subgraph 1 – 6

-1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/)
+1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)

-2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/)
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)

-3. 
[Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx index dedf0bf2ffe2..6640242a3ddd 100644 --- a/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Boas Práticas de Subgraph 2 - Melhorar a Indexação e a Capacidade de Resposta de Queries com @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -O desempenho de um subgraph pode ser muito atrasado por arranjos no seu schema, já que esses podem crescer além dos milhares de entradas. Se possível, a diretiva `@derivedFrom` deve ser usada ao usar arranjos, já que ela impede a formação de grandes arranjos, simplifica handlers e reduz o tamanho de entidades individuais, o que melhora muito a velocidade da indexação e o desempenho dos queries. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Como Usar a Diretiva `@derivedFrom` @@ -15,7 +15,7 @@ Você só precisa adicionar uma diretiva `@derivedFrom` após o seu arranjo no s comments: [Comment!]! @derivedFrom(field: "post") ``` -o `@derivedFrom` cria relações eficientes de um-para-muitos, o que permite que uma entidade se associe dinamicamente com muitas entidades relacionadas com base em um campo na entidade relacionada. Esta abordagem faz com que ambos os lados do relacionamento não precisem armazenar dados duplicados e aumenta a eficácia do subgraph. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Exemplo de Caso de Uso para `@derivedFrom` @@ -60,30 +60,30 @@ type Comment @entity { Ao adicionar a diretiva `@derivedFrom`, este schema só armazenará os "Comentários" no lado "Comments" do relacionamento, e não no lado "Post". Os arranjos são armazenados em fileiras individuais, o que os faz crescer significativamente. Se o seu crescimento não for contido, isto pode permitir que o tamanho fique excessivamente grande. -Isto não só aumenta a eficiência do nosso subgraph, mas também desbloqueia três características: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Podemos fazer um query sobre o `Post` e ver todos os seus comentários. 2. Podemos fazer uma pesquisa reversa e um query sobre qualquer `Comment`, para ver de qual post ele vem. -3. 
Podemos usar [Carregadores de Campos Derivados](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) para ativar o acesso e manipulação de dados diretamente de relacionamentos virtuais nos nossos mapeamentos de subgraph. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusão -Usar a diretiva `@derivedFrom` nos subgraphs lida eficientemente com arranjos que crescem dinamicamente, o que melhora o desempenho da indexação e o retiro de dados. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. Para aprender mais estratégias detalhadas sobre evitar arranjos grandes, leia este blog por Kevin Jones: [Melhores Práticas no Desenvolvimento de Subgraphs: Como Evitar Grandes Arranjos](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx index d9f463501e94..60602417c85a 100644 --- a/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- -title: 'Melhores Práticas de Subgraph #6 - Use Enxertos para Implantar Hotfixes Mais Rápido' -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +title: "Melhores Práticas de Subgraph #6 - Use Enxertos para Implantar Hotfixes Mais Rápido" +sidebarTitle: Grafting and Hotfixing --- ## TLDR -O enxerto é uma função poderosa na programação de subgraphs, que permite a construção e implantação de novos subgraphs enquanto recicla os dados indexados dos já existentes. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Visão geral -Esta função permite a implantação rápida de hotfixes para problemas críticos, eliminando a necessidade de indexar o subgraph inteiro do zero novamente. Ao preservar dados históricos, enxertar diminui o tempo de espera e garante a continuidade em serviços de dados. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. 
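In manifest terms, grafting amounts to two small additions — a feature declaration and a `graft` block naming the base deployment and the last block whose data should be copied. The IDs and block number below are placeholders, matching the worked example later in this guide:

```yaml
features:
  - grafting # must be declared, or the deployment is rejected
graft:
  base: QmBaseDeploymentID # Deployment ID (not Subgraph ID) of the base Subgraph; placeholder
  block: 6000000 # data up to and including this block is copied from the base
```

Everything after `block` is then indexed fresh by the new Subgraph, which is why the hotfixed data source's `startBlock` should be the block immediately after it.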
## Benefícios de Enxertos para Hotfixes

1. **Lançamento Rápido**

- - **Espera Minimizada**: Quando um subgraph encontra um erro crítico e para de indexar, um enxerto permite que seja lançada uma solução imediata, sem esperar uma nova indexação.
- - **Recuperação Imediata**: O novo subgraph continua do último bloco indexado, garantindo o funcionamento ininterrupto dos serviços de dados.
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.

2. **Preservação de Dados**

- - **Reaproveitamento de Dados Históricos**: O enxerto copia os dados existentes do subgraph de origem; assim, não há como perder dados históricos valiosos.
+ - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records.
- **Consistência**: Mantém a continuidade de dados, que é crucial para aplicativos que dependem de dados históricos consistentes.

3. **Eficiência**
@@ -31,38 +31,38 @@ Esta função permite a implantação rápida de hotfixes para problemas crític

1. **Implantação Inicial sem Enxerto**

- - **Começar do Zero**: Sempre lance o seu subgraph inicial sem enxertos para que fique estável e funcione como esperado.
- - **Fazer Testes Minuciosos:** Valide o desempenho do subgraph para minimizar a necessidade de hotfixes futuros.
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.

2. **Implementação do Hotfix com Enxerto**

- **Identificar o Problema**: Quando ocorrer um erro crítico, determine o número de bloco do último evento indexado com êxito. 
- - **Criar um Novo Subgraph**: Programe um novo subgraph que inclui o hotfix. - - **Configure o Enxerto**: Use o enxerto para copiar dados até o número de bloco identificado do subgraph defeituoso. - - **Lance Rápido**: Edite o subgraph enxertado para reabrir o serviço o mais rápido possível. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Depois do Hotfix** - - **Monitore o Desempenho**: Tenha certeza que o subgraph enxertado está a indexar corretamente, e que o hotfix pode resolver o problema. - - **Reedite Sem Enxertos**: Agora que está estável, lance uma nova versão do subgraph sem enxertos para fins de manutenção a longo prazo. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Nota: Não é recomendado depender de enxertos indefinidamente, pois isto pode complicar a manutenção e implantação de futuras atualizações. - - **Atualize as Referências**: Redirecione quaisquer serviços ou aplicativos para que usem o novo subgraph, sem enxertos. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Considerações Importantes** - **Selecione Blocos Corretamente**: Escolha o número de bloco do enxerto com cuidado, para evitar perdas de dados. - **Dica**: Use o número de bloco do último evento corretamente processado. - - **Use a ID de Implantação**: Referencie a ID de Implantação do subgraph de origem, não a ID do Subgraph. - - **Nota**: A ID de Implantação é a identificadora única para uma implantação específica de subgraph. 
- - **Declaração de Funções**: Não se esqueça de declarar enxertos na lista de funções, no manifest do seu subgraph. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Exemplo: Como Implantar um Subgraph com Enxertos -Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou de indexar devido a um erro crítico. Veja como usar um enxerto para implementar um hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Manifest Falho de Subgraph (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d 2. 
**Novo Manifest Enxertado de Subgraph (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explicação:** -- **Atualização de Fonte de Dados**: O novo subgraph aponta para 0xNewContractAddress, que pode ser uma versão consertada do contrato inteligente. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Bloco Inicial**: Configure para um bloco após o último indexado com êxito, para evitar processar o erro novamente. - **Configuração de Enxerto**: - - **base**: ID de Implantação do subgraph falho. + - **base**: Deployment ID of the failed Subgraph. - **block**: Número de blocos onde o enxerto deve começar. 3. **Etapas de Implantação** @@ -135,10 +135,10 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d - **Ajuste o Manifest**: Conforme detalhado acima, atualize o `subgraph.yaml` com configurações de enxerto. - **Lance o Subgraph**: - Autentique com a Graph CLI. - - Lance o novo subgraph com `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Após a Implantação** - - **Verifique a Indexação**: Verifique se o subgraph está a indexar corretamente a partir do ponto de enxerto. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. 
- **Monitore os Dados**: Verifique se há novos dados sendo capturados, e se o hotfix funciona. - **Planeie Para uma Reedição**: Prepare a implantação de uma versão não enxertada, para mais estabilidade a longo prazo. @@ -146,9 +146,9 @@ Vamos supor que tens um subgraph a rastrear um contrato inteligente, que parou d O enxerto é uma ferramenta poderosa para implantar hotfixes rapidamente, mas deve ser evitado em algumas situações específicas — para manter a integridade dos dados e garantir o melhor desempenho. -- **Mudanças Incompatíveis de Schema**: Se o seu hotfix exigir a alteração do tipo de campos existentes ou a remoção de campos do seu esquema, não é adequado fazer um enxerto. O enxerto espera que o esquema do novo subgraph seja compatível com o schema do subgráfico base. Alterações incompatíveis podem levar a inconsistências e erros de dados, porque os dados existentes não se alinham com o novo schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Mudanças Significantes na Lógica de Mapeamento**: Quando o hotfix envolve modificações substanciais na sua lógica de mapeamento — como alterar o processamento de eventos ​de funções do handler — o enxerto pode não funcionar corretamente. A nova lógica pode não ser compatível com os dados processados ​​sob a lógica antiga, levando a dados incorretos ou indexação com falha. -- **Implantações na The Graph Network:** Enxertos não são recomendados para subgraphs destinados à rede descentralizada (mainnet) do The Graph. Um enxerto pode complicar a indexação e pode não ser totalmente apoiado por todos os Indexers, o que pode causar comportamento inesperado ou aumento de custos. 
Para implantações de mainnet, é mais seguro recomeçar a indexação do subgraph do zero, para garantir total compatibilidade e confiabilidade. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### **Controle de Riscos** @@ -157,31 +157,31 @@ O enxerto é uma ferramenta poderosa para implantar hotfixes rapidamente, mas de ## Conclusão -O enxerto é uma estratégia eficaz para implantar hotfixes no desenvolvimento de subgraphs, e ainda permite: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Se recuperar rapidamente** de erros críticos sem recomeçar a indexação. - **Preservar dados históricos**, mantendo a continuidade tanto para aplicativos quanto para utilizadores. - **Garantir a disponibilidade do serviço** ao minimizar o tempo de espera em períodos importantes de manutenção. -No entanto, é importante usar enxertos com cuidado e seguir as melhores práticas para controlar riscos. Após estabilizar o seu subgraph com o hotfix, planeie a implantação de uma versão não enxertada para garantir a estabilidade a longo prazo. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Outros Recursos - **[Documentação de Enxertos](/subgraphs/cookbook/grafting/)**: Substitua um Contrato e Mantenha o Seu Histórico com Enxertos - **[Como Entender IDs de Implantação](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Aprenda a diferença entre ID de Implantação e ID de Subgraph. 
-Ao incorporar enxertos ao seu fluxo de programação de subgraphs, é possível melhorar a sua capacidade de responder a problemas, garantindo que os seus serviços de dados permaneçam robustos e confiáveis. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 4124d0504cde..93d54d6a07e9 100644 --- a/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Boas Práticas de Subgraph 3 - Como Melhorar o Desempenho da Indexação e de Queries com Entidades Imutáveis e Bytes como IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ Enquanto outros tipos de IDs são possíveis, como String e Int8, recomendamos u ### Razões para Não Usar Bytes como IDs 1. Se IDs de entidade devem ser legíveis para humanos, como IDs numéricas automaticamente incrementadas ou strings legíveis, então Bytes como IDs não devem ser usados. -2. Em caso de integração dos dados de um subgraph com outro modelo de dados que não usa Bytes como IDs, então Bytes como IDs não devem ser usados. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Melhorias no desempenho de indexação e queries não são desejáveis. ### Concatenação com Bytes como IDs -É comum em vários subgraphs usar a concatenação de strings para combinar duas propriedades de um evento em uma ID única, como o uso de `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Mas como isto retorna um string, isto impede muito o desempenho da indexação e queries de subgraphs. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. 
However, because this returns a string, it significantly impedes Subgraph indexing and querying performance. Em vez disto, devemos usar o método `concatI32()` para concatenar propriedades de evento. Esta estratégia resulta numa ID `Bytes` que tem um desempenho muito melhor. @@ -172,20 +172,20 @@ Resposta de query: ## Conclusão -É comprovado que usar Entidades Imutáveis e Bytes como IDs aumenta muito a eficiência de subgraphs. Especificamente, segundo testes, houve um aumento de até 28% no desempenho de queries e uma aceleração de até 48% em velocidades de indexação. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Leia mais sobre o uso de Entidades Imutáveis e Bytes como IDs nesta publicação por David Lutterkort, Engenheiro de Software na Edge & Node: [Duas Melhorias Simples no Desempenho de Subgraphs](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/pruning.mdx b/website/src/pages/pt/subgraphs/best-practices/pruning.mdx index eb6afc85791f..4fb9bc557b22 100644 --- a/website/src/pages/pt/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Boas Práticas de Subgraph 1 - Acelerar Queries com Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -O [pruning](/developing/creating-a-subgraph/#prune) retira entidades de arquivo do banco de dados de um subgraph até um bloco especificado; e retirar entidades não usadas do banco de dados de um subgraph tende a melhorar muito o desempenho de queries de um subgraph. Usar o `indexerHints` é uma maneira fácil de fazer o pruning de um subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Como Fazer Pruning de um Subgraph com `indexerHints` @@ -13,14 +13,14 @@ Adicione uma secção chamada `indexerHints` ao manifest. O `indexerHints` tem três opções de `prune`: -- `prune: auto`: Guarda o histórico mínimo necessário, conforme configurado pelo Indexador, para otimizar o desempenho dos queries. 
Esta é a configuração geralmente recomendada e é padrão para todos os subgraphs criados pela `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number>`: Determina um limite personalizado no número de blocos históricos a serem retidos. - `prune: never`: Não será feito pruning de dados históricos; guarda o histórico completo, e é o padrão caso não haja uma secção `indexerHints`. `prune: never` deve ser selecionado caso queira [Queries de Viagem no Tempo](/subgraphs/querying/graphql-api/#time-travel-queries). -Podemos adicionar `indexerHints` aos nossos subgraphs ao atualizar o nosso `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx b/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx index b0228580d20f..bb25b602d8dd 100644 --- a/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/pt/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- -title: 'Melhores Práticas para um Subgraph #5 — Simplifique e Otimize com Séries Temporais e Agregações' -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +title: "Melhores Práticas para um Subgraph #5 — Simplifique e Otimize com Séries Temporais e Agregações" +sidebarTitle: Séries de Tempo e Agregações --- ## TLDR -Tirar vantagem de séries temporais e agregações em subgraphs pode melhorar bastante a velocidade da indexação e o desempenho dos queries. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Visão geral @@ -36,6 +36,10 @@ Séries temporais e agregações reduzem a sobrecarga do processamento de dados ## Como Implementar Séries Temporais e Agregações +### Pré-requisitos + +You need `spec version 1.1.0` for this feature. 
+ ### Como Definir Entidades de Séries Temporais Uma entidade de série temporal representa pontos de dados brutos coletados gradativamente. Ela é definida com a anotação `@entity(timeseries: true)`. Requisitos principais: @@ -51,7 +55,7 @@ Exemplo: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Exemplo: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -Neste exemplo, o campo `Stats` ("Estatísticas") agrega o campo de preços de Data de hora em hora, diariamente, e computa a soma. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Queries de Dados Agregados @@ -172,24 +176,24 @@ Os operadores e funções suportados incluem aritmética básica (+, -, \_, /), ### Conclusão -Implementar séries temporais e agregações em subgraphs é recomendado para projetos que lidam com dados baseados em tempo. Esta abordagem: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Melhora o Desempenho: Acelera a indexação e os queries ao reduzir a carga de processamento de dados. - Simplifica a Produção: Elimina a necessidade de lógica de agregação manual em mapeamentos. - Escala Eficientemente: Manuseia grandes quantias de dados sem comprometer a velocidade ou a capacidade de resposta. -Ao adotar esse padrão, os programadores podem criar subgraphs mais eficientes e escaláveis, fornecendo acesso mais rápido e confiável de dados aos utilizadores finais. 
Para saber mais sobre como implementar séries temporais e agregações, consulte o [Leia-me sobre Séries Temporais e Agregações](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) e experimente esse recurso nos seus subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Melhores Práticas para um Subgraph 1 – 6 -1. [Pruning: Reduza o Excesso de Dados do Seu Subgraph para Acelerar Queries](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Use o @derivedFrom para Melhorar a Resposta da Indexação e de Queries](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Melhore o Desempenho da Indexação e de Queries com o Uso de Bytes como IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Evite `eth-calls` para Acelerar a Indexação](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Simplifique e Otimize com Séries Temporais e Agregações](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Lance Hotfixes Mais Rápido com Enxertos](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/pt/subgraphs/billing.mdx b/website/src/pages/pt/subgraphs/billing.mdx index f73ae48ff725..a028354d7e66 100644 --- a/website/src/pages/pt/subgraphs/billing.mdx +++ b/website/src/pages/pt/subgraphs/billing.mdx @@ -10,7 +10,9 @@ Há dois planos disponíveis para queries de subgraphs na Graph Network. - **Plano de Crescimento**: Inclui tudo no Plano Grátis, com todos os queries após a cota de 100.000 mensais exigindo pagamentos com cartão de crédito ou GRT. Este plano é flexível o suficiente para cobrir equipes que estabeleceram dapps numa variedade de casos de uso. - +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + + ## Pagamentos de Queries com cartão de crédito diff --git a/website/src/pages/pt/subgraphs/cookbook/arweave.mdx b/website/src/pages/pt/subgraphs/cookbook/arweave.mdx index a84800d73d48..4fdd129460c0 100644 --- a/website/src/pages/pt/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Construindo Subgraphs no Arweave --- -> O apoio ao Arweave no Graph Node, e no Subgraph Studio, está em beta: por favor nos contacte no [Discord](https://discord.gg/graphprotocol) se tiver dúvidas sobre como construir subgraphs no Arweave! +> O apoio ao Arweave no Graph Node, e no Subgraph Studio, está em fase beta: por favor nos contacte no [Discord](https://discord.gg/graphprotocol) se tiver dúvidas sobre como construir subgraphs no Arweave! Neste guia, você aprenderá como construir e lançar Subgraphs para indexar a blockchain Arweave. @@ -25,8 +25,8 @@ O [Graph Node](https://github.com/graphprotocol/graph-node) é atualmente capaz Para construir e lançar Subgraphs no Arweave, são necessários dois pacotes: -1. `@graphprotocol/graph-cli` acima da versão 0.30.2 — Esta é uma ferramenta de linha de comandos para a construção e implantação de subgraphs. 
[Clique aqui](https://www.npmjs.com/package/@graphprotocol/graph-cli) para baixá-la usando o `npm`. -2. `@graphprotocol/graph-ts` acima da versão 0.27.0 — Esta é uma ferramenta de linha de comandos para a construção e implantação de subgraphs. [Clique aqui](https://www.npmjs.com/package/@graphprotocol/graph-ts) para baixá-la usando o `npm`. +1. `@graphprotocol/graph-cli` acima da versão 0.30.2 — Esta é uma ferramenta de linha de comandos para a construção e implantação de subgraphs. [Clique aqui](https://www.npmjs.com/package/@graphprotocol/graph-cli) para baixá-la com o `npm`. +2. `@graphprotocol/graph-ts` acima da versão 0.27.0 — Uma biblioteca de tipos específicos de subgraphs. [Clique aqui](https://www.npmjs.com/package/@graphprotocol/graph-ts) para baixar com `npm`. ## Os componentes de um subgraph @@ -46,40 +46,40 @@ Os requisitos para subgraphs do Arweave estão cobertos pela [documentação](/d Esta é a lógica que determina como os dados devem ser retirados e armazenados quando alguém interage com as fontes de dados que estás a escutar. Os dados são traduzidos e armazenados baseados no schema que listaste. -Durante o desenvolvimento de um subgraph, existem dois comandos importantes: +During Subgraph development there are two key commands: ``` -$ graph codegen # gera tipos do arquivo de schema identificado no manifest -$ graph build # gera Web Assembly dos arquivos AssemblyScript, e prepara todos os arquivos do subgraph em uma pasta /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Definição de Manifest de Subgraph -O manifest do subgraph `subgraph.yaml` identifica as fontes de dados para o subgraph, os gatilhos de interesse, e as funções que devem ser executadas em resposta a tais gatilhos. 
Veja abaixo um exemplo de um manifest de subgraph, para um subgraph no Arweave: +O manifest do subgraph `subgraph.yaml` identifica as fontes de dados para o subgraph, os gatilhos de interesse, e as funções que devem ser executadas em resposta a tais gatilhos. Veja abaixo um exemplo de um manifest, para um subgraph no Arweave: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: - file: ./schema.graphql # link ao arquivo schema + file: ./schema.graphql # link to the schema file dataSources: - kind: arweave name: arweave-blocks - network: arweave-mainnet # The Graph apoia apenas a Mainnet do Arweave + network: arweave-mainnet # The Graph only supports Arweave Mainnet source: - owner: 'ID-OF-AN-OWNER' # A chave pública de uma carteira no Arweave - startBlock: 0 # coloque isto como 0 para começar a indexar da gênese da chain + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/blocks.ts # link ao arquivo com os mapeamentos no Assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: - Block - Transaction blockHandlers: - - handler: handleBlock # o nome da função no arquivo de mapeamento + - handler: handleBlock # the function name in the mapping file transactionHandlers: - - handler: handleTx # o nome da função no arquivo de mapeamento + - handler: handleTx # the function name in the mapping file ``` - Subgraphs no Arweave introduzem uma nova categoria de fonte de dados (`arweave`) @@ -99,7 +99,7 @@ Fontes de dados no Arweave apoiam duas categorias de handlers: ## Definição de Schema -A definição de Schema descreve a estrutura do banco de dados resultado do subgraph, e os relacionamentos entre entidades. Isto é agnóstico da fonte de dados original. 
Para mais detalhes na definição de schema de subgraph, [clique aqui](/developing/creating-a-subgraph/#the-graphql-schema). +A definição de Schema descreve a estrutura do banco de dados resultante do subgraph, e os relacionamentos entre entidades. Isto é agnóstico da fonte de dados original. Para mais detalhes sobre a definição de schema de subgraph, [clique aqui](/developing/creating-a-subgraph/#the-graphql-schema). ## Mapeamentos em AssemblyScript @@ -160,7 +160,7 @@ graph deploy --access-token ## Consultando um Subgraph no Arweave -O ponto final do GraphQL para subgraphs no Arweave é determinado pela definição do schema, com a interface existente da API. Visite a [documentação da API da GraphQL](/subgraphs/querying/graphql-api/) para mais informações. +O endpoint do GraphQL para subgraphs no Arweave é determinado pela definição do schema, com a interface existente da API. Visite a [documentação da API da GraphQL](/subgraphs/querying/graphql-api/) para mais informações. ## Exemplos de Subgraphs @@ -168,17 +168,17 @@ Aqui está um exemplo de subgraph para referência: - [Exemplo de subgraph para o Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) -## FAQ +## Perguntas Frequentes ### Um subgraph pode indexar o Arweave e outras chains? -Não, um subgraph só pode apoiar fontes de dados de apenas uma chain/rede. +No, a Subgraph can only support data sources from one chain/network. ### Posso indexar os arquivos armazenados no Arweave? Atualmente, The Graph apenas indexa o Arweave como uma blockchain (seus blocos e transações). -### Posso identificar pacotes do Bundlr em meu subgraph? +### Posso identificar pacotes do Bundlr no meu subgraph? Isto não é apoiado no momento. @@ -188,7 +188,7 @@ O source.owner pode ser a chave pública ou o endereço da conta do ### Qual é o formato atual de encriptação? 
-Os dados são geralmente passados aos mapeamentos como Bytes, que se armazenados diretamente, são retornados ao subgraph em um formato `hex` (por ex. hashes de transações e blocos). Você pode querer convertê-lo a um formato seguro para `base64` ou `base64 URL` em seus mapeamentos, para combinar com o que é exibido em exploradores de blocos, como o [Arweave Explorer](https://viewblock.io/arweave/). +Os dados são geralmente passados aos mapeamentos como Bytes, que, se armazenados diretamente, são retornados ao subgraph em um formato `hex` (por ex. hashes de transações e blocos). Vale converter estes em um formato compatível com `base64` ou `base64 URL` em seus mapeamentos, para combinar com o que é exibido em exploradores de blocos, como o [Arweave Explorer](https://viewblock.io/arweave/). A seguinte função de helper `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` pode ser usada, e será adicionada ao `graph-ts`: diff --git a/website/src/pages/pt/subgraphs/cookbook/enums.mdx b/website/src/pages/pt/subgraphs/cookbook/enums.mdx index d76ea4c23c4b..6851d45ad0f1 100644 --- a/website/src/pages/pt/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, ou tipos de enumeração, são um tipo de dados específico que permite d ### Exemplo de Enums no seu Schema -Se estiver a construir um subgraph para rastrear o histórico de posse de tokens em um marketplace, cada token pode passar por posses diferentes, como `OriginalOwner`, `SecondOwner`, e `ThirdOwner`. Ao usar enums, é possível definir essas posses específicas, assim garantindo que só são nomeados valores predefinidos. +Se estiver a construir um subgraph para rastrear o histórico de posse de tokens em um mercado, cada token pode passar por posses diferentes, como `OriginalOwner`, `SecondOwner`, e `ThirdOwner`. Ao usar enums, é possível definir essas posses específicas, o que garante que só são nomeados valores predefinidos. 
É possível definir enums no seu schema; assim definidos, a representação de string dos valores de enum podem ser usados para configurar um campo de enum numa entidade. @@ -65,7 +65,7 @@ Enums provém segurança de dados, minimizam os riscos de erros de digitação, > Nota: o guia a seguir usa o contrato inteligente de NFTs CryptoCoven. -Para definir enums para os vários marketplaces com apoio a troca de NFTs, use o seguinte no seu schema de subgraph: +Para definir enums para os vários mercados com apoio a troca de NFTs, use o seguinte no seu schema de subgraph: ```gql # Enum para Marketplaces com que o contrato CryptoCoven interagiu(provavelmente Troca/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Como Usar Enums para Marketplaces de NFT -Quando definidos, enums podem ser usados no seu subgraph para categorizar transações ou eventos. +Quando definidos, os enums podem ser usados no seu subgraph para categorizar transações ou eventos. Por exemplo: ao registrar vendas de NFT, é possível usar o enum para especificar o marketplace envolvido na ação. diff --git a/website/src/pages/pt/subgraphs/cookbook/grafting.mdx b/website/src/pages/pt/subgraphs/cookbook/grafting.mdx index cbfc42ddc895..ffaf06038f89 100644 --- a/website/src/pages/pt/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Como Substituir um Contrato e Manter a sua História com Enxertos --- -Neste guia, aprenda como construir e lançar novos subgraphs com o enxerto de subgraphs existentes. +Neste guia, aprenda como construir e implantar novos subgraphs com o enxerto de subgraphs existentes. ## O que é Enxerto? -O processo de enxerto reutiliza os dados de um subgraph existente e o indexa em um bloco seguinte. Isto é útil durante o desenvolvimento para rapidamente superar erros simples nos mapeamentos, ou fazer um subgraph existente funcionar temporariamente após ele ter falhado. 
Ele também pode ser usado ao adicionar uma característica a um subgraph que demora para ser indexado do zero. +O processo de enxerto reutiliza os dados de um subgraph existente e o indexa em um bloco seguinte. Isto é útil durante o desenvolvimento para superar erros simples rapidamente nos mapeamentos, ou fazer um subgraph existente funcionar temporariamente após ele ter falhado. Ele também pode ser usado ao adicionar uma característica a um subgraph que demora para ser indexado do zero. -O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema do subgraph base, mas é apenas compatível com ele. Ele deve ser um schema válido no seu próprio mérito, mas pode desviar do schema do subgraph base nas seguintes maneiras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Ele adiciona ou remove tipos de entidade - Ele retira atributos de tipos de entidade @@ -26,17 +26,17 @@ Neste tutorial, cobriremos um caso de uso básico. Substituiremos um contrato ex ## Notas Importantes sobre Enxertos ao Migrar Para a Graph Network -> **Cuidado**: Não é recomendado usar enxertos para subgraphs editados na The Graph Network +> **Cuidado**: Não recomendamos usar enxertos para subgraphs editados na The Graph Network ### Qual a Importância Disto? -Isto é um recurso poderoso que permite que os programadores "enxertem" um subgraph em outro, o que, efetivamente, transfere dados históricos do subgraph existente até uma versão nova. Não é possível enxertar um subgraph da Graph Network de volta ao Subgraph Studio. +É um recurso poderoso que permite que os programadores "enxertem" um subgraph em outro, o que, efetivamente, transfere dados históricos do subgraph existente até uma versão nova. 
Não é possível enxertar um subgraph da Graph Network de volta ao Subgraph Studio. ### Boas práticas **Migração Inicial**: Ao implantar o seu subgraph pela primeira vez na rede descentralizada, faça-o sem enxertos. Verifique se o subgraph está estável e funciona como esperado. -**Atualizações Subsequentes**: quando o seu subgraph estiver ativo e estável na rede descentralizada, será possível usar enxertos para versões futuras, para tornar a transição mais suave e preservar dados históricos. +**Atualizações Subsequentes**: quando o seu subgraph estiver ativo e estável na rede descentralizada, será possível usar enxertos para versões futuras, para suavizar a transição e preservar dados históricos. Ao aderir a estas diretrizes, dá para minimizar riscos e garantir um processo de migração mais suave. @@ -46,14 +46,14 @@ Construir subgraphs é uma parte essencial do The Graph, descrita mais profundam - [Exemplo de repositório de subgraph](https://github.com/Shiyasmohd/grafting-tutorial) -> Nota: O contrato usado no subgraph foi tirado do seguinte [Kit para Iniciantes de Hackathon](https://github.com/schmidsi/hackathon-starterkit). +> Observação: O contrato usado no subgraph foi tirado do seguinte [Kit para Iniciantes de Hackathon](https://github.com/schmidsi/hackathon-starterkit). ## Definição de Manifest de Subgraph -O manifest do subgraph `subgraph.yaml` identifica as fontes de dados para o subgraph, os gatilhos de interesse, e as funções que devem ser executadas em resposta a esses gatilhos. Veja abaixo um exemplo de manifesto de subgraph para usar: +O manifest `subgraph.yaml` identifica as fontes de dados para o subgraph, os gatilhos de interesse, e as funções que devem ser executadas em resposta a esses gatilhos. 
Veja abaixo um exemplo de manifest de subgraph para usar: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -92,8 +92,8 @@ Enxertos exigem a adição de dois novos itens ao manifest do subgraph original: features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph - block: 5956000 # block number + base: Qm... # ID do subgraph base + block: 5956000 # Número do bloco ``` - `features:` é uma lista de todos os [nomes de função](/developing/creating-a-subgraph/#experimental-features) usados. @@ -105,7 +105,7 @@ Os valores `base` e ​​`block` podem ser encontrados com a implantação de d 1. Vá para o [Subgraph Studio](https://thegraph.com/studio/) e crie um subgraph na testnet da Sepolia chamado `graft-example` 2. Siga as direções na seção `AUTH & DEPLOY` na sua página de subgraph, na pasta `graft-example` do repositório -3. Ao terminar, verifique que o subgraph está a indexar corretamente. Se executar o seguinte comando no The Graph Playground +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -145,9 +145,9 @@ Após verificar que o subgraph está a indexar corretamente, será possível atu O subgraph.yaml do substituto terá um novo endereço de contrato. Isto pode acontecer quando atualizar o seu dapp, relançar um contrato, etc. 1. Vá para o [Subgraph Studio](https://thegraph.com/studio/) e crie um subgraph na testnet da Sepolia chamado `graft-replacement` -2. Crie um novo manifesto. O `subgraph.yaml` para `graph-replacement` contém um endereço de contrato diferente e novas informações sobre como ele deve enxertar. 
Estes são o `block` do [último evento importante emitido](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) pelo contrato antigo, e o `base` do subgraph antigo. A ID de subgraph `base` é a `Deployment ID` do seu subgraph `graph-example` original. Você pode encontrá-la no Subgraph Studio.
+2. Crie um novo manifest. O `subgraph.yaml` para `graph-replacement` contém um endereço de contrato diferente e novas informações sobre como ele deve enxertar. Estes são o `block` do [último evento importante emitido](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) pelo contrato antigo, e o `base` do subgraph antigo. A ID de subgraph `base` é a `Deployment ID` do seu subgraph `graph-example` original. Você pode encontrá-la no Subgraph Studio.
3. Siga as instruções na seção `AUTH & DEPLOY` da sua página de subgraph, na pasta `graft-replacement` do repositório
-4. Ao terminar, verifique que o subgraph está a indexar corretamente. Se executar o seguinte comando no The Graph Playground
+4. Ao terminar, verifique se o Subgraph está a indexar corretamente. Se executar o seguinte comando no The Graph Playground

```graphql
{
@@ -187,7 +187,7 @@ Ele deve retornar algo como:

Repare que o subgraph `graft-replacement` está a indexar a partir de dados `graph-example` mais antigos e dados mais novos do novo endereço de contrato. O contrato original emitiu dois eventos `Withdrawal`: [Evento 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) e [Evento 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). O novo contrato emitiu um `Withdrawal` depois: o [Evento 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). As duas transações indexadas anteriormente (Evento 1 e 2) e a nova transação (Evento 3) foram combinadas no subgraph `graft-replacement`.
-Parabéns! Enxertaste um subgraph em outro subgraph. +Parabéns! Você acaba de enxertar um subgraph sobre outro subgraph. ## Outros Recursos diff --git a/website/src/pages/pt/subgraphs/cookbook/near.mdx b/website/src/pages/pt/subgraphs/cookbook/near.mdx index 58143e87a809..db728bc51e51 100644 --- a/website/src/pages/pt/subgraphs/cookbook/near.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/near.mdx @@ -2,7 +2,7 @@ title: Construção de Subgraphs na NEAR --- -Este guia é uma introdução à construção de subgraphs para indexar contratos inteligentes na blockchain [NEAR](https://docs.near.org/). +Este guia explica como construir subgraphs para indexar contratos inteligentes na blockchain [NEAR](https://docs.near.org/). ## O que é NEAR? @@ -29,7 +29,7 @@ Subgraphs são baseados em eventos; ou seja, eles esperam e então processam eve A programação de subgraphs no NEAR exige o `graph-cli` acima da versão `0.23.0`, e o `graph-ts` acima da versão `0.23.0`. -> Construir um subgraph NEAR é um processo muito parecido com a construção de um subgraph que indexa o Ethereum. +> Construir um subgraph na NEAR é um processo muito parecido com a construção de um subgraph que indexa o Ethereum. Há três aspectos de definição de subgraph: @@ -39,11 +39,11 @@ Há três aspectos de definição de subgraph: **Mapeamentos de AssemblyScript:** [Código AssemblyScript](/subgraphs/developing/creating/graph-ts/api/) que traduz dos dados do evento para as entidades definidas no seu esquema. O apoio à NEAR introduz tipos de dados específicos da NEAR e novas funções de análise JSON. 
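Como ilustração do segundo aspecto (o schema da GraphQL), uma definição mínima e puramente hipotética de entidade para recibos da NEAR poderia ter esta forma — os nomes de entidade e de campos são ilustrativos, não fazem parte de nenhum subgraph real:

```graphql
# Esboço hipotético: nomes de entidade e de campos são ilustrativos.
type Receipt @entity(immutable: true) {
  id: ID!
  block: BigInt!
  signerId: String!
}
```

É a partir de um schema como este que o `graph codegen` gera os tipos usados nos mapeamentos de AssemblyScript.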
-Durante o desenvolvimento de um subgraph, existem dois comandos importantes:
+Durante o desenvolvimento de um Subgraph, existem dois comandos importantes:

```bash
-$ graph codegen # gera tipos do arquivo de schema identificado no manifest
-$ graph build # gera Web Assembly dos arquivos AssemblyScript, e prepara todos os arquivos do subgraph em uma pasta /build
+$ graph codegen # gera tipos a partir do arquivo de schema identificado no manifest
+$ graph build # gera Web Assembly dos arquivos AssemblyScript, e prepara todos os arquivos do Subgraph em uma pasta /build
```

### Definição de Manifest de Subgraph

@@ -51,23 +51,23 @@ $ graph build # gera Web Assembly dos arquivos AssemblyScript, e prepara todos o

O manifest do subgraph (`subgraph.yaml`) identifica as fontes de dados para o subgraph, os gatilhos de interesse, e as funções que devem ser executadas em resposta a tais gatilhos. Veja abaixo um exemplo de manifest para um subgraph na NEAR:

```yaml
-specVersion: 0.0.2
+specVersion: 1.3.0
schema:
  file: ./src/schema.graphql # link para o arquivo de schema
dataSources:
  - kind: near
    network: near-mainnet
    source:
      account: app.good-morning.near # Esta fonte de dados monitorará esta conta
      startBlock: 10662188 # Necessário para a NEAR
    mapping:
-      apiVersion: 0.0.5
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      blockHandlers:
-        - handler: handleNewBlock # nome da função no arq. de mapeamento
+        - handler: handleNewBlock # nome da função no arquivo de mapeamento
      receiptHandlers:
-        - handler: handleReceipt # nome da função no arq. de mapeamento
-      file: ./src/mapping.ts # link ao arq.
com os mapeamentos de Assemblyscript
+        - handler: handleReceipt # nome da função no arquivo de mapeamento
+      file: ./src/mapping.ts # link para o arquivo com os mapeamentos de AssemblyScript
```

- Subgraphs na NEAR introduzem um novo tipo (`kind`) de fonte de dados (`near`)
@@ -92,7 +92,7 @@ As fontes de dados na NEAR apoiam duas categorias de handlers:

### Definição de Schema

-A definição de Schema descreve a estrutura do banco de dados resultado do subgraph, e os relacionamentos entre entidades. Isto é agnóstico da fonte de dados original. Para mais detalhes na definição de schema de subgraph, [clique aqui](/developing/creating-a-subgraph/#the-graphql-schema).
+A definição de Schema descreve a estrutura do banco de dados resultante do subgraph, e os relacionamentos entre entidades. Isto é agnóstico quanto à fonte de dados original. Para mais detalhes sobre a definição de schema de subgraph, [clique aqui](/developing/creating-a-subgraph/#the-graphql-schema).

### Mapeamentos em AssemblyScript

@@ -178,7 +178,7 @@ O Subgraph Studio e o Indexador de atualização na Graph Network apoiam atualme

- `near-mainnet`
- `near-testnet`

-Para mais informações sobre criar e implantar subgraphs no Subgraph Studio, clique [aqui](/deploying/deploying-a-subgraph-to-studio/).
+Para saber mais sobre como criar e implantar subgraphs no Subgraph Studio, clique [aqui](/deploying/deploying-a-subgraph-to-studio/).

Para começo de conversa, o primeiro passo consiste em "criar" o seu subgraph - isto só precisa ser feito uma vez. No Subgraph Studio, isto pode ser feito do [seu Painel](https://thegraph.com/studio/): "Criar um subgraph".
@@ -186,10 +186,10 @@ Quando o seu subgraph estiver pronto, implante o seu subgraph com o comando de C ```sh $ graph create --node # cria um subgraph num Graph Node local (no Subgraph Studio, isto é feito via a interface) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # sobe os arquivos do build a um ponto final IPFS especificado, e implanta o subgraph num Graph Node com base no hash IPFS do manifest +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # envia os arquivos do build a um ponto final IPFS especificado, e implanta o subgraph num Graph Node com base no hash IPFS do manifest ``` -A configuração do nódulo dependerá de onde o subgraph será lançado. +A configuração do nódulo dependerá de onde o subgraph será implantado. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Quando o seu subgraph for lançado, ele será indexado pelo Graph Node. O seu progresso pode ser conferido com um query no próprio subgraph: +Quando o seu subgraph for implantado, ele será indexado pelo Graph Node. O seu progresso pode ser conferido com um query no próprio subgraph: ```graphql { @@ -228,7 +228,7 @@ Em breve, falaremos mais sobre como executar os componentes acima. ## Como Consultar um Subgraph na NEAR -O ponto final do GraphQL para subgraphs na NEAR é determinado pela definição do schema, com a interface existente da API. Visite a [documentação da API da GraphQL](/subgraphs/querying/graphql-api/) para mais informações. +O endpoint do GraphQL para subgraphs na NEAR é determinado pela definição do schema, com a interface existente da API. Visite a [documentação da API da GraphQL](/subgraphs/querying/graphql-api/) para mais informações. 
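Por exemplo, assumindo um schema que exponha uma entidade `receipts` (nomes puramente ilustrativos — confira o schema real do seu subgraph), um query típico contra esse endpoint teria esta forma:

```graphql
{
  receipts(first: 5, orderBy: id) {
    id
    signerId
  }
}
```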
## Exemplos de Subgraphs

@@ -238,15 +238,15 @@ Aqui estão alguns exemplos de subgraphs para referência:

[Recibos da NEAR](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)

-## FAQ
+## Perguntas Frequentes

### Como o beta funciona?

-O apoio à NEAR está em beta; podem ocorrer mais mudanças na API enquanto continuamos a melhorar a integração. Por favor, contacte-nos em near@thegraph.com para podermos apoiar-te na construção de subgraphs no NEAR e avisar-te sobre os acontecimentos mais recentes!
+O apoio à NEAR está em fase beta; podem ocorrer mais mudanças na API enquanto continuamos a melhorar a integração. Por favor, contacte-nos em near@thegraph.com para podermos apoiar você na construção de subgraphs na NEAR e avisar sobre as novidades mais recentes!

### Um subgraph pode indexar chains da NEAR e da EVM?

-Não, um subgraph só pode apoiar fontes de dados de apenas uma chain/rede.
+Não, um Subgraph só pode apoiar fontes de dados de uma única chain/rede.

### Os subgraphs podem reagir a gatilhos mais específicos?

@@ -270,9 +270,9 @@ Não há apoio a isto. Estamos a avaliar se esta funcionalidade é necessária p

Não há apoio a isto no momento. Estamos a avaliar se esta funcionalidade é necessária para indexação.

-### Subgraphs no Ethereum apoiam versões "pendentes" e "atuais". Como posso lançar uma versão "pendente" de um subgraph no NEAR?
+### Os subgraphs na Ethereum apoiam versões "pendentes" e "atuais". Como posso implantar uma versão "pendente" de um subgraph na NEAR?

-No momento, não há apoio à funcionalidade de pendências para subgraphs na NEAR. Entretanto, podes lançar uma nova versão para um subgraph de "nome" diferente, e quando este for sincronizado com a cabeça da chain, podes relançá-la para seu subgraph de "nome" primário, que usará o mesmo ID de lançamento subjacente — e aí, o subgraph principal sincronizará instantaneamente.
+No momento, não há apoio à funcionalidade de pendências para subgraphs na NEAR.
Entretanto, é possível lançar uma nova versão para um subgraph de "nome" diferente, e quando este for sincronizado com a cabeça da chain, será possível relançá-la para o seu subgraph de "nome" primário, que usará o mesmo ID de implantação subjacente — e aí, o subgraph principal sincronizará instantaneamente.

### A minha pergunta não foi respondida. Onde posso conseguir mais ajuda sobre como construir subgraphs na NEAR?

diff --git a/website/src/pages/pt/subgraphs/cookbook/polymarket.mdx b/website/src/pages/pt/subgraphs/cookbook/polymarket.mdx
index add043fc2af3..7f5948c674d9 100644
--- a/website/src/pages/pt/subgraphs/cookbook/polymarket.mdx
+++ b/website/src/pages/pt/subgraphs/cookbook/polymarket.mdx
@@ -1,19 +1,19 @@
---
title: Como Consultar Dados de Blockchain do Polymarket com Subgraphs no The Graph
-sidebarTitle: Query Polymarket Data
+sidebarTitle: Queries de dados do Polymarket
---

-Consulte os dados na chain do Polymarket usando a GraphQL via subgraphs na The Graph Network. Subgraphs são APIs descentralizadas energizadas pelo The Graph, um protocolo para indexação e consulta de dados de blockchains.
+Solicite os dados na chain do Polymarket com a GraphQL, via subgraphs na The Graph Network. Subgraphs são APIs descentralizadas impulsionadas pelo The Graph, um protocolo para indexação e consulta de dados de blockchains.

## Subgraph do Polymarket no Graph Explorer

-Dá para ver um playground interativo de queries na [página do subgraph do Polymarket no The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), onde você pode testar qualquer query.
+Dá para ver um playground interativo de queries na [página do subgraph do Polymarket no The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), onde se pode testar qualquer query.
![Playground da Polymarket](/img/Polymarket-playground.png)

## Como Usar o Editor Visual de Queries

-O editor visual de queries permite-te testar exemplos de query do seu subgraph.
+O editor visual de queries permite-lhe testar exemplos de query do seu subgraph.

Você pode usar o GraphiQL Explorer para compor os seus queries da GraphQL clicando nos campos desejados.

@@ -73,7 +73,7 @@ Você pode usar o GraphiQL Explorer para compor os seus queries da GraphQL clica

## Schema da GraphQL do Polymarket

-O esquema deste subgraph é definido [aqui no GitHub do Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+O schema deste subgraph é definido [aqui no GitHub do Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).

### Ponto Final do Subgraph do Polymarket

@@ -145,4 +145,4 @@ axios(graphQLRequest)

Para mais informações sobre queries de dados do seu subgraph, leia mais [aqui](/subgraphs/querying/introduction/).

-Para explorar todas as maneiras de otimizar e personalizar o seu subgraph para melhor desempenho, leia mais sobre [como criar um subgraph aqui](/developing/creating-a-subgraph/).
+Para explorar todas as maneiras de otimizar e personalizar o seu subgraph para obter um melhor desempenho, leia mais sobre [como criar um subgraph aqui](/developing/creating-a-subgraph/).
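Fora do playground, o mesmo query pode ser enviado por qualquer cliente HTTP. O esboço em TypeScript abaixo apenas monta o corpo JSON de uma requisição GraphQL; a entidade consultada é ilustrativa — confira os nomes reais no schema do subgraph:

```typescript
// Esboço: monta o corpo JSON de uma requisição GraphQL para um subgraph.
// A query abaixo usa uma entidade ilustrativa; consulte o schema real.
interface GraphQLRequest {
  query: string
  variables?: Record<string, unknown>
}

function buildRequestBody(query: string, variables?: Record<string, unknown>): string {
  // Inclui "variables" apenas quando fornecidas, como num POST GraphQL típico.
  const body: GraphQLRequest = variables ? { query, variables } : { query }
  return JSON.stringify(body)
}

const body = buildRequestBody('{ markets(first: 1) { id } }')
console.log(body)
```

O valor retornado pode ser usado como `body` de um `fetch` ou `axios.post` contra o endpoint do subgraph.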
diff --git a/website/src/pages/pt/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/pt/subgraphs/cookbook/secure-api-keys-nextjs.mdx index 768ee1418880..c3c2d17e30e5 100644 --- a/website/src/pages/pt/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,7 +4,7 @@ title: Como Proteger Chaves de API com Componentes do Servidor Next.js ## Visão geral -Podemos proteger a nossa chave API no frontend do nosso dApp com [componentes do servidor Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Para ainda mais segurança, também podemos [restringir a nossa chave API a certos domínios ou subgraphs no Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +Podemos proteger a nossa chave API no frontend do nosso dApp (aplicativo descentralizado) com [componentes do servidor Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Para ainda mais segurança, também podemos [restringir a nossa chave API a certos domínios ou subgraphs no Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). Neste manual, veremos como criar um componente de servidor Next.js que faz queries em um subgraph enquanto esconde a chave API do frontend. @@ -18,11 +18,11 @@ Neste manual, veremos como criar um componente de servidor Next.js que faz queri Num aplicativo de React normal, chaves API incluídas no código do frontend podem ser expostas ao lado do cliente, o que apresenta um risco de segurança. É normal o uso de arquivos `.env`, mas estes não protegem as chaves por completo, já que o código do React é executado no lado do cliente (client-side), o que expõe a chave API nos headers. Os componentes do servidor Next.js abordam este problema via a execução de operações sensíveis server-side. 
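Para ilustrar o princípio de manter a chave fora do cliente, o esboço abaixo concentra a montagem do URL do gateway numa função que só roda no servidor. O formato do URL é uma suposição baseada no gateway público do The Graph — verifique na documentação oficial:

```typescript
// Esboço: montagem server-side do URL do gateway do The Graph.
// Num Server Component, a chave viria de uma variável de ambiente no servidor
// (por exemplo, process.env.GRAPH_API_KEY — nome ilustrativo) e nunca seria
// enviada ao cliente.
function buildGatewayUrl(apiKey: string, subgraphId: string): string {
  // Formato de URL assumido; confirme na documentação do gateway.
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

console.log(buildGatewayUrl('chave-de-exemplo', 'Qm123'))
```

Como a função nunca é importada por um componente de cliente, a chave não aparece no bundle enviado ao navegador.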
-### Como usar renderização client-side para fazer queries em um subgraph
+### Como renderizar pelo lado do cliente para fazer queries em um subgraph

![Renderização client-side](/img/api-key-client-side-rendering.png)

-### Prerequisites
+### Pré-requisitos

- Uma chave API do [Subgraph Studio](https://thegraph.com/studio)
- Conhecimentos básicos de Next.js e React.
diff --git a/website/src/pages/pt/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/pt/subgraphs/cookbook/subgraph-composition-three-sources.mdx
new file mode 100644
index 000000000000..fc990b6eecbd
--- /dev/null
+++ b/website/src/pages/pt/subgraphs/cookbook/subgraph-composition-three-sources.mdx
@@ -0,0 +1,98 @@
+---
+title: Como Agregar Dados com a Composição de Subgraphs
+sidebarTitle: Construa um Subgraph Componível com Múltiplos Subgraphs
+---
+
+Otimize o seu Subgraph mesclando dados de três Subgraphs de origem, independentes, num único Subgraph componível para melhorar a agregação de dados.
+
+> Avisos Importantes:
+>
+> - A composição de um subgraph está incorporada à CLI, e a implantação é feita com o [Subgraph Studio](https://thegraph.com/studio/).
+> - Este recurso requer a versão 1.3.0 de `specVersion`.
+
+## Visão geral
+
+A composição de Subgraphs permite o uso de um Subgraph como fonte de dados para outro, e também permite que ele consuma e responda às mudanças de entidades. Em vez de buscar diretamente os dados na chain, um Subgraph pode procurar atualizações de outro Subgraph e reagir às mudanças. Isso serve para agregar dados de vários Subgraphs ou desencadear ações com base em atualizações externas.
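A relação descrita acima aparece no manifest como uma fonte de dados de tipo `subgraph`. O esboço abaixo usa valores ilustrativos (nome, rede, ID de implantação e bloco inicial):

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: SourceSubgraph
    network: mainnet
    source:
      # 'address' é a ID de implantação do Subgraph de origem (valor ilustrativo)
      address: 'Qm...'
      startBlock: 0
```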
+
+## Pré-requisitos
+
+Para implantar **todos os** Subgraphs localmente, é necessário ter o seguinte:
+
+- Uma instância do [Graph Node](https://github.com/graphprotocol/graph-node) em execução local
+- Uma instância de [IPFS](https://docs.ipfs.tech/) em execução local
+- [Node.js](https://nodejs.org) e npm
+
+## Como Começar
+
+O guia a seguir fornece exemplos para a definição de três Subgraphs de origem para criar um poderoso Subgraph composto.
+
+### Especificações
+
+- Para fins de simplicidade, todos os Subgraphs de origem usam apenas handlers de blocos. No entanto, num ambiente real, cada Subgraph de origem usará dados de contratos inteligentes diferentes.
+- Os exemplos abaixo mostram como importar e estender o schema de outro Subgraph para melhorar a sua funcionalidade.
+- Cada Subgraph de origem é otimizado com uma entidade específica.
+- Todos os comandos listados instalam as dependências necessárias, geram código baseado no schema da GraphQL, constroem o Subgraph, e o implantam na instância local do Graph Node.
+
+### Passo 1. Implante o Subgraph de Origem de Tempo de Bloco
+
+Este primeiro Subgraph de origem calcula o tempo de bloco para cada bloco.
+
+- Ele importa schemas de outros Subgraphs e adiciona uma entidade `block` com um campo `timestamp`, que representa o tempo em que cada bloco foi minerado.
+- Procura eventos de blockchain relacionados ao tempo (por exemplo, data e hora de blocos) e processa esses dados para atualizar as entidades do Subgraph de acordo.
+
+Para implantar este Subgraph localmente, execute os seguintes comandos:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Passo 2. Implante o Subgraph de Origem de Custo de Bloco
+
+Este segundo Subgraph de origem indexa o custo de cada bloco.
+
+#### Funções Importantes
+
+- Importa schemas de outros Subgraphs e adiciona uma entidade `block` com campos relacionados ao custo.
+- Procura eventos da blockchain relacionados aos custos (por exemplo, taxas de gás, custos de transação), e processa esses dados para atualizar as entidades do Subgraph de acordo.
+
+Para implantar este Subgraph localmente, execute os mesmos comandos listados acima.
+
+### Passo 3. Implante o Subgraph de Origem de Tamanho de Bloco
+
+Este terceiro Subgraph de origem indexa o tamanho de cada bloco. Para implantar este Subgraph localmente, execute os mesmos comandos acima.
+
+#### Funções Importantes
+
+- Ele importa schemas de outros Subgraphs e adiciona uma entidade `block` com um campo `size`, que representa o tamanho de cada bloco.
+- Procura eventos da blockchain relacionados ao tamanho dos blocos (por exemplo, armazenamento ou volume), e processa esses dados para atualizar as entidades do Subgraph de acordo.
+
+### Passo 4. Combine os Subgraphs num Único Subgraph de Estatísticas de Bloco
+
+Este Subgraph composto combina e agrega as informações dos três Subgraphs de origem acima, fornecendo uma visão unificada das estatísticas dos blocos. Para implantar este Subgraph localmente, execute os mesmos comandos acima.
+
+> Nota:
+>
+> - Qualquer alteração a um Subgraph de origem provavelmente gerará uma nova ID de implantação.
+> - Certifique-se de atualizar a ID de implantação no endereço de origem de dados do manifest do Subgraph, para aproveitar as últimas alterações.
+> - Todos os Subgraphs de origem devem ser implantados antes de implantar o Subgraph composto.
+
+#### Funções Importantes
+
+- Fornece um modelo consolidado de dados que abrange todas as métricas relevantes de bloco.
+- Combina dados de três Subgraphs de origem e fornece uma visão abrangente das estatísticas de blocos, para permitir queries e análises mais complexas.
+
+## Conclusão
+
+- Esta ferramenta poderosa escalará a sua programação de Subgraphs e permitirá a combinação de vários Subgraphs.
+- A configuração inclui a implantação de três Subgraphs de origem e uma implantação final do Subgraph composto.
+- Esta função desbloqueia a escalabilidade e melhora a eficiência tanto na programação como na manutenção.
+
+## Outros Recursos
+
+- Confira todo o código deste exemplo [neste repositório do GitHub](https://github.com/isum/subgraph-composition-example).
+- Para adicionar recursos avançados ao seu Subgraph, confira a página sobre [recursos avançados](/developing/creating/advanced/).
+- Para saber mais sobre agregações, confira a página sobre [Séries Temporais e Agregações](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/pt/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/pt/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..ac64c9d575b0
--- /dev/null
+++ b/website/src/pages/pt/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Melhore a Construção do Seu Subgraph Usando a Composição com o Sushiswap v3 na Base
+sidebarTitle: Como Compor um Subgraph com o Sushiswap v3 na Base
+---
+
+Alavanque a composição de um subgraph para acelerar o tempo de desenvolvimento. Crie um subgraph de base com dados essenciais, e em seguida, crie mais subgraphs em cima dele.
+
+> Avisos Importantes:
+>
+> - A composição de um subgraph está incorporada à CLI, e você pode implantar com o [Subgraph Studio](https://thegraph.com/studio/).
+> - Você pode usar Subgraphs existentes, mas eles devem ser implantados novamente com `specVersion` 1.3.0, que não exige a programação de código novo.
+> - Vale reestruturar o seu subgraph para dividir a lógica conforme você começa a compor subgraphs.
+
+## Introdução
+
+Subgraphs compostos permitem que você combine fontes de dados de vários subgraphs num novo, deixando a programação de subgraphs mais rápida e flexível.
A composição de subgraphs permite que você crie e mantenha subgraphs menores e mais concentrados, que formam coletivamente um conjunto de dados maior, interconectado. + +### Vantagens da Composição + +A composição de subgraphs é um recurso poderoso para fins de dimensionamento, permitindo: + +- Reciclagem, mistura, e combinação de dados existentes +- Otimização de programação e queries +- Uso de múltiplas fontes de dados (até cinco subgraphs de origem) +- Sincronização acelerada do seu Subgraph +- Solução de erros e otimização da ressincronização + +## Visão Geral da Arquitetura + +A configuração deste exemplo envolve dois Subgraphs: + +1. **Subgraph de Origem**: Rastreia os dados do evento como entidades. +2. **Subgraph Dependente**: Usa o Subgraph de origem como uma fonte de dados. + +Esses exemplos podem ser encontrados nos diretórios `source` e `dependent`. + +- O **subgraph de origem** é um subgraph básico de rastreamento de eventos, que regista eventos emitidos por contratos relevantes. +- O **subgraph dependente** faz referência ao subgraph de origem como uma fonte de dados, usando as entidades da fonte como gatilhos. + +Embora o subgraph de origem seja um Subgraph padrão, o dependente usa o recurso de composição de Subgraphs. + +### Subgraph de Origem + +O Subgraph de origem rastreia os eventos do Subgraph do Sushiswap v3 na chain Base. O arquivo de configuração deste Subgraph é `source/subgraph.yaml`. + +> O `source/subgraph.yaml` emprega o recurso avançado de Subgraphs, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). Para rever o código deste `source/subgraph.yaml`, confira o [repositório de exemplo do Subgraph de origem](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). 
+ +### Subgraph Dependente + +O Subgraph dependente está no arquivo `dependent/subgraph.yaml`, que especifica o Subgraph de origem como fonte de dados. Este subgraph usa entidades da fonte para acionar ações específicas com base em alterações nessas entidades. + +> Para rever o código para este `dependent/subgraph.yaml`, confira o [repositório de exemplo de Subgraph dependente](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Como Começar + +A seguir, veja um guia que ilustra como usar um Subgraph como uma fonte de dados para outro. Este exemplo usa: + +- Subgraph do Sushiswap v3 na chain Base +- Dois Subgraphs (com a possibilidade de usar até **5 Subgraphs de origem** na sua programação). + +### Passo 1. Configure o seu Subgraph de Origem + +Para definir o Subgraph de origem como fonte de dados no Subgraph dependente, inclua o seguinte em `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Aqui, `source.address` se refere ao ID de implantação do Subgraph de origem, e `startBlock` especifica o bloco a partir do qual a indexação deve começar. + +### Passo 2. 
Defina Handlers no Subgraph Dependente + +Veja abaixo um exemplo de definição de handlers no Subgraph dependente: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Atualização de preços e ticks - raiz quadrada do pool + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Atualização de preços do token + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Atualização de preços de ETH em USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Atualização de preço derivado do ETH para tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +Neste exemplo, a função `handleInitialize` é acionada quando uma nova entidade `Initialize` é criada no subgraph de origem, passada como `EntityTrigger`. O handler atualiza o pool e as entidades do token, com base em dados da nova entidade `Initialize`. + +`EntityTrigger` possui três campos: + +1. `operation`: Especifica o tipo de operação, que pode ser `Create`, `Modify`, ou `Remove`. +2. `type`: Indica o tipo de entidade. +3. `data`: Contém os dados da entidade. + +Os programadores podem então determinar ações específicas para os dados das entidades, com base no tipo de operação. + +## Conclusão + +- Use esta ferramenta poderosa para escalar rapidamente a sua programação de Subgraphs e reutilizar os dados existentes. +- A configuração inclui os seguintes atos: criar um Subgraph de origem e referenciá-lo num Subgraph dependente. 
+- Os handlers são definidos no Subgraph dependente para executar ações, com base nas alterações nas entidades do Subgraph de origem.
+
+Este método abre portas para composição e escalabilidade, e melhora a eficiência tanto na programação como na manutenção.
+
+## Outros Recursos
+
+Para usar outros recursos avançados do seu Subgraph, confira os [recursos avançados do Subgraph](/developing/creating/advanced/) e [este repositório de composição do Subgraph](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+Para saber mais sobre como definir três Subgraphs de origem, confira [este repositório de composição de Subgraphs](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/pt/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/pt/subgraphs/cookbook/subgraph-debug-forking.mdx
index 8d1a1bc6444a..402147459aea 100644
--- a/website/src/pages/pt/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/pt/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@
title: Debugging de Subgraphs Rápido e Fácil Com Forks
---

-Assim como vários sistemas que processam uma abundância de dados, os Indexadores do Graph (Graph Nodes) podem demorar um pouco para sincronizar o seu subgraph com a blockchain de destino. A discrepância entre mudanças rápidas para fins de debugging e os longos tempos de espera necessários para o indexing é extremamente contraprodutiva, e nós sabemos muito bem disso. É por isso que introduzimos o **forking de subgraphs**, programado pela [LimeChain](https://limechain.tech/); neste artigo. Veja como dá para acelerar bastante o debugging de subgraphs!
+Assim como vários sistemas que processam uma abundância de dados, os Indexadores do Graph (Graph Nodes) podem demorar um pouco para sincronizar o seu subgraph com a blockchain de destino.
A discrepância entre mudanças rápidas para fins de solução de problemas e os longos tempos de espera necessários para o indexing é extremamente contraprodutiva, e nós sabemos muito bem disso. É por isso que introduzimos o **forking de subgraphs**, programado pela [LimeChain](https://limechain.tech/). Neste artigo, veja como dá para acelerar bastante o processo de debug de subgraphs!

## Ok, o que é isso?

-**Forking de subgraphs** é o processo de retirar entidades tranquilamente do armazenamento de _outro_ subgraph (normalmente, remoto).
+O **forking de subgraphs** é o processo de retirar entidades tranquilamente do armazenamento de _outro_ subgraph (normalmente, remoto).

-No contexto do debugging, o **forking de subgraphs** permite debugar o seu subgraph falho no bloco _X_ sem precisar esperar que ele sincronize até o bloco _X_.
+No contexto do debug, o **forking de subgraphs** permite debugar o seu subgraph falho no bloco _X_ sem precisar esperar que ele sincronize até o bloco _X_.

## O quê?! Como?!

-Quando um subgraph é implementado a um Graph Node remoto para indexação, e ele falha no bloco _X_, a boa notícia é que o Graph Node ainda servirá queries GraphQL com seu armazenamento, que é sincronizado até o bloco _X_. Ótimo! Podemos aproveitar este armazenamento "atualizado" para consertar os bugs que surgem ao indexar o bloco _X_.
+Quando um subgraph é implementado em um Graph Node remoto para indexação, e ele falha no bloco _X_, a boa notícia é que o Graph Node ainda servirá queries GraphQL com o seu armazenamento, que é sincronizado até o bloco _X_. Ótimo! Podemos aproveitar este armazenamento "atualizado" para consertar os bugs que surgem ao indexar o bloco _X_.

-Resumindo, faremos um fork do subgraph falho de um Graph Node remoto para garantir que o subgraph seja indexado até o bloco _X_, para fornecer ao subgraph implantado localmente uma visão atualizada do estado da indexação; sendo este debugado no bloco _X_.
+Resumindo, faremos um _fork do subgraph falho_ de um Graph Node remoto para garantir que o subgraph seja indexado até o bloco _X_, para fornecer ao subgraph implantado localmente uma visão atualizada do estado da indexação; sendo este debugado no bloco _X_.
 
 ## Por favor, quero ver códigos!
 
-Para manter a concentração no debugging de subgraphs, vamos começar com coisas simples: siga o [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) a indexar o contrato inteligente Ethereum Gravity.
+Para manter o foco no debug de subgraphs, vamos começar com coisas simples: siga o [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) a indexar o contrato inteligente Ethereum Gravity.
 
 Aqui estão os handlers definidos para a indexação dos `Gravatars`, sem qualquer bug:
 
@@ -44,7 +44,7 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
 }
 ```
 
-Que pena! Quando eu implanto o meu lindo subgraph no Subgraph Studio, ele falha com o erro "Gravatar not found" (Gravatar não encontrado).
+Que pena! Quando eu implanto o meu lindo subgraph no Subgraph Studio, ele apresenta o erro "Gravatar not found" (Gravatar não encontrado).
 
 A maneira mais comum de tentar consertar este erro é:
 
@@ -59,7 +59,7 @@ Com o **forking de subgraphs**, essencialmente, podemos eliminar este passo. É
 
 0. Crie um Graph Node local com o conjunto de **_fork-base apropriado_**.
 1. Faça uma mudança na fonte dos mapeamentos, que talvez possa resolver o problema.
-2. Implante-o no Graph Node local, **faça um fork do subgraph falho**, e **comece do bloco problemático\_**.
+2. Implante-o no Graph Node local, **faça um fork do subgraph falho**, e **comece do bloco problemático**.
 3. Se quebrar novamente, volte ao passo 1. Se não: Eba!
 
 Agora, você deve ter duas perguntas:
 
@@ -69,7 +69,7 @@ Agora, você deve ter duas perguntas:
 
 E eu respondo:
 
-1. 
`fork-base` é o URL "base", tal que quando a _id de subgraph_ é atrelada, o URL resultante (`/`) se torna um ponto final GraphQL válido para o armazenamento do subgraph. +1. `fork-base` é o URL "base", tal que quando a _id de subgraph_ é atrelada, o URL resultante (`/`) se torna um endpoint válido da GraphQL para o armazenamento do subgraph. 2. Forking é fácil, não precisa se preocupar: ```bash @@ -80,7 +80,7 @@ Aliás, não esqueça de preencher o campo `dataSources.source.startBlock` no ma Aqui está o que eu faço: -1. Eu crio um Graph Node local ([veja como](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) com a opção `fork-base` de `https://api.thegraph.com/subgraphs/id/`, já que eu vou forkar um subgraph, o bugado que eu lancei anteriormente, do [Subgraph Studio](https://thegraph.com/studio/). +1. Eu monto um Graph Node local ([veja como](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) com a opção `fork-base` de `https://api.thegraph.com/subgraphs/id/`, já que eu vou forkar um subgraph, o bugado que eu lancei anteriormente, do [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -98,4 +98,4 @@ $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqH ``` 4. Eu verifico os logs produzidos pelo Graph Node local e... eba! Parece que deu tudo certo. -5. Lanço o meu subgraph, agora livre de bugs, a um Graph Node remoto e vivo feliz para sempre! (mas sem batatas) +5. Agora que o meu subgraph está livre de bugs, o implantarei num Graph Node remoto e viverei feliz para sempre! 
(mas sem batatas) diff --git a/website/src/pages/pt/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/pt/subgraphs/cookbook/subgraph-uncrashable.mdx index 522740ee8246..0cf00eecaac8 100644 --- a/website/src/pages/pt/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -6,11 +6,11 @@ O [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrasha ## Por que integrar com a Subgraph Uncrashable? -- **Atividade Contínua**. Entidades mal-cuidadas podem causar panes em subgraphs, o que pode ser perturbador para projetos dependentes no The Graph. Prepare funções de helper para deixar os seus subgraphs "impossíveis de travar" e garantir a continuidade dos negócios. +- **Atividade Contínua**. Entidades mal-cuidadas podem causar panes em subgraphs, o que pode ser inconveniente para projetos dependentes no The Graph. Prepare funções de helper para deixar os seus subgraphs "impossíveis de travar" e garantir a continuidade dos negócios. - **Totalmente Seguro**. Alguns dos problemas comuns vistos na programação de subgraphs são problemas de carregamento de entidades não definidas; o não-preparo ou inicialização de todos os valores de entidades; e condições de corrida sobre carregamento e salvamento de entidades. Garanta que todas as interações com entidades sejam completamente atômicas. -- **Configurável pelo Utilizador**. Determine valores padrão e configure o nível necessário de verificações de segurança para o seu projeto. São gravados registros de aviso que indicam onde há uma brecha de lógica no subgraph, auxiliando o processo de solução de problemas e garantir a precisão dos dados. +- **Configurável pelo Utilizador**. Determine valores padrão e configure o nível necessário de verificações de segurança para o seu projeto. São gravados registos que indicam onde há uma brecha de lógica no subgraph, para auxiliar o processo de solução de problemas e garantir a precisão dos dados. 
**Características Importantes** @@ -18,7 +18,7 @@ O [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrasha - A estrutura também inclui uma maneira (através do arquivo de configuração) de criar funções personalizadas, mas seguras, para configurar grupos de variáveis de entidade. Desta maneira, é impossível que o utilizador carregue/use uma entidade de graph obsoleta, e também é impossível esquecer de salvar ou determinar uma variável exigida pela função. -- Logs de aviso são registrados como logs que indicam onde há uma quebra de lógica no subgraph, para ajudar a consertar o problema e garantir a segurança dos dados. +- Logs de aviso são registados como logs que indicam onde há uma quebra de lógica no subgraph, para ajudar a consertar o problema e garantir a segurança dos dados. A Subgraph Uncrashable pode ser executada como flag opcional usando o comando codegen no Graph CLI. diff --git a/website/src/pages/pt/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/pt/subgraphs/cookbook/transfer-to-the-graph.mdx index e5ad802a2941..54a4c96092df 100644 --- a/website/src/pages/pt/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/pt/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,8 +1,8 @@ --- -title: Transfira-se para The Graph +title: Faça uma transferência para o The Graph --- -Migre rapidamente os seus subgraphs, de qualquer plataforma para a [rede descentralizada do The Graph](https://thegraph.com/networks/). +Migre os seus subgraphs rapidamente de qualquer plataforma para a [rede descentralizada do The Graph](https://thegraph.com/networks/). ## Vantagens de Trocar para The Graph @@ -21,9 +21,9 @@ Migre rapidamente os seus subgraphs, de qualquer plataforma para a [rede descent ### Como Criar um Subgraph no Subgraph Studio - Entre no [Subgraph Studio](https://thegraph.com/studio/) e conecte a sua carteira de criptomoedas. -- Clique em "Create a Subgraph" ("Criar um Subgraph"). 
É recomendado nomear o subgraph em caixa de título: por exemplo, "Nome De Subgraph Nome da Chain". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Observação: após a edição, o nome do subgraph poderá ser editado, mas isto sempre exigirá uma ação on-chain sempre, então pense bem no nome que irá dar. +> Observação: após a edição, o nome do subgraph poderá ser alterado, mas isto sempre exigirá uma ação on-chain, então pense bem no nome que irá dar. ### Instale a Graph CLI @@ -72,7 +72,7 @@ graph deploy --ipfs-hash > Para atrair cerca de 3 indexadores para fazer queries no seu subgraph, recomendamos curar pelo menos 3.000 GRT. Para saber mais sobre a curadoria, leia sobre [Curadoria](/resources/roles/curating/) no The Graph. -Dá para começar a [fazer queries](/subgraphs/querying/introduction/) em qualquer subgraph enviando um query GraphQL para o ponto final da URL de query do subgraph, localizado na parte superior da página do Explorer no Subgraph Studio. +Dá para começar a [fazer queries](/subgraphs/querying/introduction/) em qualquer subgraph ao enviar um query GraphQL para o ponto final do URL de query do subgraph, localizado na parte superior da página do Explorer no Subgraph Studio. #### Exemplo @@ -80,7 +80,7 @@ Dá para começar a [fazer queries](/subgraphs/querying/introduction/) em qualqu ![URL de Query](/img/cryptopunks-screenshot-transfer.png) -A URL de queries para este subgraph é: +O URL de queries para este subgraph é: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**sua-chave-de-api**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ Agora, você só precisa preencher **sua própria chave de API** para começar a ### Como Monitorar o Estado do Seu Subgraph -Após a atualização, poderá acessar e gerir os seus subgraphs no [Subgraph Studio](https://thegraph.com/studio/) e explorar todos os subgraphs no [The Graph Explorer](https://thegraph.com/networks/). 
+Após a atualização, poderá acessar e administrar os seus subgraphs no [Subgraph Studio](https://thegraph.com/studio/), e explorar todos os subgraphs no [The Graph Explorer](https://thegraph.com/networks/).
 
 ### Outros Recursos
 
-- Para criar e editar um novo subgraph, veja o [Guia de Início Rápido](/subgraphs/quick-start/).
-- Para explorar todas as maneiras de otimizar e personalizar o seu subgraph para melhor desempenho, leia mais sobre [como criar um subgraph aqui](/developing/creating-a-subgraph/).
+- Para criar e editar um novo subgraph rapidamente, veja o [Guia de Início Rápido](/subgraphs/quick-start/).
+- Para explorar todas as maneiras de otimizar e personalizar o seu subgraph para melhorar o desempenho, leia mais sobre [como criar um subgraph aqui](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/pt/subgraphs/developing/_meta-titles.json b/website/src/pages/pt/subgraphs/developing/_meta-titles.json
index 01a91b09ed77..48b57c9aae14 100644
--- a/website/src/pages/pt/subgraphs/developing/_meta-titles.json
+++ b/website/src/pages/pt/subgraphs/developing/_meta-titles.json
@@ -1,6 +1,6 @@
 {
-  "creating": "Creating",
-  "deploying": "Deploying",
-  "publishing": "Publishing",
-  "managing": "Managing"
+  "creating": "Criação",
+  "deploying": "Implante",
+  "publishing": "Edição",
+  "managing": "Gestão"
 }
diff --git a/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx b/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx
index 5dfeb1034a5f..51adc5cea9a6 100644
--- a/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/pt/subgraphs/developing/creating/advanced.mdx
@@ -1,23 +1,23 @@
 ---
-title: Advanced Subgraph Features
+title: Funções Avançadas de Subgraph
 ---
 
 ## Visão geral
 
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
 
-| Feature                                              | Name             |
-| ---------------------------------------------------- | ---------------- |
-| [Non-fatal errors](#non-fatal-errors)                | `nonFatalErrors` |
-| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [Grafting](#grafting-onto-existing-subgraphs)        | `grafting`       |
+| Função                                                 | Nome             |
+| ------------------------------------------------------ | ---------------- |
+| [Erros não fatais](#non-fatal-errors)                  | `nonFatalErrors` |
+| [Busca em full-text](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Enxertos](#grafting-onto-existing-subgraphs)          | `grafting`       |
 
-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
 
 ```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
 description: Gravatar for Ethereum
 features:
   - fullTextSearch
@@ -25,17 +25,17 @@ features:
 dataSources: ...
 ```
 
-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.
 
 ## Séries de Tempo e Agregações
 
-Prerequisites:
+Pré-requisitos:
 
-- Subgraph specVersion must be ≥1.1.0.
+- O specVersion do Subgraph deve ser 1.1.0 ou superior.
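Como referência para a seção de séries temporais que segue, eis um esboço hipotético (baseado no exemplo da documentação do graph-node, não neste diff) da entidade de série temporal `Data`, usada como fonte pela agregação `Stats` mostrada mais adiante no exemplo de schema — os campos `id: Int8!` e `timestamp: Timestamp!` são obrigatórios:

```graphql
# Esboço: entidade de série temporal ("dados brutos"), imutável por definição
type Data @entity(timeseries: true) {
  id: Int8! # obrigatório: ID exclusiva do tipo Int8
  timestamp: Timestamp! # obrigatório: registro de data e hora
  price: BigDecimal! # valor usado pelos cálculos de agregação
}

# Agregação pré-declarada, calculada por hora e por dia a partir de "Data"
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```

Os pontos de `Data` são salvos em handlers comuns; `Stats` é preenchida automaticamente pelo Graph Node ao fim de cada intervalo.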
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Exemplo de Schema @@ -53,33 +53,33 @@ type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { } ``` -### How to Define Timeseries and Aggregations +### Como Definir Séries Temporais e Agregações -Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: +Entidades de séries temporais são definidas com `@entity(timeseries: true)` no schema da GraphQL. Toda entidade deste tipo deve: -- have a unique ID of the int8 type -- have a timestamp of the Timestamp type -- include data that will be used for calculation by aggregation entities. +- ter uma ID exclusiva do tipo int8 +- ter um registro de data e hora do tipo Timestamp +- incluir dados a serem usados para cálculo pelas entidades de agregação. -These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. +Estas entidades de Série Temporal podem ser guardadas em handlers regulares de gatilho, e atuam como “dados brutos” para as entidades de agregação. -Aggregation entities are defined with `@aggregation` in the GraphQL schema. 
Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). +As entidades de agregação são definidas com `@aggregation` no schema da GraphQL. Toda entidade deste tipo define a fonte de onde retirará dados (que deve ser uma entidade de Série Temporal), determina os intervalos (por ex., hora, dia) e especifica a função de agregação que usará (por ex., soma, contagem, min, max, primeiro, último). -Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. +As entidades de agregação são calculadas automaticamente com base na fonte especificada no final do intervalo necessário. #### Intervalos de Agregação Disponíveis -- `hour`: sets the timeseries period every hour, on the hour. -- `day`: sets the timeseries period every day, starting and ending at 00:00. +- `hour`: configura o período de série de tempo para cada hora, em ponto. +- `day`: configura o período de série de tempo para cada dia, a começar e terminar à meia-noite. #### Funções de Agregação Disponíveis -- `sum`: Total of all values. -- `count`: Number of values. -- `min`: Minimum value. -- `max`: Maximum value. -- `first`: First value in the period. -- `last`: Last value in the period. +- `sum`: Total de todos os valores. +- `count`: Número de valores. +- `min`: Valor mínimo. +- `max`: Valor máximo. +- `first`: Primeiro valor no período. +- `last`: Último valor no período. #### Exemplo de Query de Agregações @@ -93,25 +93,25 @@ Aggregation entities are automatically calculated on the basis of the specified } ``` -[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. +[Leia mais](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) sobre Séries Temporais e Agregações. 
## Erros não-fatais -Erros de indexação em subgraphs já sincronizados, por si próprios, farão que o subgraph falhe e pare de sincronizar. Os subgraphs podem, de outra forma, ser configurados a continuar a sincronizar na presença de erros, ao ignorar as mudanças feitas pelo handler que provocaram o erro. Isto dá tempo aos autores de subgraphs para corrigir seus subgraphs enquanto queries continuam a ser servidos perante o bloco mais recente, porém os resultados podem ser inconsistentes devido ao bug que causou o erro. Note que alguns erros ainda são sempre fatais. Para ser não-fatais, os erros devem ser confirmados como determinísticos. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Permitir erros não fatais exige a configuração da seguinte feature flag no manifest do subgraph: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## Fontes de Dados de Arquivos em IPFS/Arweave -Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessar dados off-chain de forma robusta e extensível. As fontes de dados de arquivos apoiam o retiro de arquivos do IPFS e do Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Isto também abre as portas para indexar dados off-chain de forma determinística, além de potencialmente introduzir dados arbitrários com fonte em HTTP. @@ -153,15 +153,15 @@ Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessa Em vez de buscar arquivos "em fila" durante a execução do handler, isto introduz modelos que podem ser colocados como novas fontes de dados para um identificador de arquivos. Estas novas fontes de dados pegam os arquivos e tentam novamente caso não obtenham êxito; quando o arquivo é encontrado, executam um handler dedicado. 
-This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources. +Isso é semelhante aos [modelos existentes de fonte de dados](/developing/creating-a-subgraph/#data-source-templates), usados para criar dinamicamente novas fontes de dados baseados em chain. -> This replaces the existing `ipfs.cat` API +> Isto substitui a API `ipfs.cat` existente ### Guia de atualização -#### Update `graph-ts` and `graph-cli` +#### Atualização de `graph-ts` e `graph-cli` -File data sources requires graph-ts >=0.29.0 and graph-cli >=0.33.1 +O recurso de fontes de dados de arquivos exige o graph-ts na versão acima de 0.29.0 e o graph-cli acima de 0.33.1 #### Adicionar um novo tipo de entidade que será atualizado quando os arquivos forem encontrados @@ -210,9 +210,9 @@ type TokenMetadata @entity { Se o relacionamento for perfeitamente proporcional entre a entidade parente e a entidade de fontes de dados de arquivos resultante, é mais simples ligar a entidade parente a uma entidade de arquivos resultante, com a CID IPFS como o assunto de busca. Se tiver dificuldades em modelar suas novas entidades baseadas em arquivos, pergunte no Discord! -> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities. +> É possível usar [filtros aninhados](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) para filtrar entidades parentes, com base nestas entidades aninhadas. -#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave` +#### Adicione um novo modelo de fonte de dados com `kind: file/ipfs` ou `kind: file/arweave` Esta é a fonte de dados que será gerada quando um arquivo de interesse for identificado. 
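Um esboço de query com os filtros aninhados mencionados acima, para filtrar a entidade parente pelos valores da entidade baseada em arquivo — os nomes `metadata` e `name` são hipotéticos (dependem de como o seu schema liga `Token` a `TokenMetadata`):

```graphql
{
  tokens(where: { metadata_: { name_contains: "Witch" } }) {
    id
    tokenURI
  }
}
```

O sufixo `_` em `metadata_` é o que ativa o filtro sobre a entidade aninhada.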
@@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -232,15 +232,15 @@ templates: file: ./abis/Token.json ``` -> Currently `abis` are required, though it is not possible to call contracts from within file data sources +> Atualmente é obrigatório usar `abis`, mas não é possível chamar contratos de dentro de fontes de dados de arquivos -The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details. +A fonte de dados de arquivos deve mencionar, especificamente, todos os tipos de entidades com os quais ela interagirá sob `entities`. Veja as [limitações](#limitations) para mais detalhes. #### Criar um novo handler para processar arquivos -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)). +Este handler deve aceitar um parâmetro `Bytes`, que consistirá dos conteúdos do arquivo; quando encontrado, este poderá ser acessado. Isto costuma ser um arquivo JSON, que pode ser processado com helpers `graph-ts` ([documentação](/subgraphs/developing/creating/graph-ts/api/#json-api)). 
-The CID of the file as a readable string can be accessed via the `dataSource` as follows: +A CID do arquivo como um string legível pode ser acessada através do `dataSource` a seguir: ```typescript const cid = dataSource.stringParam() @@ -277,12 +277,12 @@ export function handleMetadata(content: Bytes): void { Agora pode criar fontes de dados de arquivos durante a execução de handlers baseados em chain: -- Import the template from the auto-generated `templates` -- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave +- Importe o modelo do `templates` gerado automaticamente +- chame o `TemplateName.create(cid: string)` de dentro de um mapeamento, onde o cid é um identificador de conteúdo válido para IPFS ou Arweave -For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). +Para o IPFS, o Graph Node apoia [identificadores de conteúdo v0 e v1](https://docs.ipfs.tech/concepts/content-addressing/) e identificadores com diretórios (por ex. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). -For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). 
+Para o Arweave, a partir da versão 0.33.0, o Graph Node pode buscar arquivos armazenados no Arweave com base no seu [ID de transação](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) de um gateway Arweave ([exemplo de arquivo](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). O Arweave apoia transações enviadas via Irys (antigo Bundlr), e o Graph Node também pode solicitar arquivos com base em [manifests do Irys](https://docs.irys.xyz/overview/gateways#indexing). Exemplo: @@ -290,7 +290,7 @@ Exemplo: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Este exemplo de código é para um subgraph do Crypto Coven. O hash ipfs acima é um diretório com metadados de tokens para todos os NFTs do Crypto Coven. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Isto cria um caminho aos metadados para um único NFT do Crypto Coven. Ele concatena o diretório com "/" + nome do arquivo + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -315,25 +315,25 @@ export function handleTransfer(event: TransferEvent): void { Isto criará uma fonte de dados de arquivos, que avaliará o endpoint de IPFS ou Arweave configurado do Graph Node, e tentará novamente caso não achá-lo. Com o arquivo localizado, o handler da fonte de dados de arquivos será executado. 
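Depois que o handler da fonte de dados de arquivos for executado, um query como o esboço abaixo (assumindo, como no código acima, que `ipfsURI` aponta para a entidade `TokenMetadata` cuja ID é a CID IPFS) retornaria cada token com a referência aos seus metadados:

```graphql
{
  tokens(first: 5) {
    id
    tokenURI
    ipfsURI {
      # a CID IPFS serve como ID da entidade TokenMetadata resultante
      id
    }
  }
}
```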
-This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. +Este exemplo usa a CID como a consulta entre a entidade parente `Token` e a entidade `TokenMetadata` resultante. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Parabéns, você está a usar fontes de dados de arquivos! -#### Como lançar os seus Subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitações -Handlers e entidades de fontes de dados de arquivos são isolados de outras entidades de subgraph, o que garante que sejam determinísticos quando executados e que não haja contaminação de fontes de dados baseadas em chain. Especificamente: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entidades criadas por Fontes de Dados de Arquivos são imutáveis, e não podem ser atualizadas - Handlers de Fontes de Dados de Arquivos não podem acessar entidades de outras fontes de dados de arquivos - Entidades associadas com Fontes de Dados de Arquivos não podem ser acessadas por handlers baseados em chain -> Enquanto esta limitação pode não ser problemática para a maioria dos casos de uso, ela pode deixar alguns mais complexos. Se houver qualquer problema neste processo, por favor dê um alô via Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! 
Além disto, não é possível criar fontes de dados de uma fonte de dado de arquivos, seja uma on-chain ou outra fonte de dados de arquivos. Esta restrição poderá ser retirada no futuro. @@ -341,41 +341,41 @@ Além disto, não é possível criar fontes de dados de uma fonte de dado de arq Caso ligue metadados de NFTs a tokens correspondentes, use o hash IPFS destes para referenciar uma entidade de Metadados da entidade do Token. Salve a entidade de Metadados a usar o hash IPFS como ID. -You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler. +Você pode usar o [contexto de DataSource](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) ao criar Fontes de Dados de Arquivo (FDS), para passar informações extras que estarão disponíveis para o handler de FDS. -If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity. +Caso tenha entidades a ser atualizadas várias vezes, crie entidades únicas baseadas em arquivos utilizando o hash IPFS e o ID da entidade, e as referencie com um campo derivado na entidade baseada na chain. > Estamos a melhorar a recomendação acima, para que os queries retornem apenas a versão "mais recente" #### Problemas conhecidos -File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI. +Fontes de dados de arquivo atualmente requerem ABIs, mesmo que estas não sejam usadas ([problema](https://github.com/graphprotocol/graph-cli/issues/961)). Por enquanto, vale a pena adicionar qualquer ABI como alternativa. 
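A alternativa mencionada acima ("adicionar qualquer ABI") significa apenas preencher o campo `abis` do modelo de fonte de dados de arquivos com uma ABI já existente no projeto, mesmo que ela nunca seja chamada — um esboço baseado no modelo `TokenMetadata` deste guia:

```yaml
templates:
  - name: TokenMetadata
    kind: file/ipfs
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleMetadata
      entities:
        - TokenMetadata
      abis:
        # qualquer ABI existente satisfaz a validação, mesmo sem uso
        - name: Token
          file: ./abis/Token.json
```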
-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file. +Handlers para Fontes de Dados de Arquivos não podem estar em arquivos que importam ligações de contrato `eth_call`, o que causa falhas com "unknown import: `ethereum::ethereum.call` has not been defined" ([problema no GitHub](https://github.com/graphprotocol/graph-node/issues/4309)). A solução atual é criar handlers de fontes de dados de arquivos num arquivo dedicado. #### Exemplos -[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) +[Migração de Subgraph do Crypto Coven](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor) #### Referências -[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721) +[Fontes de Dados de Arquivos GIP](https://forum.thegraph.com/t/gip-file-data-sources/2721) ## Filtros de Argumentos Indexados / Filtros de Tópicos -> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` +> **Obrigatório**: [SpecVersion](#specversion-releases) >= `1.2.0` -Filtros de tópico, também conhecidos como filtros de argumentos indexados, permitem que os utilizadores filtrem eventos de blockchain com alta precisão, em base nos valores dos seus argumentos indexados. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- Estes filtros ajudam a isolar eventos específicos de interesse do fluxo vasto de eventos na blockchain, o que permite que subgraphs operem com mais eficácia ao focarem apenas em dados relevantes. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- Isto serve para criar subgraphs pessoais que rastreiam endereços específicos e as suas interações com vários contratos inteligentes na blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### Como Filtros de Tópicos Funcionam -Quando um contrato inteligente emite um evento, quaisquer argumentos que forem marcados como indexados podem ser usados como filtros no manifest de um subgraph. Isto permite que o subgraph preste atenção seletiva para eventos que correspondam a estes argumentos indexados. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- O primeiro argumento indexado do evento corresponde ao `topic1`, o segundo ao `topic2`, e por aí vai até o `topic3`, já que a Máquina Virtual de Ethereum (EVM) só permite até três argumentos indexados por evento. ```solidity // SPDX-License-Identifier: MIT @@ -395,13 +395,13 @@ contract Token { Neste exemplo: -- The `Transfer` event is used to log transactions of tokens between addresses. -- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. -- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. +- O evento `Transfer` é usado para gravar transações de tokens entre endereços. 
+- Os parâmetros `from` e `to` são indexados, o que permite que ouvintes de eventos filtrem e monitorizem transferências que envolvem endereços específicos. +- A função `transfer` é uma representação simples de uma ação de transferência de token, e emite o evento Transfer sempre que é chamada. #### Configuração em Subgraphs -Filtros de tópicos são definidos diretamente na configuração de handlers de eventos no manifest do subgraph. Veja como eles são configurados: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -414,7 +414,7 @@ eventHandlers: Neste cenário: -- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- `topic1` corresponde ao primeiro argumento indexado do evento, `topic2` ao segundo, e `topic3` ao terceiro. - Cada tópico pode ter um ou mais valores, e um evento só é processado se corresponder a um dos valores em cada tópico especificado. #### Lógica de Filtro @@ -434,9 +434,9 @@ eventHandlers: Nesta configuração: -- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. -- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- `topic1` é configurado para filtrar eventos `Transfer` onde `0xAddressA` é o remetente. +- `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` é o destinatário. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Exemplo 2: Como Rastrear Transações em Qualquer Direção Entre Dois ou Mais Endereços @@ -450,31 +450,31 @@ eventHandlers: Nesta configuração: -- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -O subgraph indexará transações que ocorrerem em qualquer direção entre vários endereços, o que permite a monitoria compreensiva de interações que envolverem todos os endereços. +- O `topic1` é configurado para filtrar eventos `Transfer` onde `0xAddressA`, `0xAddressB` ou `0xAddressC` é o remetente. +- `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` ou `0xAddressC` é o destinatário. +- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses. -## Declared eth_call +## eth_call declarada -> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. +> Nota: Esta é uma função experimental que atualmente não está disponível numa versão estável do Graph Node, e só pode ser usada no Subgraph Studio ou no seu node auto-hospedado. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. Esta ferramenta faz o seguinte: -- Aumenta muito o desempenho do retiro de dados da blockchain Ethereum ao reduzir o tempo total para múltiplas chamadas e otimizar a eficácia geral do subgraph. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Permite retiros de dados mais rápidos, o que resulta em respostas de query aceleradas e uma experiência de utilizador melhorada.
- Reduz tempos de espera para aplicativos que precisam agregar dados de várias chamadas no Ethereum, o que aumenta a eficácia do processo de retiro de dados. -### Key Concepts +### Conceitos Importantes -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- `eth_calls` declarativas: Chamadas no Ethereum definidas para serem executadas em paralelo, e não em sequência. - Execução Paralela: Ao invés de esperar o término de uma chamada para começar a próxima, várias chamadas podem ser iniciadas simultaneamente. - Eficácia de Tempo: O total de tempo levado para todas as chamadas muda da soma dos tempos de chamadas individuais (sequencial) para o tempo levado para a chamada mais longa (paralelo). -#### Scenario without Declarative `eth_calls` +#### Cenário sem `eth_calls` Declarativas -Imagina que tens um subgraph que precisa fazer três chamadas no Ethereum para retirar dados sobre as transações, o saldo e as posses de token de um utilizador. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Tradicionalmente, estas chamadas podem ser realizadas em sequência: @@ -484,7 +484,7 @@ Tradicionalmente, estas chamadas podem ser realizadas em sequência: Total de tempo: 3 + 2 + 4 = 9 segundos -#### Scenario with Declarative `eth_calls` +#### Cenário com `eth_calls` Declarativas Com esta ferramenta, é possível declarar que estas chamadas sejam executadas em paralelo: @@ -496,17 +496,17 @@ Como estas chamadas são executadas em paralelo, o total de tempo é igual ao te Total de tempo = max (3, 2, 4) = 4 segundos -#### How it Works +#### Como Funciona -1. Definição Declarativa: No manifest do subgraph, as chamadas no Ethereum são declaradas de maneira que indique que elas possam ser executadas em paralelo. +1. 
Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Motor de Execução Paralela: O motor de execução do Graph Node reconhece estas declarações e executa as chamadas simultaneamente. -3. Agregação de Resultado: Quando todas as chamadas forem completadas, os resultados são agregados e usados pelo subgraph para mais processos. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. -#### Example Configuration in Subgraph Manifest +#### Exemplo de Configuração no Manifest do Subgraph -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +`eth_calls` declaradas podem acessar o `event.address` do evento subjacente, assim como todos os `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -519,12 +519,12 @@ calls: Detalhes para o exemplo acima: -- `global0X128` is the declared `eth_call`. -- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. -- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` é a `eth_call` declarada. +- O texto antes dos dois pontos (`global0X128`) é o rótulo para esta `eth_call` que é usado ao registar erros. +- O texto (`Pool[event.address].feeGrowthGlobal0X128()`) é a `eth_call` a ser executada, que está na forma do `Contract[address].function(arguments)` +- O `address` e o `arguments` podem ser substituídos por variáveis a serem disponibilizadas quando o handler for executado. 
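The benefit described in the scenarios above amounts to replacing a sum with a maximum: sequential execution costs the total of all call durations, while the declared (parallel) form is bounded by the slowest call. A plain TypeScript sketch of that arithmetic, using the hypothetical 3s/2s/4s durations from the scenario:

```typescript
// Hypothetical per-call durations in seconds, from the scenario above.
const callDurations: number[] = [3, 2, 4];

// Sequential: each eth_call waits for the previous one to finish.
const sequentialTotal = callDurations.reduce((sum, d) => sum + d, 0);

// Parallel (declared eth_calls): total is bounded by the slowest call.
const parallelTotal = Math.max(...callDurations);

console.log(sequentialTotal, parallelTotal); // 9 4
```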
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params`: ```yaml calls: @@ -533,24 +533,24 @@ calls: ### Como Enxertar em Subgraphs Existentes -> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). +> **Observação:** não é recomendado usar enxertos quando começar a atualização para a The Graph Network. Aprenda mais [aqui](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # ID do subgraph base - block: 7345624 # Número do bloco + base: Qm...
# Subgraph ID of base Subgraph + block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Como o enxerto copia em vez de indexar dados base, dirigir o subgraph para o bloco desejado desta maneira é mais rápido que indexar do começo, mesmo que a cópia inicial dos dados ainda possa levar várias horas para subgraphs muito grandes. Enquanto o subgraph enxertado é inicializado, o Graph Node gravará informações sobre os tipos de entidade que já foram copiados. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema do subgraph base, mas é apenas compatível com ele. 
Ele deve ser um schema válido no seu próprio mérito, mas pode desviar do schema do subgraph base nas seguintes maneiras: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Ele adiciona ou remove tipos de entidade - Ele retira atributos de tipos de entidade @@ -560,4 +560,4 @@ O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema - Ele adiciona ou remove interfaces - Ele muda os tipos de entidades para qual implementar uma interface -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx index e7d972a9d0bf..f6be7a46ee9c 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -1,16 +1,16 @@ --- -title: Writing AssemblyScript Mappings +title: Escrita de Mapeamentos de AssemblyScript --- ## Visão geral -The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. 
+Os mapeamentos tomam dados de uma fonte particular e os transformam em entidades que são definidas dentro do seu schema. São escritos em um subconjunto do [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) chamado [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki), que pode ser compilado para WASM ([WebAssembly](https://webassembly.org/)). O AssemblyScript é mais rígido que o TypeScript normal, mas rende uma sintaxe familiar. ## Como Escrever Mapeamentos -For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. +Para cada handler de evento definido no `subgraph.yaml` sob o `mapping.eventHandlers`, crie uma função exportada de mesmo nome. Cada handler deve aceitar um único parâmetro chamado `event` com um tipo a corresponder ao nome do evento a ser lidado. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -37,30 +37,30 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id.toHex())`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id.toHex()`. +O primeiro handler toma um evento `NewGravatar` e cria uma nova entidade `Gravatar` com o `new Gravatar(event.params.id.toHex())`, e assim popula os campos da entidade com os parâmetros de evento correspondentes.
Esta instância da entidade é representada pela variável `gravatar`, com um valor de id de `event.params.id.toHex()`. -The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on-demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`. +O segundo handler tenta carregar o `Gravatar` existente do armazenamento do Graph Node. Se ele ainda não existe, ele é criado por demanda. A entidade é então atualizada para corresponder aos novos parâmetros de evento, antes de ser devolvida ao armazenamento com `gravatar.save()`. ### IDs Recomendadas para Criar Novas Entidades -It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities. +Recomendamos muito utilizar `Bytes` como o tipo para campos `id`, e só usar o `String` para atributos que realmente contenham texto legível para humanos, como o nome de um token. Abaixo estão alguns valores recomendados de `id` para considerar ao criar novas entidades. - `transfer.id = event.transaction.hash` - `let id = event.transaction.hash.concatI32(event.logIndex.toI32())` -- For entities that store aggregated data, for e.g, daily trade volumes, the `id` usually contains the day number. Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like +- Para entidades que armazenam dados agregados como, por exemplo, volumes diários de trading, a `id` costuma conter o número do dia. Aqui, usar `Bytes` como a `id` é benéfico. Determinar a `id` pareceria com: ```typescript let dayID = event.block.timestamp.toI32() / 86400 let id = Bytes.fromI32(dayID) ``` -- Convert constant addresses to `Bytes`. +- Converta endereços constantes em `Bytes`.
`const id = Bytes.fromHexString('0xdead...beef')` -There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`. +Há uma [Biblioteca do Graph Typescript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts), com utilidades para interagir com o armazenamento do Graph Node e conveniências para lidar com entidades e dados de contratos inteligentes. Ela pode ser importada ao `mapping.ts` do `@graphprotocol/graph-ts`. ### Gestão de entidades com IDs idênticas @@ -72,7 +72,7 @@ Se nenhum valor for inserido para um campo na nova entidade com a mesma ID, o ca ## Geração de Código -Para tornar mais fácil e seguro a tipos o trabalho com contratos inteligentes, eventos e entidades, o Graph CLI pode gerar tipos de AssemblyScript a partir do schema GraphQL do subgraph e das ABIs de contratos incluídas nas fontes de dados. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
Isto é feito com @@ -80,7 +80,7 @@ Isto é feito com graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. 
All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. 
diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..fe878f01f295 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Uso For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx index c9069e51a627..986540229abe 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,16 +2,16 @@ title: API AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: -- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- A [biblioteca do Graph TypeScript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from Subgraph files by `graph codegen` -You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). +Você também pode adicionar outras bibliotecas como dependências, contanto que sejam compatíveis com [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). -Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). +Já que os mapeamentos de linguagem são escritos em AssemblyScript, vale a pena consultar os recursos padrão de linguagem e biblioteca da [wiki do AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki). ## Referência da API @@ -27,18 +27,18 @@ A biblioteca `@graphprotocol/graph-ts` fornece as seguintes APIs: ### Versões -No manifest do subgraph, `apiVersion` especifica a versão da API de mapeamento, executada pelo Graph Node para um subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Versão | Notas de atualização | -| :-: | --- | -| 0.0.9 | Adiciona novas funções de host [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adiciona validação para existência de campos no schema ao salvar uma entidade. | -| 0.0.7 | Classes `TransactionReceipt` e `Log` adicionadas aos tipos do EthereumCampo
Campo `receipt` adicionado ao objeto Ethereum Event | -| 0.0.6 | Campo `nonce` adicionado ao objeto Ethereum TransactionCampo
`baseFeePerGas` adicionado ao objeto Ethereum Block | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Campo `functionSignature` adicionado ao objeto Ethereum SmartContractCall | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Campo `input` adicionado ao objeto Ethereum Transaction | +| Versão | Notas de atualização | +| :----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adiciona novas funções de host [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adiciona validação para existência de campos no schema ao salvar uma entidade. | +| 0.0.7 | Classes `TransactionReceipt` e `Log` adicionadas aos tipos do Ethereum
Campo `receipt` adicionado ao objeto Ethereum Event | +| 0.0.6 | Campo `nonce` adicionado ao objeto Ethereum Transaction
Campo 
`baseFeePerGas` adicionado ao objeto Ethereum Block | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Campo `functionSignature` adicionado ao objeto Ethereum SmartContractCall | +| 0.0.3 | Campo `from` adicionado ao objeto Ethereum Call<br/>
`ethereum.call.address` renomeado para `ethereum.call.to` | +| 0.0.2 | Campo `input` adicionado ao objeto Ethereum Transaction | ### Tipos Embutidos @@ -166,7 +166,8 @@ _Matemática_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -O `TypedMap` pode servir para armazenar pares de chave e valor (key e value ). Confira [este exemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +O `TypedMap` pode servir para armazenar pares de chave e valor (key e value). Confira [este exemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). A classe `TypedMap` tem a seguinte API: @@ -223,7 +224,7 @@ import { store } from '@graphprotocol/graph-ts' A API `store` permite carregar, salvar e remover entidades do/para o armazenamento do Graph Node. -As entidades escritas no armazenamento mapeam um-por-um com os tipos de `@entity` definidos no schema GraphQL do subgraph. Para trabalhar com estas entidades de forma conveniente, o comando `graph codegen` fornecido pelo [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) gera classes de entidades, que são subclasses do tipo embutido `Entity`, com getters e setters de propriedade para os campos no schema e métodos para carregar e salvar estas entidades. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### Como criar entidades @@ -254,9 +255,9 @@ export function handleTransfer(event: TransferEvent): void { Quando um evento `Transfer` é encontrado durante o processamento da chain, ele é passado para o handler de evento `handleTransfer` com o tipo `Transfer` gerado (apelidado de `TransferEvent` aqui, para evitar confusões com o tipo de entidade). Este tipo permite o acesso a dados como a transação parente do evento e seus parâmetros. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. +Cada entidade deve ter um identificador exclusivo para evitar colisões com outras entidades. É bastante comum que parâmetros de evento incluam um identificador exclusivo que pode ser usado. -> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +> Nota: Usar o hash de transação como ID supõe que nenhum outro evento na mesma transação cria entidades com este hash como o ID. #### Como carregar entidades a partir do armazenamento @@ -272,18 +273,18 @@ if (transfer == null) { // Use a entidade Transfer como antes ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. +Como a entidade pode ainda não existir no armazenamento, o método `load` retorna um valor de tipo `Transfer | null`. Portanto, é bom prestar atenção ao caso `null` antes de usar o valor. -> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Nota: Só é necessário carregar entidades se as mudanças feitas no mapeamento dependem dos dados anteriores de uma entidade. Veja a próxima seção para ver as duas maneiras de atualizar entidades existentes. 
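To make the uniqueness requirement above concrete, here is a minimal sketch in plain TypeScript (not graph-ts; the helper name `makeEventId` is an assumption for illustration) of a common pattern: combining the transaction hash with the event's log index, so that several events emitted by the same transaction still get distinct ids.

```typescript
// Hypothetical helper (plain TypeScript, not part of graph-ts): derive an
// entity id from the transaction hash plus the event's log index, so that
// two events emitted by the same transaction never collide.
function makeEventId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex.toString()}`;
}

// Two events from the same transaction produce different ids.
console.log(makeEventId("0xabc123", 0)); // "0xabc123-0"
console.log(makeEventId("0xabc123", 1)); // "0xabc123-1"
```

In a real mapping the same idea is usually expressed over the event's `Bytes` values rather than strings, but the uniqueness argument is identical.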
#### Como consultar entidades criadas dentro de um bloco Desde o `graph-node` v0.31.0, o `@graphprotocol/graph-ts` v0.30.0 e o `@graphprotocol/graph-cli v0.49.0`, o método `loadInBlock` está disponível em todos os tipos de entidade. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. +A API do armazenamento facilita a recuperação de entidades que já foram criadas ou atualizadas no bloco atual. Uma situação típica para isso é que um manipulador cria uma transação a partir de algum evento em cadeia, e um handler posterior quer acessar esta transação — se ela existir. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // ou como a ID for construída @@ -380,11 +381,11 @@ A API do Ethereum fornece acesso a contratos inteligentes, variáveis de estado #### Apoio para Tipos no Ethereum -Assim como em entidades, o `graph codegen` gera classes para todos os contratos inteligentes e eventos usados em um subgraph. Para isto, as ABIs dos contratos devem ser parte da fonte de dados no manifest do subgraph. 
Tipicamente, os arquivos da ABI são armazenados em uma pasta `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -Com as classes geradas, conversões entre tipos no Ethereum e os [tipos embutidos](#built-in-types) acontecem em segundo plano para que os autores de subgraphs não precisem se preocupar com elas. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Veja um exemplo a seguir. Considerando um schema de subgraph como +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +484,7 @@ class Log { #### Acesso ao Estado do Contrato Inteligente -O código gerado pelo `graph codegen` também inclui classes para os contratos inteligentes usados no subgraph. Estes servem para acessar variáveis de estado público e funções de chamada do contrato no bloco atual. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. É comum acessar o contrato de qual origina um evento. Isto é feito com o seguinte código: @@ -506,13 +507,13 @@ O `Transfer` é apelidado de `TransferEvent` aqui para evitar confusões de nome Enquanto o `ERC20Contract` no Ethereum tiver uma função pública de apenas-leitura chamada `symbol`, ele pode ser chamado com o `.symbol()`. Para variáveis de estado público, um método com o mesmo nome é criado automaticamente. -Qualquer outro contrato que seja parte do subgraph pode ser importado do código gerado e ligado a um endereço válido. 
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Como Lidar com Chamadas Revertidas -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. +Se houver reversão dos métodos somente-leitura do seu contrato, cuide disso chamando o método do contrato gerado prefixado com `try_`. -- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +- Por exemplo, o contrato da Gravity expõe o método `gravatarToOwner`. Este código poderia manusear uma reversão nesse método: ```typescript let gravity = Gravity.bind(event.address) @@ -524,7 +525,7 @@ if (callResult.reverted) { } ``` -> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. +> Observe que um Graph Node conectado a um cliente Geth ou Infura pode não detetar todas as reversões; se depender disto, recomendamos usar um Graph Node conectado a um cliente Parity. #### ABI de Codificação/Decodificação @@ -582,7 +583,7 @@ let isContract = ethereum.hasCode(eoa).inner // retorna false import { log } from '@graphprotocol/graph-ts' ``` -A API `log` permite que os subgraphs gravem informações à saída padrão do Graph Node, assim como ao Graph Explorer. Mensagens podem ser gravadas com níveis diferentes de log. É fornecida uma sintaxe básica de formatação de strings para compor mensagens de log do argumento. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. 
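The `{}` substitution just described can be modeled with a short sketch in plain TypeScript. This is an assumed illustration of the documented semantics, not the graph-ts implementation:

```typescript
// Model of the log format string (assumption for illustration): each `{}`
// placeholder is replaced by the next value from the args array, in order.
function formatLog(fmt: string, args: string[]): string {
  let i = 0;
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : "{}"));
}

console.log(formatLog("Transfer from {} to {}", ["0xaaa", "0xbbb"]));
// "Transfer from 0xaaa to 0xbbb"
```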
A API `log` inclui as seguintes funções: @@ -590,7 +591,7 @@ A API `log` inclui as seguintes funções: - `log.debug(fmt: string, args: Array): void` - loga uma mensagem de debug. - `log.info(fmt: string, args: Array): void` - loga uma mensagem informativa. - `log.warning(fmt: string, args: Array): void` - loga um aviso. - `log.error(fmt: string, args: Array): void` - loga uma mensagem de erro. -- `log.critical(fmt: string, args: Array): void` – loga uma mensagem crítica _e_ encerra o subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. A API `log` recebe uma string de formato e um arranjo de valores de string. Ela então substitui os temporários com os valores de strings do arranjo. O primeiro `{}` temporário é substituído pelo primeiro valor no arranjo, o segundo `{}` temporário é substituído pelo segundo valor, e assim por diante. @@ -672,7 +673,7 @@ export function handleSomeEvent(event: SomeEvent): void { import { ipfs } from '@graphprotocol/graph-ts' ``` -Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. +Contratos inteligentes, ocasionalmente, ancoram arquivos IPFS on-chain. Assim, os mapeamentos obtêm os hashes IPFS do contrato e leem os arquivos correspondentes do IPFS. Os dados dos arquivos serão retornados como `Bytes`, o que costuma exigir mais processamento; por ex., com a API `json` documentada mais abaixo nesta página. Considerando um hash ou local IPFS, um arquivo do IPFS é lido da seguinte maneira: @@ -721,7 +722,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) O único flag atualmente apoiado é o `json`, que deve ser passado ao `ipfs.map`. Com o flag `json`, o arquivo IPFS deve consistir de uma série de valores JSON, com um valor por linha.
Chamar `ipfs.map`, irá ler cada linha no arquivo, desserializá-lo em um `JSONValue`, e chamar o callback para cada linha. O callback pode então armazenar dados do `JSONValue` com operações de entidade. As mudanças na entidade só serão armazenadas quando o handler que chamou o `ipfs.map` concluir com sucesso; enquanto isso, elas ficam na memória, e o tamanho do arquivo que o `ipfs.map` pode processar é então limitado. -Em caso de sucesso, o `ipfs.map` retorna `void`. Se qualquer invocação do callback causar um erro, o handler que invocou o `ipfs.map` é abortado, e o subgraph é marcado como falho. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### API de Criptografia @@ -770,44 +771,44 @@ Quando o tipo de um valor é confirmado, ele pode ser convertido num [tipo embut ### Referência de Conversões de Tipos -| Fonte(s) | Destino | Função de conversão | -| -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | nenhum | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() ou s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | nenhum | -| Bytes (assinado) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (não assinado) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() ou s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | nenhum | -| int32 | i32 | nenhum | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | nenhum | -| int64 - int256 | BigInt | nenhum | -| uint32 - uint256 | BigInt | nenhum | -| JSON | boolean | s.toBool() 
| -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Fonte(s) | Destino | Função de conversão | +| ------------------------ | -------------------- | ------------------------------ | +| Address | Bytes | nenhum | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() ou s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | nenhum | +| Bytes (assinado) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (não assinado) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() ou s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | nenhum | +| int32 | i32 | nenhum | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | nenhum | +| int64 - int256 | BigInt | nenhum | +| uint32 - uint256 | BigInt | nenhum | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String 
(hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Metadados de Fontes de Dados @@ -836,7 +837,7 @@ A classe base `Entity` e a subclasse `DataSourceContext` têm helpers para deter ### DataSourceContext no Manifest -A seção `context` dentro do `dataSources` lhe permite definir pares key-value acessíveis dentro dos seus mapeamentos de subgraph. Os tipos disponíveis são `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, e `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Aqui está um exemplo de YAML que ilustra o uso de vários tipos na seção `context`: @@ -887,4 +888,4 @@ dataSources: - `List`: Especifica uma lista de itens. Cada item deve especificar o seu tipo e dados. - `BigInt`: Especifica um valor integral largo. É necessário citar este devido ao seu grande tamanho. -Este contexto, então, pode ser acessado nos seus arquivos de mapeamento de subgraph, o que resulta em subgraphs mais dinâmicos e configuráveis. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx index 2f5f5b63c40a..32ea7ff586f9 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Problemas Comuns no AssemblyScript --- -É comum encontrar certos problemas no [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) durante o desenvolvimento do subgraph. Eles variam em dificuldade de debug, mas vale ter consciência deles. 
A seguir, uma lista não exaustiva destes problemas: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They vary in debugging difficulty, but being aware of them may help. The following is a non-exhaustive list of these issues: -- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. +- Variáveis de classe `Private` não são aplicadas no [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). Não há como evitar que estas variáveis sejam alteradas diretamente a partir do objeto de classe. - O escopo não é herdado em [funções de closure](https://www.assemblyscript.org/status.html#on-closures), por ex., não é possível usar variáveis declaradas fora de funções de closure. Há uma explicação [neste vídeo](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx index ca436b6eef1b..1c4bb49525e5 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/install-the-cli.mdx @@ -2,39 +2,39 @@ title: Como instalar o Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Visão geral -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Como Começar ### Como instalar o Graph CLI -The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +A CLI do The Graph é escrita em TypeScript, e é necessário ter o `node`, e `npm` ou `yarn`, instalados para usá-la. Verifique a versão [mais recente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) da CLI. 
Execute um dos seguintes comandos na sua máquina local: -#### Using [npm](https://www.npmjs.com/) +#### Uso do [npm](https://www.npmjs.com/) ```bash npm install -g @graphprotocol/graph-cli@latest ``` -#### Using [yarn](https://yarnpkg.com/) +#### Uso do [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Crie um Subgraph ### De um Contrato Existente -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -45,75 +45,61 @@ graph init \ [] ``` -- The command tries to retrieve the contract ABI from Etherscan. +- O comando tenta obter a ABI do contrato no Etherscan. - - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + - A CLI do The Graph depende de um endpoint público de RPC. Enquanto falhas ocasionais são de se esperar, basta tentar de novo para resolver. Se as falhas persistirem, considere usar uma ABI local. -- If any of the optional arguments are missing, it guides you through an interactive form. +- Se faltar algum dos argumentos opcionais, você será guiado para um formulário interativo. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/).
It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### De um Exemplo de Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -### Add New `dataSources` to an Existing Subgraph +### Como Adicionar Novos `dataSources` para um Subgraph Existente -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. 
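To make the shape of a `dataSource` concrete, the sketch below shows a hedged example of a manifest entry, loosely modeled on the Gravity example mentioned above; the address (zeroed here), the start block, and the handler names are illustrative placeholders, not values from this document:

```yaml
# Hypothetical dataSource entry in subgraph.yaml; address, startBlock and
# names are placeholders for illustration.
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000000'
      abi: Gravity
      startBlock: 1000000
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - Gravatar
      abis:
        - name: Gravity
          file: ./abis/Gravity.json
      eventHandlers:
        - event: NewGravatar(uint256,address,string,string)
          handler: handleNewGravatar
      file: ./src/mapping.ts
```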
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] -Options: +Opções: - --abi Path to the contract ABI (default: download from Etherscan) - --contract-name Name of the contract (default: Contract) - --merge-entities Whether to merge entities with the same name (default: false) - --network-file Networks config file path (default: "./networks.json") + --abi Caminho à ABI do contrato (padrão: baixar do Etherscan) + --contract-name Nome do contrato (padrão: Contract) + --merge-entities Se fundir ou não entidades com o mesmo nome (padrão: false) + --network-file Caminho ao arquivo de configuração das redes (padrão: "./networks.json") ``` #### Especificações -The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. +O comando `graph add` pegará a ABI do Etherscan (a não ser que um local de ABI seja especificado com a opção --abi), e criará um novo `dataSource` da mesma maneira que o comando `graph init` cria um `dataSource` `--from-contract`, assim atualizando o schema e os mapeamentos de acordo. -- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +- A opção `--merge entities` identifica como o programador gostaria de lidar com conflitos de nome em `entity` e `event`: - - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + - Se for `true`: o novo `dataSource` deve usar `eventHandlers` e `entities` existentes. - - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + - Se for `false`: um novo handler de `entity` e `event` deve ser criado com ${dataSourceName}{EventName}\`. -- The contract `address` will be written to the `networks.json` for the relevant network. 
+- O `address` (endereço de contrato) será escrito no `networks.json` para a rede relevante. -> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +> Observação: Quando usar a CLI interativa, após executar o `graph init` com êxito, você receberá uma solicitação para adicionar um novo `dataSource`. -### Getting The ABIs +### Como Obter as ABIs Os arquivos da ABI devem combinar com o(s) seu(s) contrato(s). Há algumas maneiras de obter estes arquivos: - Caso construa o seu próprio projeto, provavelmente terá acesso às suas ABIs mais recentes. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Versão | Notas de atualização | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). 
| -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Adicionado apoio a handlers de eventos com acesso a recibos de transação. | -| 0.0.4 | Adicionado apoio à gestão de recursos de subgraph. | +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx index db1f1f513082..61faf189ab9a 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/ql-schema.mdx @@ -1,28 +1,28 @@ --- -title: The Graph QL Schema +title: O Schema GraphQL --- ## Visão geral -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. -> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Nota: Se você nunca escreveu um schema em GraphQL, recomendamos que confira este manual sobre o sistema de tipos da GraphQL. Consulte a documentação sobre schemas GraphQL na seção sobre a [API da GraphQL](/subgraphs/querying/graphql-api/). 
-### Defining Entities +### Como Definir Entidades -Before defining entities, it is important to take a step back and think about how your data is structured and linked. +Antes de definir as entidades, é importante dar um passo atrás e pensar em como os seus dados são estruturados e ligados. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. -- It may be useful to imagine entities as "objects containing data", rather than as events or functions. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. -- Each type that should be an entity is required to be annotated with an `@entity` directive. -- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. - - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. - - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. Immutable entities are much faster to write and to query so they should be used whenever possible. +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. +- Pode ser bem útil imaginar entidades como "objetos que contém dados", e não como eventos ou funções. +- Você define os tipos de entidade em `schema.graphql`, e o Graph Node irá gerar campos de nível superior para queries de instâncias únicas e coleções desse tipo de entidade. 
+- Cada tipo feito para ser uma entidade precisa ser anotado com uma diretiva `@entity`.
+- Por padrão, as entidades são mutáveis, ou seja: os mapeamentos podem carregar as entidades existentes, modificá-las, e armazenar uma nova versão dessa entidade.
+  - A mutabilidade tem um preço, então, para tipos de entidade que nunca serão modificados, como as que contêm dados extraídos da chain sem alterações, recomendamos marcá-los como imutáveis com `@entity(immutable: true)`.
+  - Se as alterações acontecerem no mesmo bloco em que a entidade foi criada, então os mapeamentos podem fazer alterações em entidades imutáveis. Entidades imutáveis são muito mais rápidas de escrever e consultar em query, então elas devem ser usadas sempre que possível.

#### Bom Exemplo

-The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined.
+A entidade `Gravatar` abaixo é estruturada em torno de um objeto Gravatar, e é um bom exemplo de como pode ser definida uma entidade.

```graphql
type Gravatar @entity(immutable: true) {
@@ -36,7 +36,7 @@ type Gravatar @entity(immutable: true) {

#### Mau Exemplo

-The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1.
+As entidades `GravatarAccepted` e `GravatarDeclined` abaixo são baseadas em eventos. Não é recomendado mapear eventos ou chamadas de função a entidades numa relação de 1:1.

```graphql
type GravatarAccepted @entity {
@@ -56,32 +56,32 @@ type GravatarDeclined @entity {

#### Campos Opcionais e Obrigatórios

-Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error:
+Os campos da entidade podem ser definidos como obrigatórios ou opcionais. 
Os campos obrigatórios são indicados pelo `!` no schema. Se o campo for escalar, tentar armazenar a entidade causará um erro. Se o campo fizer referência a outra entidade, você receberá esse erro:

```
Null value resolved for non-null field 'name'
```

-Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`.
+Cada entidade deve ter um campo `id`, que deve ser do tipo `Bytes!` ou `String!`. Geralmente é melhor usar `Bytes!`, a não ser que o `id` tenha texto legível para humanos, já que entidades com ids `Bytes!` são mais rápidas de escrever e consultar do que aquelas com um `id` `String!`. O campo `id` serve como a chave primária, e deve ser único entre todas as entidades do mesmo tipo. Por razões históricas, o tipo `ID!` também é aceite, como um sinónimo de `String!`.

-For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`.
+Para alguns tipos de entidade, o `id` é construído das id's de duas outras entidades; isto é possível com o `concat`, por ex., `let id = left.id.concat(right.id)` para formar a id a partir das id's de `left` e `right`. 
Da mesma forma, para construir uma id a partir da id de uma entidade existente e um contador `count`, pode ser usado o `let id = left.id.concatI32(count)`. Isto garante que a concatenação produza id's únicas, desde que o comprimento de `left` seja o mesmo para todas essas entidades; por exemplo, porque o `left.id` é um `Address` (endereço).

### Tipos Embutidos de Escalar

#### Escalares Apoiados pelo GraphQL

-The following scalars are supported in the GraphQL API:
+Os seguintes escalares são apoiados na API da GraphQL:

-| Tipo | Descrição |
-| --- | --- |
-| `Bytes` | Arranjo de bytes, representado como string hexadecimal. Usado frequentemente por hashes e endereços no Ethereum. |
-| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
-| `Boolean` | Scalar for `boolean` values. |
-| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
-| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
-| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
-| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
-| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
+| Tipo | Descrição |
+| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `Bytes` | Arranjo de bytes, representado como string hexadecimal. 
Usado frequentemente por hashes e endereços no Ethereum. |
+| `String` | Escalar para valores `string`. Caracteres nulos não são suportados e serão removidos automaticamente. |
+| `Boolean` | Escalar para valores `boolean`. |
+| `Int` | A especificação da GraphQL define `Int` como um inteiro assinado de 32 bits. |
+| `Int8` | Um número inteiro assinado de 8 bytes, também conhecido como número inteiro assinado de 64 bits, pode armazenar valores de -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. É melhor usar isto para representar o `i64` do Ethereum. |
+| `BigInt` | Números inteiros grandes. Usados para os tipos `uint32`, `int64`, `uint64`, ..., `uint256` do Ethereum. Nota: Tudo abaixo de `uint32`, como `int32`, `uint24` ou `int8` é representado como `i32`. |
+| `BigDecimal` | Decimais de alta precisão `BigDecimal` representados como um significando e um expoente. O intervalo de expoentes é de -6143 até +6144. Arredondado para 34 dígitos significativos. |
+| `Timestamp` | É um valor `i64` em microssegundos. Usado frequentemente para campos `timestamp` para séries temporais e agregações. |

### Enums

@@ -95,9 +95,9 @@ enum TokenStatus {
}
```

-Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field:
+Quando o enum for definido no schema, pode usar a representação em string do valor do enum para definir um campo enum numa entidade. Por exemplo, pode definir o `tokenStatus` como `SecondOwner`, definindo primeiro a sua entidade e depois atribuindo o campo com `entity.tokenStatus = "SecondOwner"`. 
O exemplo abaixo demonstra como ficaria a entidade do Token com um campo enum:

-More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/).
+Para saber mais sobre como escrever enums, veja a [documentação do GraphQL](https://graphql.org/learn/schema/).

### Relacionamentos de Entidades

@@ -107,7 +107,7 @@ Relacionamentos são definidos em entidades como qualquer outro campo, sendo que

#### Relacionamentos Um-com-Um

-Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type:
+Defina um tipo de entidade `Transaction` com um relacionamento um-com-um opcional, com um tipo de entidade `TransactionReceipt`:

```graphql
type Transaction @entity(immutable: true) {
@@ -123,7 +123,7 @@ type TransactionReceipt @entity(immutable: true) {

#### Relacionamentos Um-com-Vários

-Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type:
+Defina um tipo de entidade `TokenBalance` com um relacionamento um-com-vários obrigatório com um tipo de entidade `Token`:

```graphql
type Token @entity(immutable: true) {
@@ -139,13 +139,13 @@ type TokenBalance @entity {

### Buscas Reversas

-Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
+Buscas reversas podem ser definidas numa entidade pelo campo `@derivedFrom`. Isto cria um campo virtual na entidade, que pode ser consultado, mas não pode ser configurado manualmente pela API de mapeamentos. Em vez disto, ele é derivado do relacionamento definido na outra entidade. 
Para tais relacionamentos, raramente faz sentido armazenar ambos os lados do relacionamento, e o desempenho tanto da indexação quanto dos queries melhorará quando apenas um lado for armazenado e o outro for derivado.

-Para relacionamentos um-com-vários, o relacionamento sempre deve ser armazenado no lado 'um', e o lado 'vários' deve sempre ser derivado. Armazenar o relacionamento desta maneira, em vez de armazenar um arranjo de entidades no lado 'vários', melhorará dramaticamente o desempenho para o indexing e os queries no subgraph. Em geral, evite armazenar arranjos de entidades enquanto for prático.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. 
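To picture the derivation, here is a plain-TypeScript sketch (illustrative only; in a real Subgraph, Graph Node resolves `@derivedFrom` fields itself):

```typescript
// The 'many' side (TokenBalance) stores a reference to the 'one' side (Token).
// The token's balances are derived by lookup, never stored as an array on Token.
interface Token { id: string }
interface TokenBalance { id: string; token: string; amount: number }

const balances: TokenBalance[] = [
  { id: "b1", token: "tok1", amount: 10 },
  { id: "b2", token: "tok1", amount: 5 },
  { id: "b3", token: "tok2", amount: 7 },
];

// Rough analogue of a `tokenBalances @derivedFrom(field: "token")` field:
function derivedBalances(token: Token): TokenBalance[] {
  return balances.filter((b) => b.token === token.id);
}

console.log(derivedBalances({ id: "tok1" }).length); // → 2
```

Because nothing is appended to `Token` when a new balance appears, writes stay constant-size no matter how many balances a token accumulates.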
#### Exemplo -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +Podemos fazer os saldos para um token acessíveis a partir do mesmo token ao derivar um campo `tokenBalances`: ```graphql type Token @entity(immutable: true) { @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript -let token = new Token(event.address) // Create Token -token.save() // tokenBalances is derived automatically +let token = new Token(event.address) // Crie o Token +token.save() // tokenBalances é derivado automaticamente let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Referência armazenada aqui tokenBalance.save() ``` @@ -178,7 +178,7 @@ Para relacionamentos vários-com-vários, como um conjunto de utilizadores em qu #### Exemplo -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Defina uma busca reversa a partir de um tipo de entidade `User` para um tipo de entidade `Organization`. No exemplo abaixo, isto é feito ao buscar pelo atributo `members` a partir de dentro da entidade `Organization`. Em queries, o campo `organizations` no `User` será resolvido ao encontrar todas as entidades `Organization` que incluem a ID do utilizador. 
```graphql
type Organization @entity {
@@ -194,7 +194,7 @@ type User @entity {
}
```

-A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like
+Uma maneira mais eficiente de armazenar este relacionamento é com uma tabela de mapeamento que tem uma entrada para cada par `User` / `Organization`, com um schema como:
```graphql type _Schema_ @@ -274,7 +274,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. +O exemplo `bandSearch` serve, em queries, para filtrar entidades `Band` baseadas nos documentos de texto nos campos `name`, `description` e `bio`. Confira a página [API GraphQL - Consultas](/subgraphs/querying/graphql-api/#queries) para uma descrição da API de busca fulltext e mais exemplos de uso. ```graphql query { @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
## Idiomas apoiados @@ -295,30 +295,30 @@ Escolher um idioma diferente terá um efeito definitivo, porém às vezes sutil, Dicionários apoiados: -| Code | Dicionário | -| ------ | ---------- | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | Português | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Código | Dicionário | +| -------- | --------------- | +| simple | Geral | +| da | Dinamarquês | +| nl | Neerlandês | +| en | Inglês | +| fi | Finlandês | +| fr | Francês | +| de | Alemão | +| hu | Húngaro | +| it | Italiano | +| no | Norueguês | +| pt | Português | +| ro | Romeno | +| ru | Russo | +| es | Espanhol | +| sv | Sueco | +| tr | Turco | ### Algoritmos de Ordem Algoritmos apoiados para a organização de resultados: -| Algorithm | Description | +| Algoritmo | Descrição | | ------------- | --------------------------------------------------------------------------------- | | rank | Organiza os resultados pela qualidade da correspondência (0-1) da busca fulltext. | -| proximityRank | Similar to rank but also includes the proximity of the matches. | +| proximityRank | Similar ao rank, mas também inclui a proximidade das combinações. 
| diff --git a/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx index 1b70a2ec98ad..2d834dedec0b 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -1,23 +1,35 @@ --- -title: Starting Your Subgraph +title: Como Iniciar o Seu Subgraph --- ## Visão geral -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. -### Start Building +### Comece a Construir -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: -1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component -3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema -4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +1. [Como Instalar a CLI](/subgraphs/developing/creating/install-the-cli/) — Configure a sua infraestrutura +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component +3. [Schema da GraphQL](/subgraphs/developing/creating/ql-schema/) — Escreva o seu schema +4. [Como Escrever Mapeamentos em AssemblyScript](/subgraphs/developing/creating/assemblyscript-mappings/) — Escreva os seus mapeamentos +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features -Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). +Explore mais [recursos para APIs](/subgraphs/developing/creating/graph-ts/README/) e realize testes locais com [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Versão | Notas de atualização | +| :----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx index 2a4c3af44fe4..79d268e2eb2b 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/subgraph-manifest.mdx @@ -1,35 +1,35 @@ --- -title: Subgraph Manifest +title: Manifest do Subgraph --- ## Visão geral -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL -- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. 
`mapping.ts` in this guide) +- `mapping.ts`: Código de [Mapeamentos do AssemblyScript](https://github.com/AssemblyScript/assemblyscript) que traduz dados de eventos para entidades definidas no seu schema (por exemplo, `mapping.ts` neste guia) -### Subgraph Capabilities +### Capacidades do Subgraph -A single subgraph can: +A single Subgraph can: -- Index data from multiple smart contracts (but not multiple networks). +- Indexar dados de vários contratos inteligentes (mas não de múltiplas redes). -- Index data from IPFS files using File Data Sources. +- Indexar dados de arquivos IPFS usando Fontes de Dados de Arquivo. -- Add an entry for each contract that requires indexing to the `dataSources` array. +- Adicionar uma entrada para cada contrato que precisa ser indexado para o arranjo `dataSources`. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -77,49 +77,49 @@ dataSources: file: ./src/mapping.ts ``` -## Subgraph Entries +## Entradas do Subgraph -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). 
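The manifest above pins `specVersion: 1.3.0`, and individual features are gated on minimum versions (e.g. `endBlock` needs at least `0.0.9`), so a quick semver-style comparison helps catch mismatches before deploying. A plain-TypeScript sketch, not part of any Graph tooling:

```typescript
// Compare dotted versions numerically, component by component.
function atLeast(version: string, min: string): boolean {
  const a = version.split(".").map(Number);
  const b = min.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return true; // equal versions satisfy the minimum
}

console.log(atLeast("1.3.0", "0.0.9")); // → true: endBlock is available
console.log(atLeast("0.0.8", "0.0.9")); // → false: manifest too old for endBlock
```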
As entradas importantes para atualizar para o manifest são: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. -- `features`: a list of all used [feature](#experimental-features) names. +- `features`: é uma lista de todos os [nomes de função](#experimental-features) usados. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. 
The address is optional; omitting it allows to index matching events from all contracts. -- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. +- `dataSources.source.startBlock`: o número opcional do bloco de onde a fonte de dados começa a indexar. Em muitos casos, sugerimos usar o bloco em que o contrato foi criado. -- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. +- `dataSources.source.endBlock`: O número opcional do bloco onde a fonte de dados pára de indexar, inclusive aquele bloco. Versão de spec mínima exigida: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. -- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. +- `dataSources.mapping.entities`: as entidades que a fonte de dados escreve ao armazenamento. O schema para cada entidade é definido no arquivo schema.graphql. 
-- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. +- `dataSources.mapping.abis`: um ou mais arquivos de ABI nomeados para o contrato de origem, além de quaisquer outros contratos inteligentes com os quais interage de dentro dos mapeamentos. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. 
This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Handlers de Eventos -Handlers de eventos em um subgraph reagem a eventos específicos emitidos por contratos inteligentes na blockchain e acionam handlers definidos no manifest do subgraph. Isto permite que subgraphs processem e armazenem dados conforme a lógica definida. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Como Definir um Handler de Evento -Um handler de evento é declarado dentro de uma fonte de dados na configuração YAML do subgraph. Ele especifica quais eventos devem ser escutados e a função correspondente a ser executada quando estes eventos forem detetados. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -144,20 +144,20 @@ dataSources: handler: handleApproval - event: Transfer(address,address,uint256) handler: handleTransfer - topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Filtro de tópico opcional que só filtra eventos com o tópico especificado. 
+ topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. ``` ## Handlers de chamada -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Handlers de chamadas só serão ativados em um de dois casos: quando a função especificada é chamada por uma conta que não for do próprio contrato, ou quando ela é marcada como externa no Solidity e chamada como parte de outra função no mesmo contrato. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. 
If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Como Definir um Handler de Chamada -To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. +Para definir um handler de chamada no seu manifest, apenas adicione um arranjo `callHandlers` sob a fonte de dados para a qual quer se inscrever. ```yaml dataSources: @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -182,11 +182,11 @@ dataSources: handler: handleCreateGravatar ``` -The `function` is the normalized function signature to filter calls by. The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. +O `function` é a assinatura de função normalizada para filtrar chamadas. A propriedade `handler` é o nome da função no mapeamento que quer executar quando a função-alvo é chamada no contrato da fonte de dados. ### Função de Mapeamento -Each call handler takes a single parameter that has a type corresponding to the name of the called function.
In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -201,11 +201,11 @@ export function handleCreateGravatar(call: CreateGravatarCall): void { } ``` -The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. +A função `handleCreateGravatar` recebe um novo `CreateGravatarCall`, que é uma subclasse de `ethereum.Call` fornecida pelo `@graphprotocol/graph-ts` e que inclui as entradas e saídas tipadas da chamada. O tipo `CreateGravatarCall` é gerado ao executar o `graph codegen`. ## Handlers de Blocos -Além de se inscrever a eventos de contratos ou chamadas para funções, um subgraph também pode querer atualizar os seus dados enquanto novos blocos são afixados à chain. Para isto, um subgraph pode executar uma função após cada bloco, ou após blocos que correspondem a um filtro predefinido. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
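Na forma mais simples, um handler de bloco é declarado sem filtro e será executado a cada novo bloco. Um esboço mínimo de manifest (o nome `handleBlock` é apenas ilustrativo):

```yaml
blockHandlers:
  - handler: handleBlock
```

Como executar uma função a cada bloco tem custo de indexação considerável, vale preferir um dos filtros descritos a seguir sempre que possível.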
### Filtros Apoiados @@ -216,9 +216,9 @@ filter: kind: call ``` -_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ +_O handler definido será chamado uma vez para cada bloco que contém uma chamada ao contrato (fonte de dados) sob o qual o handler está definido._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. A ausência de um filtro para um handler de blocos garantirá que o handler seja chamado a todos os blocos. Uma fonte de dados só pode conter um handler de bloco para cada tipo de filtro. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -249,9 +249,9 @@ #### Filtro Polling -> **Requires `specVersion` >= 0.0.8** +> **Requer `specVersion` >= 0.0.8** > -> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. +> **Nota:** Filtros de polling só estão disponíveis nas dataSources de `kind: ethereum`. ```yaml blockHandlers: @@ -261,13 +261,13 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field.
This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Filtro Once -> **Requires `specVersion` >= 0.0.8** +> **Requer `specVersion` >= 0.0.8** > -> **Note:** Once filters are only available on dataSources of `kind: ethereum`. +> **Observação:** Filtros de once só estão disponíveis nas dataSources de `kind: ethereum`. ```yaml blockHandlers: @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -O handler definido com o filtro once só será chamado uma única vez antes da execução de todos os outros handlers (por isto, o nome "once" / "uma vez"). Esta configuração permite que o subgraph use o handler como um handler de inicialização, para realizar tarefas específicas no começo da indexação. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Função de Mapeamento -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -311,13 +311,13 @@ eventHandlers: handler: handleGive ``` -An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. +Um evento só será ativado quando a assinatura e o topic 0 corresponderem. Por padrão, o `topic0` é igual ao hash da assinatura do evento.
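Quando um contrato emite um evento cujo `topic0` não corresponde ao hash da assinatura declarada (por exemplo, eventos reemitidos por proxies), o valor pode ser sobrescrito no manifest. Esboço ilustrativo (o hash abaixo é um placeholder, não um valor real):

```yaml
eventHandlers:
  - event: Give(address,address,uint256)
    topic0: '0x0000000000000000000000000000000000000000000000000000000000000000' # placeholder: keccak-256 da assinatura realmente emitida
    handler: handleGive
```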
## Recibos de Transação em Handlers de Eventos -Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. +A partir do `specVersion` `0.0.5` e `apiVersion` `0.0.7`, os handlers de eventos podem acessar o recibo para a transação que os emitiu. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -326,7 +326,7 @@ receipt: true ``` -Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. +Dentro da função do handler, o recibo pode ser acessado no campo `Event.receipt`. Quando a chave `receipt` é configurada em `false`, ou omitida no manifest, um valor `null` será retornado em vez disto. ## Ordem de Handlers de Gatilhos @@ -338,17 +338,17 @@ Os gatilhos para uma fonte de dados dentro de um bloco são ordenados com o seguinte processo: Estas regras de organização estão sujeitas à mudança. -> **Note:** When new [dynamic data source](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +> **Observe:** Quando novas [fontes de dados dinâmicas](#data-source-templates-for-dynamically-created-contracts) forem criadas, os handlers definidos para fontes de dados dinâmicas só começarão o processamento depois que todos os handlers de fontes de dados existentes forem processados, e repetirão na mesma sequência sempre que forem ativados.
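Como complemento à seção de recibos acima, um esboço em AssemblyScript do acesso ao recibo dentro de um handler (supondo `receipt: true` declarado no manifest; o evento `NewGravatar` é apenas ilustrativo):

```typescript
import { log } from '@graphprotocol/graph-ts'
import { NewGravatar } from '../generated/Gravity/Gravity'

export function handleNewGravatar(event: NewGravatar): void {
  // event.receipt é `ethereum.TransactionReceipt | null`:
  // será null se `receipt: true` não estiver declarado para este handler
  let receipt = event.receipt
  if (receipt !== null) {
    log.info('A transação emitiu {} logs', [receipt.logs.length.toString()])
  }
}
```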
## Modelos de Fontes de Dados Um padrão comum em contratos inteligentes compatíveis com EVMs é o uso de contratos de registro ou fábrica. Nisto, um contrato cria, gesta ou refere a um número arbitrário de outros contratos, cada um com o seu próprio estado e eventos. -The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. +Os endereços destes subcontratos podem ou não ser conhecidos imediatamente, e muitos destes contratos podem ser criados e/ou adicionados ao longo do tempo. É por isto que, em muitos casos, é impossível definir uma única fonte de dados ou um número fixo de fontes de dados, e é necessária uma abordagem mais dinâmica: _modelos de fontes de dados_. ### Fonte de Dados para o Contrato Principal -First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created onchain by the factory contract. +Primeiro, defina uma fonte de dados regular para o contrato principal. Abaixo está um exemplo simplificado de fonte de dados para o contrato de fábrica de trocas do [Uniswap](https://uniswap.org). Preste atenção ao handler de evento `NewExchange(address,address)`: é emitido quando um novo contrato de troca é criado on-chain pelo contrato de fábrica. 
```yaml dataSources: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -375,13 +375,13 @@ dataSources: ### Modelos de Fontes de Dados para Contratos Criados Dinamicamente -Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`. Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. +Depois, adicione _modelos de fontes de dados_ ao manifest. Estes são idênticos a fontes de dados regulares, mas não têm um endereço de contrato predefinido sob `source`. Tipicamente, é possível definir um modelo para cada tipo de subcontrato administrado ou referenciado pelo contrato pai. ```yaml dataSources: - kind: ethereum/contract name: Factory - # ... outros campos de fonte para o contrato principal ... + # ... other source fields for the main contract ... templates: - name: Exchange kind: ethereum/contract @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -411,7 +411,7 @@ templates: ### Como Instanciar um Modelo de Fontes de Dados -In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. +Na etapa final, atualize o mapeamento do seu contrato principal para criar uma instância dinâmica de fonte de dados a partir de um dos modelos.
Neste exemplo, você mudaria o mapeamento do contrato principal para importar o modelo `Exchange` e chamar o método `Exchange.create(address)` nele, para começar a indexar o novo contrato de troca. ```typescript import { Exchange } from '../generated/templates' @@ -423,13 +423,13 @@ export function handleNewExchange(event: NewExchange): void { } ``` -> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> **Observação:** Uma nova fonte de dados só processará as chamadas e eventos para o bloco onde ela foi criada e todos os blocos a seguir. Porém, não serão processados dados históricos, por ex., dados contidos em blocos anteriores. > > Se blocos anteriores conterem dados relevantes à nova fonte, é melhor indexá-los ao ler o estado atual do contrato e criar entidades que representem aquele estado na hora que a nova fonte de dados for criada. ### Contextos de Fontes de Dados -Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: +Contextos de fontes de dados permitem passar configurações extras ao instanciar um modelo. No nosso exemplo, digamos que as trocas estão associadas a um par de trading específico, incluído no evento `NewExchange`.
Essa informação pode ser passada na fonte de dados instanciada, como: ```typescript import { Exchange } from '../generated/templates' @@ -441,7 +441,7 @@ export function handleNewExchange(event: NewExchange): void { } ``` -Inside a mapping of the `Exchange` template, the context can then be accessed: +Dentro de um mapeamento do modelo `Exchange`, dá para acessar o contexto: ```typescript import { dataSource } from '@graphprotocol/graph-ts' @@ -450,11 +450,11 @@ let context = dataSource.context() let tradingPair = context.getString('tradingPair') ``` -There are setters and getters like `setString` and `getString` for all value types. +Há setters e getters como `setString` e `getString` para todos os tipos de valores. ## Blocos Iniciais -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -480,24 +480,24 @@ dataSources: handler: handleNewEvent ``` -> **Note:** The contract creation block can be quickly looked up on Etherscan: +> **Observe:** O bloco de criação do contrato pode ser buscado rapidamente no Etherscan: > > 1. 
Procure pelo contrato ao inserir o seu endereço na barra de busca. -> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 2. Clique no hash da transação de criação na seção `Contract Creator`. > 3. Carregue a página dos detalhes da transação, onde encontrará o bloco inicial para aquele contrato. ## IndexerHints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. -> This feature is available from `specVersion: 1.0.0` +> Este recurso está disponível a partir da `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +1. `"never"`: Nenhum pruning de dados históricos; retém o histórico completo. +2. `"auto"`: Retém o histórico mínimo necessário determinado pelo Indexador e otimiza o desempenho das queries. 3. Um número específico: Determina um limite personalizado no número de blocos históricos a guardar. 
``` @@ -505,25 +505,25 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> O termo "histórico", neste contexto de subgraphs, refere-se ao armazenamento de dados que refletem os estados antigos de entidades mutáveis. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. O histórico, desde um bloco especificado, é necessário para: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rebobinar o subgraph de volta àquele bloco +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block Se os dados históricos desde aquele bloco tiverem passado por pruning, as capacidades acima não estarão disponíveis. -> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. +> Vale usar o `"auto"`, por maximizar o desempenho de queries e ser suficiente para a maioria dos utilizadores que não exigem acesso a dados extensos no histórico. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: Para reter uma quantidade específica de dados históricos: ``` indexerHints: - prune: 1000 # Replace 1000 with the desired number of blocks to retain + prune: 1000 # Substitua 1000 pelo número de blocos que deseja reter ``` Para preservar o histórico completo dos estados da entidade: @@ -532,3 +532,18 @@ Para preservar o histórico completo dos estados da entidade: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Versão | Notas de atualização | +| :----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). 
| +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx index 0b92f77c0f4f..a629d088a34c 100644 --- a/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/pt/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,52 +2,52 @@ title: Estrutura de Testes de Unidades --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. -## Benefits of Using Matchstick +## Vantagens de Usar o Matchstick -- It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- É escrito em Rust e otimizado para o melhor desempenho possível. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
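Para ilustrar essas asserções sobre o estado do store, um teste mínimo com o `matchstick-as` pode se parecer com o esboço abaixo (a entidade `Gravatar` vem do Subgraph demonstrativo referenciado adiante neste guia):

```typescript
import { assert, test, clearStore } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema'

test('Gravatar é salvo no store', () => {
  // cria e salva uma entidade diretamente, sem passar por um handler
  let gravatar = new Gravatar('gravatarId0')
  gravatar.save()

  // verifica um campo da entidade no store
  assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')

  // limpa o store para isolar os próximos testes
  clearStore()
})
```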
## Como Começar -### Install Dependencies +### Como Instalar Dependências -In order to use the test helper methods and run tests, you need to install the following dependencies: +Para usar os métodos de test helper e executar os testes, instale as seguintes dependências: ```sh yarn add --dev matchstick-as ``` -### Install PostgreSQL +### Como Instalar o PostgreSQL -`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. +O `graph-node` depende do PostgreSQL, então se ainda não o tem, será necessário instalá-lo. -> Note: It's highly recommended to use the commands below to avoid unexpected errors. +> Observação: É altamente recomendável usar os comandos abaixo para evitar erros inesperados. -#### Using MacOS +#### Usando o MacOS -Installation command: +Comando de instalação: ```sh brew install postgresql ``` -Create a symlink to the latest libpq.5.lib _You may need to create this dir first_ `/usr/local/opt/postgresql/lib/` +Crie um symlink ao último libpq.5.lib. _Talvez precise criar este diretório primeiro:_ `/usr/local/opt/postgresql/lib/` ```sh ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib ``` -#### Using Linux +#### Usando o Linux -Installation command (depends on your distro): +Comando de instalação do Postgres (depende da sua distro): ```sh sudo apt install postgresql ``` -### Using WSL (Windows Subsystem for Linux) +### Usando o WSL (Subsistema do Windows para o Linux) Pode usar o Matchstick no WSL tanto com a abordagem do Docker quanto com a abordagem binária. Como o WSL pode ser um pouco complicado, aqui estão algumas dicas caso encontre problemas @@ -61,13 +61,13 @@ ou /node_modules/gluegun/build/index.js:13 throw up; ``` -Please make sure you're on a newer version of Node.js graph-cli doesn't support **v10.19.0** anymore, and that is still the default version for new Ubuntu images on WSL.
For instance Matchstick is confirmed to be working on WSL with **v18.1.0**, you can switch to it either via **nvm** or if you update your global Node.js. Don't forget to delete `node_modules` and to run `npm install` again after updating you nodejs! Then, make sure you have **libpq** installed, you can do that by running +Verifique se está em uma versão mais recente do Node.js. O graph-cli não apoia mais a **v10.19.0**, que ainda é a versão padrão para novas imagens de Ubuntu no WSL. Por exemplo, o Matchstick está confirmado como funcional no WSL com a **v18.1.0**; pode trocar para essa versão através do **nvm** ou ao atualizar o seu Node.js global. Não se esqueça de apagar o `node_modules` e executar o `npm install` novamente após atualizar o seu Node.js! Depois, garanta que tem o **libpq** instalado. Isto pode ser feito ao executar: ``` sudo apt-get install libpq-dev ``` -And finally, do not use `graph test` (which uses your global installation of graph-cli and for some reason that looks like it's broken on WSL currently), instead use `yarn test` or `npm run test` (that will use the local, project-level instance of graph-cli, which works like a charm). For that you would of course need to have a `"test"` script in your `package.json` file which can be something as simple as +E finalmente, não use o `graph test` (que usa a sua instalação global da graph-cli, e por alguma razão, parece não funcionar no WSL no momento). Em vez disto, use o `yarn test` ou o `npm run test` (que usará a instância local do graph-cli; esta funciona muito bem).
Para isto, obviamente você precisa de um script `"test"` no seu arquivo `package.json`, que pode ser algo simples como ```json { @@ -85,9 +85,9 @@ And finally, do not use `graph test` (which uses your global installation of gra } ``` -### Using Matchstick +### Usando o Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Opções de CLI @@ -109,11 +109,11 @@ Isto só executará esse arquivo de teste específico: graph test path/to/file.test.ts ``` -**Options:** +**Opções:** ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -123,21 +123,21 @@ graph test path/to/file.test.ts ### Docker -From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. 
Alternatively you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually. +Desde o `graph-cli 0.25.2`, o comando `graph test` apoia a execução do `matchstick` em um container docker com a flag `-d`. A implementação do docker utiliza o [bind mount](https://docs.docker.com/storage/bind-mounts/) para que não precise reconstruir a imagem do docker toda vez que o comando `graph test -d` for executado. Alternativamente, siga as instruções do repositório do [matchstick](https://github.com/LimeChain/matchstick#docker-) para executar o docker manualmente. -❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI). +❗ `graph test -d` força o `docker run` a ser executado com a flag `-t`. Esta flag deve ser removida para rodar em ambientes não interativos (como o GitHub CI). -❗ If you have previously ran `graph test` you may encounter the following error during docker build: +❗ Caso já tenha executado o `graph test` anteriormente, o seguinte erro pode aparecer durante a compilação do docker: ```sh error from sender: failed to xattr node_modules/binary-install-raw/bin/binary-: permission denied ``` -In this case create a `.dockerignore` in the root folder and add `node_modules/binary-install-raw/bin` +Neste caso, crie um `.dockerignore` na pasta raiz e adicione `node_modules/binary-install-raw/bin` ### Configuração -Matchstick can be configured to use a custom tests, libs and manifest path via `matchstick.yaml` config file: +O Matchstick pode ser configurado para usar um caminho personalizado de tests, libs e manifest através do arquivo de configuração `matchstick.yaml`: ```yaml testsFolder: path/to/tests
[Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) +Você pode experimentar com os exemplos deste guia clonando o [repositório de Subgraph Demonstrativo](https://github.com/LimeChain/demo-subgraph) ### Tutoriais de vídeo -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Também pode conferir a série de vídeos ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Estrutura de testes -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANTE: A estrutura de teste descrita abaixo depende do `matchstick-as` versão >=0.5.0**_ ### describe() -`describe(name: String , () => {})` - Defines a test group. +`describe(name: String, () => {})` — Define um grupo de teste. -**_Notes:_** +**_Observações:_** -- _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ +- _Describes (descrições) não são obrigatórias. O test() ainda pode ser usado da maneira antiga, fora dos blocos describe()_ Exemplo: @@ -172,27 +172,27 @@ import { describe, test } from "matchstick-as/assembly/index" import { handleNewGravatar } from "../../src/gravity" describe("handleNewGravatar()", () => { - test("Should create a new Gravatar entity", () => { + test("Isto deve criar uma nova entidade Gravatar", () => { ... }) }) ``` -Nested `describe()` example: +Exemplo aninhado de `describe()`: ```typescript import { describe, test } from "matchstick-as/assembly/index" import { handleUpdatedGravatar } from "../../src/gravity" describe("handleUpdatedGravatar()", () => { - describe("When entity exists", () => { - test("updates the entity", () => { + describe("Quando houver uma entidade", () => { + test("entidade atualizada", () => { ...
}) }) - describe("When entity does not exists", () => { - test("it creates a new entity", () => { + describe("Quando não houver uma entidade", () => { + test("nova entidade criada", () => { ... }) }) @@ -203,7 +203,7 @@ describe("handleUpdatedGravatar()", () => { ### test() -`test(name: String, () =>, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently. +`test(name: String, () =>, should_fail: bool)` — Define um caso de teste. O test() pode ser usado em blocos describe() ou de maneira independente. Exemplo: @@ -212,7 +212,7 @@ import { describe, test } from "matchstick-as/assembly/index" import { handleNewGravatar } from "../../src/gravity" describe("handleNewGravatar()", () => { - test("Should create a new Entity", () => { + test("Isto deve criar uma nova Entidade", () => { ... }) }) @@ -221,7 +221,7 @@ describe("handleNewGravatar()", () => { ou ```typescript -test("handleNewGravatar() should create a new entity", () => { +test("handleNewGravatar() deve criar uma nova entidade", () => { ... }) @@ -232,11 +232,11 @@ test("handleNewGravatar() should create a new entity", () => { ### beforeAll() -Runs a code block before any of the tests in the file. If `beforeAll` is declared inside of a `describe` block, it runs at the beginning of that `describe` block. +Executa um bloco de código antes de qualquer um dos testes no arquivo. Se o `beforeAll` for declarado dentro de um bloco `describe`, ele é executado no começo daquele bloco `describe`. Exemplos: -Code inside `beforeAll` will execute once before _all_ tests in the file. +O código dentro do `beforeAll` será executado uma vez antes de _todos_ os testes no arquivo. ```typescript import { describe, test, beforeAll } from "matchstick-as/assembly/index" @@ -250,39 +250,39 @@ beforeAll(() => { ...
}) -describe("When the entity does not exist", () => { - test("it should create a new Gravatar with id 0x1", () => { +describe("Quando a entidade não existe", () => { + test("ela deve criar um novo Gravatar com a id 0x1", () => { ... }) }) -describe("When entity already exists", () => { - test("it should update the Gravatar with id 0x0", () => { +describe("Quando a entidade já existe", () => { + test("ela deve atualizar o Gravatar com a id 0x0", () => { ... }) }) ``` -Code inside `beforeAll` will execute once before all tests in the first describe block +O código dentro do `beforeAll` será executado uma vez antes de todos os testes no primeiro bloco describe ```typescript -import { describe, test, beforeAll } from "matchstick-as/assembly/index" +import { describe, test, beforeAll } from "matchstick-as/assembly/index" import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" import { Gravatar } from "../../generated/schema" describe("handleUpdatedGravatar()", () => { beforeAll(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = “First Gravatar” + gravatar.displayName = "Primeiro Gravatar" gravatar.save() ... }) - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) - test("creates new Gravatar with id 0x1", () => { + test("cria novo Gravatar com id 0x1", () => { ... }) }) @@ -292,11 +292,11 @@ describe("handleUpdatedGravatar()", () => { ### afterAll() -Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block. +Executa um bloco de código depois de todos os testes no arquivo. Se o `afterAll` for declarado dentro de um bloco `describe`, ele será executado no final desse bloco `describe`. Exemplo: -Code inside `afterAll` will execute once after _all_ tests in the file. +O código dentro do `afterAll` será executado uma vez depois de _todos_ os testes no arquivo.
```typescript import { describe, test, afterAll } from "matchstick-as/assembly/index" @@ -309,19 +309,19 @@ afterAll(() => { }) describe("handleNewGravatar, () => { - test("creates Gravatar with id 0x0", () => { + test("cria Gravatar com id 0x0", () => { ... }) }) describe("handleUpdatedGravatar", () => { - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) }) ``` -Code inside `afterAll` will execute once after all tests in the first describe block +O código dentro do `afterAll` será executado uma vez depois de todos os testes no primeiro bloco describe ```typescript import { describe, test, afterAll, clearStore } from "matchstick-as/assembly/index" @@ -333,17 +333,17 @@ describe("handleNewGravatar", () => { ... }) - test("It creates a new entity with Id 0x0", () => { + test("Cria uma nova entidade com Id 0x0", () => { ... }) - test("It creates a new entity with Id 0x1", () => { + test("Cria uma nova entidade com Id 0x1", () => { ... }) }) describe("handleUpdatedGravatar", () => { - test("updates Gravatar with id 0x0", () => { + test("atualiza Gravatar com id 0x0", () => { ... }) }) @@ -353,24 +353,24 @@ describe("handleUpdatedGravatar", () => { ### beforeEach() -Runs a code block before every test. If `beforeEach` is declared inside of a `describe` block, it runs before each test in that `describe` block. +Executa um bloco de código antes de cada teste no arquivo. Se o `beforeEach` for declarado dentro de um bloco `describe`, ele será executado antes de cada teste nesse bloco `describe`. -Examples: Code inside `beforeEach` will execute before each tests. +Exemplos: O código dentro do `beforeEach` será executado antes de cada teste. 
```typescript import { describe, test, beforeEach, clearStore } from "matchstick-as/assembly/index" import { handleNewGravatars } from "./utils" beforeEach(() => { - clearStore() // <-- clear the store before each test in the file + clearStore() // <-- limpa o armazenamento antes de cada teste no arquivo }) describe("handleNewGravatars, () => { - test("A test that requires a clean store", () => { + test("Teste que exige armazenamento limpo", () => { ... }) - test("Second that requires a clean store", () => { + test("Segundo que exige armazenamento limpo", () => { ... }) }) @@ -378,7 +378,7 @@ describe("handleNewGravatars, () => { ... ``` -Code inside `beforeEach` will execute only before each test in the that describe +O código dentro do `beforeEach` será executado apenas antes de cada teste nesse describe ```typescript import { describe, test, beforeEach } from 'matchstick-as/assembly/index' @@ -387,24 +387,24 @@ import { handleUpdatedGravatar, handleNewGravatar } from '../../src/gravity' describe('handleUpdatedGravatars', () => { beforeEach(() => { let gravatar = new Gravatar('0x0') - gravatar.displayName = 'First Gravatar' + gravatar.displayName = 'Primeiro Gravatar' gravatar.imageUrl = '' gravatar.save() }) - test('Updates the displayName', () => { - assert.fieldEquals('Gravatar', '0x0', 'displayName', 'First Gravatar') + test('Atualiza o displayName', () => { + assert.fieldEquals('Gravatar', '0x0', 'displayName', 'Primeiro Gravatar') - // code that should update the displayName to 1st Gravatar + // código que deve atualizar o displayName para 1o. Gravatar - assert.fieldEquals('Gravatar', '0x0', 'displayName', '1st Gravatar') + assert.fieldEquals('Gravatar', '0x0', 'displayName', '1o.
Gravatar') store.remove('Gravatar', '0x0') }) - test('Updates the imageUrl', () => { + test('Atualiza o imageUrl', () => { assert.fieldEquals('Gravatar', '0x0', 'imageUrl', '') - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals('Gravatar', '0x0', 'imageUrl', 'https://www.gravatar.com/avatar/0x0') store.remove('Gravatar', '0x0') @@ -416,11 +416,11 @@ describe('handleUpdatedGravatars', () => { ### afterEach() -Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block. +Executa um bloco de código depois de cada teste no arquivo. Se o `afterEach` for declarado dentro de um bloco `describe`, será executado após cada teste nesse `describe`. Exemplos: -Code inside `afterEach` will execute after every test. +O código dentro do `afterEach` será executado após cada teste. ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -428,7 +428,7 @@ import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" beforeEach(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = “First Gravatar” + gravatar.displayName = "Primeiro Gravatar" gravatar.save() }) @@ -441,25 +441,25 @@ describe("handleNewGravatar", () => { }) describe("handleUpdatedGravatar", () => { - test("Updates the displayName", () => { - assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar") + test("Atualiza o displayName", () => { + assert.fieldEquals("Gravatar", "0x0", "displayName", "Primeiro Gravatar") - // code that should update the displayName to 1st Gravatar + // código que deve mudar o displayName para 1o. Gravatar - assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "1o.
Gravatar") }) - test("Updates the imageUrl", () => { + test("Atualiza o imageUrl", () => { assert.fieldEquals("Gravatar", "0x0", "imageUrl", "") - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar o imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0") }) }) ``` -Code inside `afterEach` will execute after each test in that describe +O código dentro do `afterEach` será executado após cada teste nesse describe ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -472,7 +472,7 @@ describe("handleNewGravatar", () => { describe("handleUpdatedGravatar", () => { beforeEach(() => { let gravatar = new Gravatar("0x0") - gravatar.displayName = "First Gravatar" + gravatar.displayName = "Primeiro Gravatar" gravatar.imageUrl = "" gravatar.save() }) @@ -482,17 +482,17 @@ describe("handleUpdatedGravatar", () => { }) test("Updates the displayName", () => { - assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "Primeiro Gravatar") - // code that should update the displayName to 1st Gravatar + // código que deve atualizar o displayName para 1o. Gravatar - assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar") + assert.fieldEquals("Gravatar", "0x0", "displayName", "1o. 
Gravatar") }) test("Updates the imageUrl", () => { assert.fieldEquals("Gravatar", "0x0", "imageUrl", "") - // code that should changes the imageUrl to https://www.gravatar.com/avatar/0x0 + // código que deve mudar o imageUrl para https://www.gravatar.com/avatar/0x0 assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0") }) }) ``` @@ -536,36 +536,36 @@ entityCount(entityType: string, expectedCount: i32) A partir da versão 0.6.0, asserts também apoiam mensagens de erro personalizadas ```typescript -assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123') -assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Value should equal 1') -assert.notInStore('Gravatar', '0x124', 'Gravatar should not be in store') -assert.addressEquals(Address.zero(), Address.zero(), 'Address should be zero') -assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes should be equal') -assert.i32Equals(2, 2, 'I32 should equal 2') -assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt should equal 1') -assert.booleanEquals(true, true, 'Boolean should be true') -assert.stringEquals('1', '1', 'String should equal 1') +assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id deve ser 0x123') +assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Valor deve ser igual a 1') +assert.notInStore('Gravatar', '0x124', 'Gravatar não deve estar armazenado') +assert.addressEquals(Address.zero(), Address.zero(), 'Address deve ser zero') +assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes devem ser iguais') +assert.i32Equals(2, 2, 'I32 deve ser igual a 2') +assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt deve ser igual a 1') +assert.booleanEquals(true, true, 'Boolean deve ser true') +assert.stringEquals('1', '1', 'String deve ser igual a 1')
+assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arranjos devem ser iguais') assert.tupleEquals( changetype([ethereum.Value.fromI32(1)]), changetype([ethereum.Value.fromI32(1)]), - 'Tuples should be equal', + 'Tuplas devem ser iguais', ) -assert.assertTrue(true, 'Should be true') -assert.assertNull(null, 'Should be null') -assert.assertNotNull('not null', 'Should be not null') -assert.entityCount('Gravatar', 1, 'There should be 2 gravatars') -assert.dataSourceCount('GraphTokenLockWallet', 1, 'GraphTokenLockWallet template should have one data source') +assert.assertTrue(true, 'Deve ser true') +assert.assertNull(null, 'Deve ser null') +assert.assertNotNull('not null', 'Não deve ser null') +assert.entityCount('Gravatar', 1, 'Deve haver 1 Gravatar') +assert.dataSourceCount('GraphTokenLockWallet', 1, 'O template GraphTokenLockWallet deve ter uma fonte de dados') assert.dataSourceExists( 'GraphTokenLockWallet', Address.zero().toHexString(), - 'GraphTokenLockWallet should have a data source for zero address', + 'GraphTokenLockWallet deve ter uma fonte de dados para o endereço zero', ) ``` ## Como Escrever um Teste de Unidade -Let's see how a simple unit test would look like using the Gravatar examples in the [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts). +Vamos ver como seria um simples teste unitário usando os exemplos de Gravatar no [Subgraph de Demonstração](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts).
Suponhamos que temos a seguinte função de handler (com duas funções de helper para facilitar): @@ -627,23 +627,23 @@ import { NewGravatar } from '../../generated/Gravity/Gravity' import { createNewGravatarEvent, handleNewGravatars } from '../mappings/gravity' test('Can call mappings with custom events', () => { - // Create a test entity and save it in the store as initial state (optional) + // Criar uma entidade de teste e guardá-la no armazenamento como estado inicial (opcional) let gravatar = new Gravatar('gravatarId0') gravatar.save() - // Create mock events + // Criar eventos simulados let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac') - // Call mapping functions passing the events we just created + // Chamar funções de mapeamento passando os eventos que acabamos de criar handleNewGravatars([newGravatarEvent, anotherGravatarEvent]) - // Assert the state of the store + // Assertar o estado do armazenamento assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') assert.fieldEquals('Gravatar', '12345', 'owner', '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7') assert.fieldEquals('Gravatar', '3546', 'displayName', 'cap') - // Clear the store in order to start the next test off on a clean slate + // Limpar o armazenamento para começar o próximo teste do zero clearStore() }) @@ -652,23 +652,23 @@ test('Next test', () => { }) ``` -That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks.
The rest of it is pretty straightforward - here's what happens: +Quanta coisa! Primeiro, note que estamos a importar coisas do `matchstick-as`, a nossa biblioteca de helper do AssemblyScript (distribuída como um módulo npm). O repositório está [aqui](https://github.com/LimeChain/matchstick-as). O `matchstick-as` nos dá alguns métodos de teste úteis e define a função `test()`, que usaremos para construir os nossos blocos de teste. O resto é bem simples — veja o que acontece: - Configuramos nosso estado inicial e adicionamos uma entidade de Gravatar personalizada; -- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function; -- We're calling out handler methods for those events - `handleNewGravatars()` and passing in the list of our custom events; +- Definimos dois eventos `NewGravatar` com os seus dados, usando a função `createNewGravatarEvent()`; +- Chamamos métodos de handlers para estes eventos — `handleNewGravatars()` — e passamos a lista dos nossos eventos personalizados; - Garantimos o estado da loja. Como isto funciona? — Passamos uma combinação do tipo e da id da Entidade. Depois conferimos um campo específico naquela Entidade e garantimos que ela tem o valor que esperamos que tenha. Estamos a fazer isto tanto para a Entidade Gravatar inicial adicionada ao armazenamento, quanto para as duas entidades Gravatar adicionadas ao chamar a função de handler; -- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want. +- E por último — limpamos o armazenamento com `clearStore()`, para que o nosso próximo teste comece com um objeto de armazenamento novo em folha. Podemos definir quantos blocos de teste quisermos. Prontinho — criamos o nosso primeiro teste! 
👏 -Para executar os nossos testes, basta apenas executar o seguinte na pasta raiz do seu subgraph: +Para executar os nossos testes, basta executar o seguinte na pasta raiz do seu Subgraph: `graph test Gravity` E se tudo der certo, deve receber a seguinte resposta: -![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png) +![Matchstick diz “Todos os testes passaram!”](/img/matchstick-tests-passed.png) ## Cenários de teste comuns @@ -754,18 +754,18 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri ### Como simular arquivos IPFS (do matchstick 0.4.1) -Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. +Os utilizadores podem simular arquivos IPFS com a função `mockIpfsFile(hash, filePath)`. A função aceita dois argumentos: o primeiro é o hash/caminho do arquivo IPFS, e o segundo é o caminho de um arquivo local.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTA: Ao testar `ipfs.map/ipfs.mapJSON`, a função de callback deve ser exportada do arquivo de teste para que o matchstick a detecte, como a função `processGravatar()` no exemplo de teste abaixo: -`.test.ts` file: +Arquivo `.test.ts`: ```typescript import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' import { gravatarFromIpfs } from './utils' -// Export ipfs.map() callback in order for matchstck to detect it +// Exportar o callback de ipfs.map() para que o matchstick o detecte export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -795,7 +795,7 @@ test('ipfs.map', () => { }) ``` -`utils.ts` file: +Arquivo `utils.ts`: ```typescript import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts" @@ -857,11 +857,11 @@ gravatar.save() assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0') ``` -Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully. +A função assert.fieldEquals() conferirá a igualdade do campo dado contra o valor dado esperado. O teste acabará em erro, com mensagem correspondente, caso os valores **NÃO** sejam iguais. Caso contrário, o teste terá êxito.
The following example shows how you can read/write to those fields on the Event object: +Os utilizadores podem usar metadados padrão de transações, que podem ser retornados como um ethereum.Event com a função `newMockEvent()`. O seguinte exemplo mostra como ler e escrever estes campos no objeto de Evento: ```typescript // Leitura @@ -878,7 +878,7 @@ newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS) assert.equals(ethereum.Value.fromString("hello"); ethereum.Value.fromString("hello")); ``` -### Asserting that an Entity is **not** in the store +### Como afirmar que uma Entidade **não** está no armazenamento Os utilizadores podem afirmar que uma entidade não existe no armazenamento. A função toma um tipo e uma id de entidade. Caso a entidade esteja, de facto, na loja, o teste acabará em erro, com uma mensagem de erro relevante. Veja um exemplo rápido de como usar esta funcionalidade: @@ -896,7 +896,7 @@ import { logStore } from 'matchstick-as/assembly/store' logStore() ``` -As of version 0.6.0, `logStore` no longer prints derived fields, instead users can use the new `logEntity` function. Of course `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities. +Desde a versão 0.6.0, o `logStore` não imprime mais campos derivados; em vez disto, os utilizadores podem usar a nova função `logEntity`. O `logEntity` pode ser usado para imprimir qualquer entidade, não só as que têm campos derivados. O `logEntity` pega o tipo e a ID da entidade, e um flag `showRelated` para indicar se os utilizadores querem imprimir as entidades derivadas relacionadas.
``` import { logEntity } from 'matchstick-as/assembly/store' @@ -911,7 +911,7 @@ Os utilizadores podem encontrar falhas esperadas, com o flag shouldFail nas fun ```typescript test( - 'Should throw an error', + 'Deve lançar um erro', () => { throw new Error() }, @@ -930,27 +930,27 @@ import { test } from "matchstick-as/assembly/index"; import { log } from "matchstick-as/assembly/log"; test("Success", () => { - log.success("Success!". []); + log.success("Sucesso!", []); }); test("Error", () => { - log.error("Error :( ", []); + log.error("Erro! :( ", []); }); test("Debug", () => { - log.debug("Debugging...", []); + log.debug("Debug em progresso...", []); }); test("Info", () => { - log.info("Info!", []); + log.info("Informação!", []); }); test("Warning", () => { - log.warning("Warning!", []); + log.warning("Cuidado!", []); }); ``` Os utilizadores também podem simular uma falha crítica, como no seguinte: ```typescript -test('Blow everything up', () => { - log.critical('Boom!') +test('Explodir tudo', () => { + log.critical('É boooomba!') }) ``` @@ -960,14 +960,14 @@ Logar erros críticos interromperá a execução dos testes e causará um desast Testar campos derivados permite aos utilizadores configurar um campo numa entidade e atualizar outra automaticamente, caso ela derive um dos seus campos da primeira entidade. -Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so: +Antes da versão `0.6.0`, era possível resgatar as entidades derivadas ao acessá-las como propriedades ou campos de entidade, como no seguinte exemplo: ```typescript let entity = ExampleEntity.load('id') let derivedEntity = entity.derived_entity ``` -As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node, the derived entities can be accessed the same way as in the handlers. +Desde a versão `0.6.0`, isto é feito com a função `loadRelated` do graph-node.
As entidades derivadas podem ser acessadas como são nos handlers. ```typescript test('Derived fields example test', () => { @@ -1009,9 +1009,9 @@ test('Derived fields example test', () => { }) ``` -### Testing `loadInBlock` +### Teste de `loadInBlock` -As of version `0.6.0`, users can test `loadInBlock` by using the `mockInBlockStore`, it allows mocking entities in the block cache. +Desde a versão `0.6.0`, é possível testar o `loadInBlock` com o `mockInBlockStore`, que permite a simulação de entidades no cache de blocos. ```typescript import { afterAll, beforeAll, describe, mockInBlockStore, test } from 'matchstick-as' @@ -1026,12 +1026,12 @@ describe('loadInBlock', () => { clearInBlockStore() }) - test('Can use entity.loadInBlock() to retrieve entity from cache store in the current block', () => { + test('Pode usar entity.loadInBlock() para retirar a entidade do armazenamento do cache no bloco atual', () => { let retrievedGravatar = Gravatar.loadInBlock('gravatarId0') assert.stringEquals('gravatarId0', retrievedGravatar!.get('id')!.toString()) }) - test("Returns null when calling entity.loadInBlock() if an entity doesn't exist in the current block", () => { + test("Retorna null ao chamar entity.loadInBlock() se uma entidade não existir no bloco atual", () => { let retrievedGravatar = Gravatar.loadInBlock('IDoNotExist') assert.assertNull(retrievedGravatar) }) @@ -1040,7 +1040,7 @@ describe('loadInBlock', () => { ### Como testar fontes de dados dinâmicas -Testing dynamic data sources can be be done by mocking the return value of the `context()`, `address()` and `network()` functions of the dataSource namespace. These functions currently return the following: `context()` - returns an empty entity (DataSourceContext), `address()` - returns `0x0000000000000000000000000000000000000000`, `network()` - returns `mainnet`. The `create(...)` and `createWithContext(...)` functions are mocked to do nothing so they don't need to be called in the tests at all. 
Changes to the return values can be done through the functions of the `dataSourceMock` namespace in `matchstick-as` (version 0.3.0+). +É possível testar fontes de dados dinâmicas ao simular o valor de retorno das funções `context()`, `address()` e `network()` do namespace do dataSource. Estas funções atualmente retornam o seguinte: `context()` — retorna uma entidade vazia (DataSourceContext); `address()` — retorna `0x0000000000000000000000000000000000000000`; `network()` — retorna `mainnet`. As funções `create(...)` e `createWithContext(...)` são simuladas para não fazer nada, portanto não precisam ser chamadas nos testes. Dá para mudar os valores de retorno através das funções do namespace `dataSourceMock` no `matchstick-as` (versão 0.3.0+). Exemplo abaixo: @@ -1070,7 +1070,7 @@ import { handleApproveTokenDestinations } from '../../src/token-lock-wallet' import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet' import { TokenLockWallet } from '../../generated/schema' -test('Data source simple mocking example', () => { +test('Exemplo simples de simulação de fonte de dados', () => { let addressString = '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A' let address = Address.fromString(addressString) @@ -1097,44 +1097,44 @@ Note que o dataSourceMock.resetValues() é chamado no final. Isto ### Teste de criação de fontes de dados dinâmicas -As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this: +Desde a versão `0.6.0`, é possível testar se uma nova fonte de dados foi criada de um modelo. Este recurso apoia modelos ethereum/contract e file/ipfs.
Há quatro funções para isto: -- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template -- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created -- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes -- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes +- `assert.dataSourceCount(templateName, expectedCount)` pode ser usado para impor a contagem esperada de fontes de dados do modelo especificado +- `assert.dataSourceExists(templateName, address/ipfsHash)` impõe que foi criada uma fonte de dados com o identificador especificado (seja um endereço de contrato ou um hash de arquivo IPFS) de um modelo especificado +- `logDataSources(templateName)` imprime todas as fontes de dados do modelo especificado ao console, para propósitos de debug +- `readFile(path)` lê um arquivo JSON que representa um arquivo IPFS e retorna o conteúdo como Bytes -#### Testing `ethereum/contract` templates +#### Teste de modelos `ethereum/contract` ```typescript test('ethereum/contract dataSource creation example', () => { - // Assert there are no dataSources created from GraphTokenLockWallet template + // Impor que não há dataSources criadas do modelo GraphTokenLockWallet assert.dataSourceCount('GraphTokenLockWallet', 0) - // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + // Criar uma nova datasource GraphTokenLockWallet com o endereço 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) - // Assert the dataSource has been created + // Assegurar que foi criada a dataSource
assert.dataSourceCount('GraphTokenLockWallet', 1) - // Add a second dataSource with context + // Adicionar uma segunda dataSource com contexto let context = new DataSourceContext() context.set('contextVal', Value.fromI32(325)) GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) - // Assert there are now 2 dataSources + // Verificar que agora há 2 dataSources assert.dataSourceCount('GraphTokenLockWallet', 2) - // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created - // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists + // Impor que foi criada uma dataSource com o endereço "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" + // Lembrar que o tipo `Address` é transformado em caixa baixa quando decodificado, então o endereço deve ser passado em caixa baixa ao verificar se existe assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase()) logDataSources('GraphTokenLockWallet') }) ``` -##### Example `logDataSource` output +##### Exemplo de resultado de `logDataSource` ```bash 🛠 { @@ -1158,11 +1158,11 @@ test('ethereum/contract dataSource creation example', () => { } ``` -#### Testing `file/ipfs` templates +#### Teste de modelos `file/ipfs` -Similarly to contract dynamic data sources, users can test test file data sources and their handlers +Assim como as fontes dinâmicas de dados de contrato, os utilizadores podem testar fontes de dados de arquivos e os seus handlers -##### Example `subgraph.yaml` +##### Exemplo de `subgraph.yaml` ```yaml ...
@@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1183,7 +1183,7 @@ templates: file: ./abis/GraphTokenLockWallet.json ``` -##### Example `schema.graphql` +##### Exemplo de `schema.graphql` ```graphql """ @@ -1203,7 +1203,7 @@ type TokenLockMetadata @entity { } ``` -##### Example `metadata.json` +##### Exemplo de `metadata.json` ```json { @@ -1218,9 +1218,9 @@ type TokenLockMetadata @entity { ```typescript export function handleMetadata(content: Bytes): void { - // dataSource.stringParams() returns the File DataSource CID - // stringParam() will be mocked in the handler test - // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files + // dataSource.stringParams() retorna CID de Fonte de Dados de Arquivo + // stringParam() será simulado no teste de handler + // para saber mais https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files let tokenMetadata = new TokenLockMetadata(dataSource.stringParam()) const value = json.fromBytes(content).toObject() @@ -1253,31 +1253,32 @@ import { TokenLockMetadata } from '../../generated/schema' import { GraphTokenLockMetadata } from '../../generated/templates' test('file/ipfs dataSource creation example', () => { - // Generate the dataSource CID from the ipfsHash + ipfs path file - // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json + // Gerar CID da dataSource do arquivo de local ipfsHash + ipfs + // Por exemplo QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' const CID = `${ipfshash}/example.json` - // Create a new dataSource using the generated CID + // Criar uma nova dataSource com o CID gerado GraphTokenLockMetadata.create(CID) - // Assert the dataSource has been created 
+ // Verificar se foi criada a dataSource assert.dataSourceCount('GraphTokenLockMetadata', 1) assert.dataSourceExists('GraphTokenLockMetadata', CID) logDataSources('GraphTokenLockMetadata') - // Now we have to mock the dataSource metadata and specifically dataSource.stringParam() - // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as - // First we will reset the values and then use dataSourceMock.setAddress() to set the CID + // Agora temos que simular os metadados da dataSource, e especificamente dataSource.stringParam() + // dataSource.stringParams usa o valor de dataSource.address(), então vamos simular o endereço com dataSourceMock de matchstick-as + // Primeiro, vamos reiniciar os valores e usar dataSourceMock.setAddress() para configurar o CID dataSourceMock.resetValues() dataSourceMock.setAddress(CID) - // Now we need to generate the Bytes to pass to the dataSource handler - // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes + // Agora precisamos gerar os Bytes para passar para o handler da dataSource + // Para este caso, apresentamos uma nova função readFile, que lê um json local e retorna o conteúdo como Bytes const content = readFile(`path/to/metadata.json`) handleMetadata(content) - // Now we will test if a TokenLockMetadata was created + // Agora vamos testar se foi criado um TokenLockMetadata const metadata = TokenLockMetadata.load(CID) assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1)) @@ -1289,29 +1290,29 @@ test('file/ipfs dataSource creation example', () => { ## Cobertura de Testes -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
-The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. +A ferramenta de cobertura de testes pega os binários de teste `wasm` compilados e os converte a arquivos `wat`, que podem então ser facilmente vistoriados para ver se os handlers definidos em `subgraph.yaml` foram chamados ou não. Como a cobertura de código (e os testes em geral) está num estado primitivo no AssemblyScript e WebAssembly, o **Matchstick** não pode procurar por coberturas de branch. Em vez disto, supomos que, se um handler foi chamado, o evento/a função correspondente já foi simulado com êxito. -### Prerequisites +### Pré-requisitos -To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: +Para executar a funcionalidade da cobertura de teste fornecida no **Matchstick**, prepare algumas coisas com antecedência: #### Exportar seus handlers -In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**. So for instance in our example, in our gravity.test.ts file we have the following handler being imported: +Para que o **Matchstick** confira quais handlers serão executados, estes handlers devem ser exportados do **arquivo de teste** primeiro. 
No nosso exemplo, temos o seguinte handler a ser importado no nosso arquivo gravity.test.ts: ```typescript import { handleNewGravatar } from '../../src/gravity' ``` -In order for that function to be visible (for it to be included in the `wat` file **by name**) we need to also export it, like this: +Para que essa função seja visível (para ser incluída no arquivo `wat` **por nome**), também precisamos exportá-la assim: ```typescript export { handleNewGravatar } ``` -### Usage +### Uso Assim que tudo estiver pronto, para executar a ferramenta de cobertura de testes, basta: @@ -1319,7 +1320,7 @@ Assim que tudo estiver pronto, para executar a ferramenta de cobertura de testes graph test -- -c ``` -You could also add a custom `coverage` command to your `package.json` file, like so: +Um comando `coverage` personalizado também pode ser adicionado ao seu arquivo `package.json`, assim: ```typescript "scripts": { @@ -1371,41 +1372,31 @@ Global test coverage: 22.2% (2/9 handlers). A saída do log inclui a duração do teste. Veja um exemplo: -`[Thu, 31 Mar 2022 13:54:54 +0300] Program executed in: 42.270ms.` +`[Thu, 31 Mar 2022 13:54:54 +0300] Program executed in: 42.270ms.` ## Erros comuns do compilador -> -> Critical: Could not create WasmInstance from valid module with context: unknown import: -> wasi_snapshot_preview1::fd_write has not been defined -> +> Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) +Isso significa que você usou `console.log` no seu código, que não é suportado pelo AssemblyScript. Por favor, considere usar a [API de registo](/subgraphs/developing/creating/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?.
> -> -> return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, -> defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, -> defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); > > in ~lib/matchstick-as/assembly/defaults.ts(18,12) > > ERROR TS2554: Expected ? arguments, but got ?. > -> -> return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, -> defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); > > in ~lib/matchstick-as/assembly/defaults.ts(24,12) -The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. +A incompatibilidade nos argumentos é causada por uma incompatibilidade entre as versões do `graph-ts` e do `matchstick-as`. Problemas como este são melhor resolvidos ao atualizar tudo para a versão mais recente. ## Outros Recursos -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme).
## Feedback diff --git a/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx index 7164b6d5a83c..1a1aca2c7b9e 100644 --- a/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/pt/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,30 +1,31 @@ --- -title: Deploying a Subgraph to Multiple Networks +title: Como Implantar um Subgraph em Várias Redes +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Como lançar o subgraph a várias redes +## Deploying the Subgraph to multiple networks -Em alguns casos, irá querer lançar o mesmo subgraph a várias redes sem duplicar o seu código completo. O grande desafio nisto é que os endereços de contrato nestas redes são diferentes. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. -### Using `graph-cli` +### Como usar o `graph-cli` -Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: +Tanto o `graph build` (desde a `v0.29.0`) quanto o `graph deploy` (desde a `v0.32.0`) aceitam duas novas opções: ```sh Options: ... 
- --network Network configuration to use from the networks config file - --network-file Networks config file path (default: "./networks.json") + --network Configuração de rede para usar no arquivo de config de redes + --network-file Local do arquivo de config de redes (padrão: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. -> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. +> Nota: O comando `init` agora irá gerar um `networks.json` automaticamente, com base na informação fornecida. Daí, será possível atualizar redes existentes ou adicionar redes novas. -If you don't have a `networks.json` file, you'll need to manually create one with the following structure: +Caso não tenha um arquivo `networks.json`, você deve criar o mesmo manualmente, com a seguinte estrutura: ```json { @@ -52,9 +53,9 @@ If you don't have a `networks.json` file, you'll need to manually create one wit } ``` -> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. +> Nota: Não é necessário especificar quaisquer dos `templates` (se tiver) no arquivo de configuração, apenas as `dataSources`. Se houver `templates` declarados no arquivo `subgraph.yaml`, sua rede será automaticamente atualizada à especificada na opção `--network`. 
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file local/do/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -111,9 +112,9 @@ dataSources: kind: ethereum/events ``` -Now you are ready to `yarn deploy`. +Agora está tudo pronto para executar o `yarn deploy`. -> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: +> Nota: Como anteriormente mencionado, desde o `graph-cli 0.32.0`, dá para executar diretamente o `yarn deploy` com a opção `--network`: ```sh # Usar o arquivo networks.json padrão @@ -125,9 +126,9 @@ yarn deploy --network sepolia --network-file local/do/config ### Como usar o template subgraph.yaml -One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). +Uma forma de parametrizar aspetos, como endereços de contratos, com versões mais antigas de `graph-cli` é: gerar partes dele com um sistema de modelos como o [Mustache](https://mustache.github.io/) ou o [Handlebars](https://handlebarsjs.com/). -Por exemplo, vamos supor que um subgraph deve ser lançado à mainnet e à Sepolia, através de diferentes endereços de contratos. 
Então, seria possível definir dois arquivos de config ao fornecer os endereços para cada rede: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -145,7 +146,7 @@ e } ``` -Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: +Além disso, dá para substituir o nome da rede e os endereços no manifest com os placeholders `{{network}}` e `{{address}}` e renomear o manifest para, por exemplo, `subgraph.template.yaml`: ```yaml # ... @@ -162,7 +163,7 @@ dataSources: kind: ethereum/events ``` -In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: +Para gerar um manifest para qualquer uma das redes, pode-se adicionar mais dois comandos ao `package.json` com uma dependência no `mustache`: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -Para lançar este subgraph à mainnet ou à Sepolia, apenas um dos seguintes comandos precisaria ser executado: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -189,29 +190,29 @@ yarn prepare:mainnet && yarn deploy yarn prepare:sepolia && yarn deploy ``` -A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). +Veja um exemplo funcional [aqui](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
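A ideia da substituição de placeholders pode ser ilustrada com um esboço hipotético em TypeScript, sem a dependência real do `mustache`: a função `render` abaixo é apenas ilustrativa (uma versão simplificada do que o mustache faz) e o endereço reaproveita o valor de exemplo usado nesta documentação.

```typescript
// Esboço simplificado do que o mustache faz com o manifest:
// troca os placeholders {{network}} e {{address}} pelos valores da config.
const template = [
  'dataSources:',
  '  - kind: ethereum/contract',
  '    network: {{network}}',
  '    source:',
  "      address: '{{address}}'",
].join('\n')

// Substitui cada {{chave}} pelo valor correspondente em `vars`.
function render(tpl: string, vars: { [key: string]: string }): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_m: string, key: string) => vars[key] ?? '')
}

const mainnetManifest = render(template, {
  network: 'mainnet',
  address: '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A',
})

console.log(mainnetManifest.includes('network: mainnet')) // true
```

Na prática, o `mustache` real faz essa substituição lendo o `subgraph.template.yaml` e o arquivo de config da rede escolhida, como nos comandos `prepare:mainnet` e `prepare:sepolia` mostrados acima.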
-**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. +**Observe:** Este método também pode ser aplicado a situações mais complexas, onde é necessário substituir mais que endereços de contratos e nomes de redes, ou gerar mapeamentos e ABIs de templates também. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Política de arqivamento do Subgraph Studio +## Subgraph Studio Subgraph archive policy -Uma versão de subgraph no Studio é arquivada se, e apenas se, atender aos seguintes critérios: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - A versão não foi publicada na rede (ou tem a publicação pendente) - A versão foi criada há 45 dias ou mais -- O subgraph não foi consultado em 30 dias +- The Subgraph hasn't been queried in 30 days -Além disto, quando uma nova versão é editada, se o subgraph ainda não foi publicado, então a versão N-2 do subgraph é arquivada. 
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Todos os subgraphs afetados por esta política têm a opção de trazer de volta a versão em questão. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Como conferir a saúde do subgraph +## Checking Subgraph health -Se um subgraph for sincronizado com sucesso, isto indica que ele continuará a rodar bem para sempre. Porém, novos gatilhos na rede podem revelar uma condição de erro não testada, ou ele pode começar a se atrasar por problemas de desempenho ou com os operadores de nodes. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx index d9e9be3f83e9..5a8e4fb9f905 100644 --- a/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/pt/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -1,39 +1,39 @@ --- -title: Deploying Using Subgraph Studio +title: Como Implantar com o Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. 
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. -## Subgraph Studio Overview +## Visão Geral do Subgraph Studio -In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: +No [Subgraph Studio](https://thegraph.com/studio/), você pode fazer o seguinte: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Criar e gerir as suas chaves de API para subgraphs específicos -- Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network -- Manage your billing +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs +- Restringir as suas chaves de API a domínios específicos e permitir que apenas certos indexadores façam queries com eles +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network +- Gerir o seu faturamento -## Install The Graph CLI +## Instalar a CLI do The Graph -Before deploying, you must install The Graph CLI. +Antes de implantar, você deve instalar a Graph CLI (CLI do The Graph). -You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. +É necessário ter [Node.js](https://nodejs.org/) e um gerenciador de pacotes da sua escolha (`npm`, `yarn` ou `pnpm`) instalados, para utilizar a Graph CLI. Verifique a versão [mais recente](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) da CLI. -### Install with yarn +### Instalação com o yarn ```bash yarn global add @graphprotocol/graph-cli ``` -### Install with npm +### Instalação com o npm ```bash npm install -g @graphprotocol/graph-cli @@ -41,97 +41,91 @@ npm install -g @graphprotocol/graph-cli ## Como Começar -1. Open [Subgraph Studio](https://thegraph.com/studio/). -2. Connect your wallet to sign in. - - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +1. Abra o [Subgraph Studio](https://thegraph.com/studio/). +2. Conecte a sua carteira para fazer login. + - É possível fazer isso via MetaMask, Carteira da Coinbase, WalletConnect, ou Safe. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. -> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### Como Criar um Subgraph no Subgraph Studio -> For additional written detail, review the [Quick Start](/subgraphs/quick-start/). +> Para mais detalhes, consulte o [Guia de Início Rápido](/subgraphs/quick-start/). 
### Compatibilidade de Subgraph com a Graph Network -Para ter apoio de Indexadores na Graph Network, os subgraphs devem: +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. -- Index a [supported network](/supported-networks/) -- Não deve usar quaisquer das seguintes características: - - ipfs.cat & ipfs.map - - Erros não-fatais - - Enxerto +## Como inicializar o seu Subgraph -## Initialize Your Subgraph - -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. ## Autenticação -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. 
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. -Then, use the following command to authenticate from the CLI: +Em seguida, use o seguinte comando para autenticar a partir da CLI: ```bash graph auth ``` -## Deploying a Subgraph +## Como Implantar um Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy ``` -After running this command, the CLI will ask for a version label. +Após executar este comando, a CLI solicitará um rótulo de versão. -- It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as `v1`, `version1`, or `asdf`. -The labels you create will be visible in Graph Explorer and can be used by curators to decide if they want to signal on a specific version or not, so choose them wisely. +- É altamente recomendado usar o [semver](https://semver.org/) para rótulos de versão, como `0.0.1`. Dito isto, dá para escolher qualquer string como versão, por exemplo: `v1`, `version1`, `asdf`. +- Os rótulos de versão criados serão visíveis no Graph Explorer, e podem ser usados pelos curadores para decidir se querem ou não sinalizar numa versão específica, então escolha com sabedoria.
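Como os rótulos de versão são strings livres, pode ser útil validar num script próprio que eles seguem o padrão semver antes de implantar. O esboço abaixo é hipotético: a função `isSemverLabel` não faz parte da Graph CLI e cobre apenas o formato básico `MAJOR.MINOR.PATCH`.

```typescript
// Verificação hipotética e simplificada de que um rótulo segue o formato
// MAJOR.MINOR.PATCH do semver (a especificação completa em https://semver.org/
// também cobre pré-lançamentos e metadados de build).
function isSemverLabel(label: string): boolean {
  return /^\d+\.\d+\.\d+$/.test(label)
}

console.log(isSemverLabel('0.0.1')) // true
console.log(isSemverLabel('v1')) // false
```

Um script assim poderia rodar antes do `graph deploy` para rejeitar rótulos fora do padrão escolhido pela equipe.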
-## Testing Your Subgraph +## Como Testar o Seu Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. -## Publish Your Subgraph +## Edite o Seu Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -## Versioning Your Subgraph with the CLI +## Como Fazer Versões do Seu Subgraph com a CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: -- You can deploy a new version to Studio using the CLI (it will only be private at this point). -- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- Você pode implantar uma nova versão para o Studio com a CLI (no momento, só será privada). +- Quando o resultado estiver satisfatório, você poderá editar a sua nova implantação para o [Graph Explorer](https://thegraph.com/explorer). +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. 
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Arquivamento Automático de Versões de Subgraphs -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. 
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. -![Subgraph Studio - Unarchive](/img/Unarchive.png) +![Subgraph Studio — Tirar Arquivo](/img/Unarchive.png) diff --git a/website/src/pages/pt/subgraphs/developing/developer-faq.mdx b/website/src/pages/pt/subgraphs/developing/developer-faq.mdx index 94f963a2fa3a..8878494e4c34 100644 --- a/website/src/pages/pt/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/pt/subgraphs/developing/developer-faq.mdx @@ -1,71 +1,71 @@ --- -title: Developer FAQ -sidebarTitle: FAQ +title: Perguntas frequentes do programador +sidebarTitle: Perguntas Frequentes --- -This page summarizes some of the most common questions for developers building on The Graph. +Esta página resume algumas das perguntas mais comuns para programadores que trabalham no The Graph. -## Subgraph Related +## Perguntas sobre Subgraphs -### 1. O que é um subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. 
What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Posso mudar a conta do GitHub associada ao meu subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. 
Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Deve relançar o subgraph, mas se a ID do subgraph (hash IPFS) não mudar, ele não precisará sincronizar do começo. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). +Veja o estado de `Acesso ao contrato inteligente` dentro da secção [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? -Not currently, as mappings are written in AssemblyScript. 
+Não atualmente, afinal, os mapeamentos são escritos em AssemblyScript. -One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +Uma solução alternativa possível é armazenar dados brutos em entidades e executar uma lógica que exige bibliotecas de JS no cliente. -### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 9. Ao escutar vários contratos, é possível selecionar a ordem do contrato para escutar eventos? -Dentro de um subgraph, os eventos são sempre processados na ordem em que aparecem nos blocos, mesmo sendo ou não através de vários contratos. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. -### 10. How are templates different from data sources? +### 10. Quais são as diferenças entre modelos e fontes de dados? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. -Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). 
+Confira a secção "Como instanciar um modelo de fonte de dados" em: [Modelos de Fonte de Dados](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. +Sim. No comando `graph init`, pode-se adicionar várias dataSources ao inserir um contrato após o outro. -You can also use `graph add` command to add a new dataSource. +O comando `graph add` também pode adicionar uma nova dataSource. -### 12. In what order are the event, block, and call handlers triggered for a data source? +### 12. Em qual ordem os handlers de evento, bloco, e chamada são ativados para uma fonte de dados? Primeiro, handlers de eventos e chamadas são organizados pelo índice de transações dentro do bloco. Handlers de evento e chamada dentro da mesma transação são organizados com uma convenção: handlers de eventos primeiro e depois handlers de chamadas, com cada tipo a respeitar a ordem em que são definidos no manifest. Handlers de blocos são executados após handlers de eventos e chamadas, na ordem em que são definidos no manifest. Estas regras de organizações estão sujeitas a mudanças. Com a criação de novas fontes de dados dinâmicas, os handlers definidos para fontes de dados dinâmicas só começarão a processar após o processamento dos handlers das fontes, e se repetirão na mesma sequência sempre que acionados. -### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 13. 
Como garantir que estou a usar a versão mais recente do graph-node para as minhas implantações locais? Podes executar o seguinte comando: @@ -73,25 +73,25 @@ Podes executar o seguinte comando: docker pull graphprotocol/graph-node:latest ``` -> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. +> Observação: O docker / docker-compose sempre usará a versão do graph-node que foi puxada na primeira vez que o executou, então é importante fazer isto para garantir que está em dia com a versão mais recente do graph-node. -### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. Qual é a forma recomendada de construir ids "autogeradas" para uma entidade ao lidar com eventos? -If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. +Se só uma entidade for criada durante o evento e não houver nada melhor disponível, então o hash da transação + o index do registo seria único. Esses podem ser ofuscados ao converter em Bytes e então passar pelo `crypto.keccak256`, mas isto não deixará os dados mais singulares. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. -## Network Related +## Perguntas sobre Rede -### 16. What networks are supported by The Graph? +### 16. Quais redes são apoiadas pelo The Graph? 
-You can find the list of the supported networks [here](/supported-networks/). +Veja a lista das redes apoiadas [aqui](/supported-networks/). -### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? +### 17. É possível diferenciar entre redes (mainnet, Sepolia, local) dentro de handlers de eventos? -Yes. You can do this by importing `graph-ts` as per the example below: +Sim. Isto é possível ao importar o `graph-ts` como no exemplo abaixo: ```javascript import { dataSource } from '@graphprotocol/graph-ts' @@ -100,21 +100,21 @@ dataSource.network() dataSource.address() ``` -### 18. Do you support block and call handlers on Sepolia? +### 18. Vocês apoiam handlers de bloco e de chamadas no Sepolia? Sim. O Sepolia apoia handlers de blocos, chamadas e eventos. Vale notar que handlers de eventos têm desempenho muito melhor do que os outros dois e têm apoio em todas as redes compatíveis com EVMs. -## Indexing & Querying Related +## Perguntas sobre Indexação e Queries -### 19. Is it possible to specify what block to start indexing on? +### 19. É possível especificar o bloco de onde a indexação deve começar? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) +Sim. O `dataSources.source.startBlock` no arquivo `subgraph.yaml` especifica o número do bloco de onde a fonte de dados começa a indexar. Geralmente, sugerimos usar o bloco em que o contrato foi criado: [Blocos de início](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My Subgraph is taking a very long time to sync

-Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
+Sim. Confira o recurso opcional de bloco inicial (start block) para começar a indexar do bloco em que o contrato foi lançado: [Blocos iniciais](/developing/creating-a-subgraph/#start-blocks)

-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?

Sim! Execute o seguinte comando, com "organization/subgraphName" substituído com a organização sob a qual ele foi publicado e o nome do seu subgraph:

@@ -122,25 +122,25 @@ Sim! Execute o seguinte comando, com "organization/subgraphName" substituído co
curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql
```

-### 22. Is there a limit to how many objects The Graph can return per query?
+### 22. Há um limite de quantos objetos o Graph pode retornar por query?

-By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with:
+Normalmente, respostas a queries são limitadas a 100 itens por coleção. Se quiser receber mais, pode subir para até 1000 itens por coleção; além disto, pode paginar com:

```graphql
someCollection(first: 1000, skip: <number>) { ... }
```

-### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
+### 23. Se a frontend do meu dApp usa o The Graph para queries, eu preciso escrever a minha chave de API diretamente na frontend? E se pagarmos taxas de query para utilizadores — algum utilizador malicioso pode aumentar demais estas taxas?

-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.

-## Miscellaneous
+## Outras Perguntas

-### 24. Is it possible to use Apollo Federation on top of graph-node?
+### 24. É possível usar a Apollo Federation juntamente ao graph-node?

-Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service.
+Ainda não há apoio ao Federation. No momento, é possível costurar schemas, seja no cliente ou via um serviço de proxy.

-### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
+### 25. Quero contribuir ou adicionar um problema no GitHub. Onde posso encontrar os repositórios de código aberto?
- [graph-node](https://github.com/graphprotocol/graph-node) - [graph-tooling](https://github.com/graphprotocol/graph-tooling) diff --git a/website/src/pages/pt/subgraphs/developing/introduction.mdx b/website/src/pages/pt/subgraphs/developing/introduction.mdx index e550867e2244..e7a5cdd3cc56 100644 --- a/website/src/pages/pt/subgraphs/developing/introduction.mdx +++ b/website/src/pages/pt/subgraphs/developing/introduction.mdx @@ -1,31 +1,31 @@ --- -title: Introduction to Subgraph Development +title: Introdução à Programação de Subgraphs sidebarTitle: Introdução --- -To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). +Para começar a programar imediatamente, confira o [Guia de Início Rápido do Programador](/subgraphs/quick-start/). ## Visão geral -As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. +Todo programador precisa de dados para criar e melhorar o seu dapp (aplicativo descentralizado). Consultar e indexar dados da blockchain é desafiador, mas o The Graph fornece uma solução para este problema. -On The Graph, you can: +Com o The Graph, você pode: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### O Que é a GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. 
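As a concrete illustration of the bullet above, a GraphQL query is just a string wrapped in a JSON body and POSTed to a Subgraph's query endpoint. The sketch below is a minimal JavaScript example; the endpoint URL in the comment and the `tokens` entity are hypothetical placeholders, not a real deployment:

```javascript
// A GraphQL query is a plain string describing the fields you want back.
const query = '{ tokens(first: 5) { id symbol } }'

// It travels as a JSON body in an HTTP POST. With fetch it would look like:
// fetch('https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>',
//   { method: 'POST', headers: { 'Content-Type': 'application/json' }, body })
const body = JSON.stringify({ query })

console.log(body)
```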
-### Developer Actions +### Ações de Programador -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx index d5305fe2cfbe..49cb207e435e 100644 --- a/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -1,31 +1,31 @@ --- -title: Deleting a Subgraph +title: Como Apagar um Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). 
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Passo a Passo -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). -2. Click on the three-dots to the right of the "publish" button. +2. Clique nos três pontos à direita do botão "publish" (editar). -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. -### Important Reminders +### Lembretes Importantes -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. 
However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Os curadores não poderão mais sinalizar no subgraph depreciado. -- Curadores que já sinalizaram no subgraph poderão retirar a sua sinalização a um preço de ação normal. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx index 1931370a6df7..7f4ead265671 100644 --- a/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -1,19 +1,19 @@ --- -title: Transferring a Subgraph +title: Transferências de Subgraphs --- -Subgraphs publicados na rede descentralizada terão um NFT mintado no endereço que publicou o subgraph. O NFT é baseado no padrão ERC-721, que facilita transferências entre contas na Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. -## Reminders +## Lembretes -- O dono do NFT controla o subgraph. -- Se o dono atual decidir vender ou transferir o NFT, ele não poderá mais editar ou atualizar aquele subgraph na rede. -- É possível transferir o controle de um subgraph para uma multisig. -- Um membro da comunidade pode criar um subgraph no nome de uma DAO. 
+- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. -## View Your Subgraph as an NFT +## Como visualizar o seu subgraph como um NFT -Para visualizar o seu subgraph como um NFT, visite um mercado de NFTs como o **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,15 +27,15 @@ https://rainbow.me/your-wallet-addres ## Passo a Passo -Para transferir a titularidade de um subgraph, faça o seguinte: +To transfer ownership of a Subgraph, do the following: 1. Use a interface embutida no Subgraph Studio: ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-1.png) -2. Escolha o endereço para o qual gostaria de transferir o subgraph: +2. 
Choose the address that you would like to transfer the Subgraph to: - ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-2.png) Também é possível usar a interface embutida de mercados de NFT, como o OpenSea: diff --git a/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx index ad08b1c68cf8..1d25ded18a61 100644 --- a/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/pt/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,49 +1,50 @@ --- title: Como Editar um Subgraph na Rede Descentralizada +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -Ao editar um subgraph à rede descentralizada, ele será disponibilizado para: +When you publish a Subgraph to the decentralized network, you make it available for: -- [Curators](/resources/roles/curating/) to begin curating it. -- [Indexers](/indexing/overview/) to begin indexing it. +- [Curadores](/resources/roles/curating/), para começarem a curadoria. +- [Indexadores](/indexing/overview/), para começarem a indexação. -Check out the list of [supported networks](/supported-networks/). +Veja a lista das redes apoiadas [aqui](/supported-networks/). ## Edição do Subgraph Studio -1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard -2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +1. 
Entre no painel de controlo do [Subgraph Studio](https://thegraph.com/studio/) +2. Clique no botão **Publish** (Editar) +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Todas as versões editadas de um subgraph existente podem: +All published versions of an existing Subgraph can: -- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). +- Ser editados no Arbitrum One. [Saiba mais sobre The Graph Network no Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Como atualizar metadados para um subgraph editado +### Updating metadata for a published Subgraph -- Após editar o seu subgraph à rede descentralizada, será possível editar os metadados a qualquer hora no Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Após salvar as suas mudanças e publicar as atualizações, elas aparecerão no Graph Explorer. - É importante notar que este processo não criará uma nova versão, já que a sua edição não terá mudado. ## Publicação da CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -1. Open the `graph-cli`. -2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. Uma janela será aberta para o programador conectar a sua carteira, adicionar metadados e lançar o seu subgraph finalizado a uma rede de sua escolha. +1. Abra a `graph-cli`. +2. 
Use os seguintes comandos: `graph codegen && graph build` e depois `graph publish`. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) ### Como personalizar o seu lançamento -É possível enviar a sua build a um node IPFS específico e personalizar ainda mais o seu lançamento com as seguintes flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -51,44 +52,44 @@ USAGE ] FLAGS - -h, --help Show CLI help. - -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node. - --ipfs-hash= IPFS hash of the subgraph manifest to deploy. - --protocol-network=
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). 
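The two `graphman` workflows described above can be summarized as a short command sketch. The `<deployment>.<table>` argument is a placeholder (the concrete table specifier is elided in the text above); only `graphman stats account-like`, its `--clear` flag, and `graphman drop` are taken from the surrounding docs:

```bash
# Turn the account-like optimization on for a hot table:
graphman stats account-like <deployment>.<table>

# Turn it off again if Grafana shows queries slowing down in pg_stat_activity:
graphman stats account-like --clear <deployment>.<table>

# Delete a deployment and all its indexed data; the target may be a
# Subgraph name, an IPFS hash (Qm..), or a database namespace (sgdNNN):
graphman drop sgdNNN
```

Allow up to five minutes for query nodes to pick up either stats change before judging its effect on query latency.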
diff --git a/website/src/pages/ro/indexing/tooling/graphcast.mdx b/website/src/pages/ro/indexing/tooling/graphcast.mdx index cac63bbd9340..461fe3852377 100644 --- a/website/src/pages/ro/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ro/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Află mai multe diff --git a/website/src/pages/ro/resources/benefits.mdx b/website/src/pages/ro/resources/benefits.mdx index 6e698c54af73..ebc1b62b67a3 100644 --- a/website/src/pages/ro/resources/benefits.mdx +++ b/website/src/pages/ro/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | Rețeaua The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | Rețeaua The Graph | +| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | Rețeaua The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries 
per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | Rețeaua The Graph | +| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | Rețeaua The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | Rețeaua The Graph | +| :--------------------------: | :-----------------------------------------: | 
:-------------------------------------------------------------: | +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). 
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ro/resources/glossary.mdx b/website/src/pages/ro/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/ro/resources/glossary.mdx +++ b/website/src/pages/ro/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. 
The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. 
Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. 
Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. 
When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. 
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. 
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..aead2514ff51 100644 --- a/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ro/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Features @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## How to upgrade? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null maybeValue.aMethod() ``` -If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in your Subgraph handler. ### Variable Shadowing @@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing. 
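The safe-versus-unsafe nullable pattern recommended above can be sketched in plain TypeScript (the `load` helper and its backing store are hypothetical stand-ins for an entity lookup; AssemblyScript handler code follows the same shape):

```typescript
// Hypothetical nullable lookup, standing in for an entity load.
function load(id: string): string | null {
  const store: { [key: string]: string } = { "1": "found" };
  return id in store ? store[id] : null;
}

// Safe version: check for null and return early from the handler.
function handleSafe(id: string): string {
  const maybeValue = load(id);
  if (maybeValue == null) {
    return "skipped"; // nothing to do when the value is missing
  }
  return maybeValue.toUpperCase();
}
```

The non-null assertion variant (`load(id)!`) compiles but fails at runtime whenever the value is missing, which is why the guide recommends the safe check with an early return.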
### Null Comparisons -By doing the upgrade on your subgraph, sometimes you might get errors like these: +By doing the upgrade on your Subgraph, sometimes you might get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // doesn't give compile time errors as it should ``` -We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it. +We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this: +It will compile but break at runtime. That happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized diff --git a/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx index 29fed533ef8c..ebed96df1002 100644 --- a/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ro/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide. You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. 
Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. -> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. ## Migration CLI tool diff --git a/website/src/pages/ro/resources/roles/curating.mdx b/website/src/pages/ro/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/ro/resources/roles/curating.mdx +++ b/website/src/pages/ro/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. 
The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate.
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. 
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). 
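Taken together, the fees described above (a 1% curation tax on each signal and a 0.5% tax on each auto-migration) are simple percentage burns. A minimal sketch of that arithmetic, with illustrative numbers and function names rather than anything from the protocol itself:

```python
def signal(grt: float, tax_rate: float = 0.01) -> float:
    """GRT remaining as active signal after the 1% curation tax is burned."""
    return grt * (1 - tax_rate)

def auto_migrate(signaled_grt: float, migration_tax_rate: float = 0.005) -> float:
    """Signal remaining after the 0.5% tax charged on one auto-migration."""
    return signaled_grt * (1 - migration_tax_rate)

deposited = signal(1000.0)            # ~990 GRT remains after the 1% tax
after_move = auto_migrate(deposited)  # ~985 GRT after one 0.5% migration
```

Each migration compounds on the already-taxed amount, which is why frequent version publishing erodes auto-migrated positions.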
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. 
Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. 
As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. 
When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/ro/resources/roles/delegating/undelegating.mdx b/website/src/pages/ro/resources/roles/delegating/undelegating.mdx index c3e31e653941..6a361c508450 100644 --- a/website/src/pages/ro/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/ro/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5.
Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Additional Resources diff --git a/website/src/pages/ro/resources/subgraph-studio-faq.mdx b/website/src/pages/ro/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/ro/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ro/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! 
You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. 
These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via the new API key are paid queries, like any other on the network. diff --git a/website/src/pages/ro/resources/tokenomics.mdx b/website/src/pages/ro/resources/tokenomics.mdx index 4a9b42ca6e0d..dac3383a28e7 100644 --- a/website/src/pages/ro/resources/tokenomics.mdx +++ b/website/src/pages/ro/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network.
In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. 
However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. 
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. 
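The staking constraints above (a 100,000 GRT minimum self-stake and delegation usable up to 16 times self-stake) can be sketched as a small calculation. The function and constant names here are illustrative, not part of any protocol API:

```python
MIN_SELF_STAKE = 100_000   # minimum GRT an Indexer must self-stake
DELEGATION_RATIO = 16      # delegation is usable up to 16x self-stake

def usable_delegation(self_stake: int, delegated: int) -> int:
    """GRT of delegation the Indexer can actually allocate.

    Delegation beyond 16x self-stake sits idle until self-stake increases
    (the "over-delegated" case described above).
    """
    if self_stake < MIN_SELF_STAKE:
        raise ValueError("below the 100,000 GRT minimum self-stake")
    return min(delegated, DELEGATION_RATIO * self_stake)

# An Indexer self-staking 100k GRT can use at most 1.6M GRT of delegation.
```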
## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% burn of query fees for blockchain data.
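As a rough back-of-the-envelope check on the figures above (3% target issuance against roughly 1% burned), net supply growth compounds at about 2% per year. This sketch treats both rates as fixed annual percentages, which is a simplification of how issuance and burning actually accrue:

```python
def projected_supply(initial_supply: float, years: int,
                     issuance_rate: float = 0.03, burn_rate: float = 0.01) -> float:
    """Compound ~3% issuance against ~1% burn per year (illustrative only)."""
    supply = initial_supply
    for _ in range(years):
        supply *= (1 + issuance_rate - burn_rate)
    return supply

# Starting from 10B GRT, one year of ~2% net growth gives ~10.2B GRT.
```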
![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/ro/sps/introduction.mdx b/website/src/pages/ro/sps/introduction.mdx index b11c99dfb8e5..92d8618165dd 100644 --- a/website/src/pages/ro/sps/introduction.mdx +++ b/website/src/pages/ro/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). 
In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ro/sps/sps-faq.mdx b/website/src/pages/ro/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/ro/sps/sps-faq.mdx +++ b/website/src/pages/ro/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? 
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node.
Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams?
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. 
They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs.
+You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/ro/sps/triggers.mdx b/website/src/pages/ro/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/ro/sps/triggers.mdx +++ b/website/src/pages/ro/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. 
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/ro/sps/tutorial.mdx b/website/src/pages/ro/sps/tutorial.mdx index 55e563608bce..e20a22ba4b1c 100644 --- a/website/src/pages/ro/sps/tutorial.mdx +++ b/website/src/pages/ro/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
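The trigger-handler pattern shown above — decode raw Substreams bytes into a Protobuf `Transactions` object, loop over the transactions, and create one entity per transaction — can be sketched in plain TypeScript. The `Transactions` shape, `decodeTransactions` helper, and the `Map`-based entity store below are illustrative stand-ins, not the as-proto generated code or the graph-ts `store` API:

```typescript
// Illustrative stand-in for the Protobuf-generated types (hypothetical shape).
interface Transaction {
  hash: string;
  from: string;
  to: string;
}

interface Transactions {
  transactions: Transaction[];
}

// Stand-in decoder: a real Subgraph uses the as-proto generated decode function.
function decodeTransactions(bytes: Uint8Array): Transactions {
  const json = new TextDecoder().decode(bytes);
  return JSON.parse(json) as Transactions;
}

// Simulated entity table keyed by ID, mimicking what graph-node persists.
const store = new Map<string, Transaction>();

export function handleTransactions(bytes: Uint8Array): void {
  // 1. Decode the raw Substreams payload into a structured object.
  const input = decodeTransactions(bytes);
  // 2. Loop over the transactions.
  for (const tx of input.transactions) {
    // 3. Create one entity per transaction, keyed by its hash.
    store.set(tx.hash, tx);
  }
}
```

The real handler differs only in where the decoder and store come from; the control flow is the same three steps the docs enumerate for `mappings.ts`.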
## Get Started @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! 
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/ro/subgraphs/_meta-titles.json b/website/src/pages/ro/subgraphs/_meta-titles.json index 0556abfc236c..3fd405eed29a 100644 --- a/website/src/pages/ro/subgraphs/_meta-titles.json +++ b/website/src/pages/ro/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", + "guides": "How-to Guides", "best-practices": "Best Practices" } diff --git a/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
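The payoff of declared `eth_calls` described above — graph-node caches the result in memory so the handler reads from the cache instead of issuing an RPC round-trip — can be sketched as a simple memoizing cache. Everything here is illustrative (the class, method names, and simulated RPC are not the graph-node API):

```typescript
// Sketch of the in-memory caching idea behind declared eth_calls:
// the "RPC" is hit at most once per (contract, method, arg) key, and
// every subsequent handler lookup is served from the cache.
class CallCache {
  private cache = new Map<string, string>();
  rpcCalls = 0; // counts simulated round-trips to the Ethereum node

  // Pretend RPC: in a real Indexer this would be an eth_call round-trip.
  private simulateRpc(contract: string, method: string, arg: string): string {
    this.rpcCalls++;
    return `pool-of-${arg}`; // hypothetical return value
  }

  get(contract: string, method: string, arg: string): string {
    const key = `${contract}:${method}:${arg}`;
    let result = this.cache.get(key);
    if (result === undefined) {
      // Cache miss: pay for one round-trip, then remember the answer.
      result = this.simulateRpc(contract, method, arg);
      this.cache.set(key, result);
    }
    return result; // cache hit on every later call with the same key
  }
}
```

The design point: with declared calls, indexing cost is bounded by one RPC per declared call per block, rather than one RPC per handler access, which is where the speedup comes from.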
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
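The storage model behind `@derivedFrom` described above — only the `Comment` side stores the relationship, and a `Post`'s comment list is derived by a reverse lookup at query time — can be sketched in plain TypeScript. The types and helper names are hypothetical; real derivation happens inside graph-node, not in mapping code:

```typescript
// Hypothetical entity shapes mirroring the Post/Comment schema example.
interface Comment {
  id: string;
  post: string; // ID of the parent Post -- the only stored side of the link
}

// Simulated Comment table. Note there is no comments array on any Post row.
const comments: Comment[] = [];

function addComment(id: string, postId: string): void {
  // Handlers only append the child entity; no Post entity is loaded,
  // mutated, or rewritten, which is what keeps indexing fast.
  comments.push({ id, post: postId });
}

// Equivalent of querying `post { comments { id } }` on a @derivedFrom field:
// the list is computed by filtering on the child's stored foreign key.
function commentsForPost(postId: string): Comment[] {
  return comments.filter((c) => c.post === postId);
}

// Reverse lookup: from any Comment back to the Post it belongs to.
function postForComment(commentId: string): string | undefined {
  return comments.find((c) => c.id === commentId)?.post;
}
```

This is why the directive avoids the unbounded-array problem: no handler ever reads, grows, and rewrites a list, no matter how many comments a post accumulates.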
diff --git a/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx index d514e1633c75..674cf6b87c62 100644 --- a/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
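As a sketch of why this matters, the plain-TypeScript snippet below mimics the shape of a `Bytes` ID built with `concatI32()`: a fixed-width binary value (transaction hash plus a 4-byte log index) rather than a hex-and-dash string. The helper names and the little-endian byte order used here are illustrative assumptions, not the actual `graph-ts` implementation.

```typescript
// Illustrative only: emulates concatenating a transaction hash with a log
// index into a single fixed-width binary ID, similar in spirit to what
// graph-ts' `concatI32()` produces.
// (Assumption: the i32 is appended little-endian; graph-ts may differ.)
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  new DataView(out.buffer).setInt32(bytes.length, value, true);
  return out;
}

// Hex rendering, roughly as a Bytes ID would appear in query results.
function toHex(bytes: Uint8Array): string {
  return (
    "0x" +
    Array.from(bytes)
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("")
  );
}

// A fake 4-byte "transaction hash" and log index 1:
const hash = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);
console.log(toHex(concatI32(hash, 1))); // → 0xdeadbeef01000000
```

In an actual mapping the equivalent one-liner is `event.transaction.hash.concatI32(event.logIndex.toI32())`, which yields this kind of compact `Bytes` ID instead of the slower `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` string.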
diff --git a/website/src/pages/ro/subgraphs/best-practices/pruning.mdx b/website/src/pages/ro/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/ro/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx index cacdc44711fe..9732199531a8 100644 --- a/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ro/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `specVersion` 1.1.0 or above for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation.
Key requirements: @@ -51,7 +55,7 @@ Example: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Example: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/ro/subgraphs/billing.mdx b/website/src/pages/ro/subgraphs/billing.mdx index c9f380bb022c..ec654ca63f55 100644 --- a/website/src/pages/ro/subgraphs/billing.mdx +++ b/website/src/pages/ro/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/ro/subgraphs/cookbook/arweave.mdx b/website/src/pages/ro/subgraphs/cookbook/arweave.mdx index 2372025621d1..e59abffa383f 100644 --- a/website/src/pages/ro/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3.
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token <your-access-token> @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes).
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/ro/subgraphs/cookbook/enums.mdx b/website/src/pages/ro/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/ro/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/ro/subgraphs/cookbook/grafting.mdx b/website/src/pages/ro/subgraphs/cookbook/grafting.mdx index 57d5169830a7..d9abe0e70d2a 100644 --- a/website/src/pages/ro/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,38 +22,38 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. 
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. 
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. 
The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). 
The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/ro/subgraphs/cookbook/near.mdx b/website/src/pages/ro/subgraphs/cookbook/near.mdx index 6060eb27e761..e78a69eb7fa2 100644 --- a/website/src/pages/ro/subgraphs/cookbook/near.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs? 
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. 
-There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. 
This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? +### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. 
We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). 
Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/ro/subgraphs/cookbook/polymarket.mdx b/website/src/pages/ro/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/ro/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. 
+The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/ro/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/ro/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fc7e0ff52eb4..e17e594408ff 100644 --- a/website/src/pages/ro/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
-### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/ro/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/ro/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..de6fdd9fd9fb --- /dev/null +++ b/website/src/pages/ro/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Get Started + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. 
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
diff --git a/website/src/pages/ro/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ro/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..17b105edac59 --- /dev/null +++ b/website/src/pages/ro/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. 
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Get Started + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/ro/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ro/subgraphs/cookbook/subgraph-debug-forking.mdx index 6610f19da66d..91aa7484d2ec 100644 --- a/website/src/pages/ro/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The mismatch between the quick iterations needed for debugging and the long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! ## Ok, what is it? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait for it to sync up to block _X_. ## What?! How? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
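Conceptually, a fork behaves like a local store that consults the remote, already-synced store whenever an entity is missing locally. The sketch below illustrates that lazy-fetch idea in plain TypeScript; it is an analogy, not Graph Node's actual implementation:

```typescript
// Conceptual model of a forked store: a local cache that lazily falls back
// to a "remote" store (the forked Subgraph, synced up to block X) on a miss.
type Entity = { id: string; [key: string]: string };

class ForkedStore {
  private local = new Map<string, Entity>();
  // remoteFetch stands in for a GraphQL lookup against the fork-base endpoint.
  constructor(private remoteFetch: (id: string) => Entity | null) {}

  get(id: string): Entity | null {
    const cached = this.local.get(id);
    if (cached) return cached; // already fetched or written locally
    const remote = this.remoteFetch(id); // lazy fetch from the forked store
    if (remote) this.local.set(id, remote); // cache so each entity is fetched once
    return remote ?? null;
  }

  set(entity: Entity): void {
    this.local.set(entity.id, entity); // local writes shadow the remote data
  }
}

// The remote store already holds entities up to block X, so handlers being
// debugged see an up-to-date view without re-indexing from genesis.
const remote = new Map<string, Entity>([["0x1", { id: "0x1", displayName: "Alice" }]]);
const store = new ForkedStore((id) => remote.get(id) ?? null);
```

Because local writes shadow the remote data, your fixed handlers can overwrite entities freely without touching the remote Subgraph's store.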
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Please, show me some code! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. The usual way to attempt a fix is: 1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Wait for it to sync-up. 4. If it breaks again go back to 1, otherwise: Hooray! 
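The bug this walkthrough eventually hunts down is a classic store-key mismatch. As a standalone illustration (plain TypeScript standing in for AssemblyScript), here is why two different conversions of the same event parameter make a lookup miss:

```typescript
// The same numeric id produces different store keys depending on the
// conversion used, so an entity saved under one key is invisible to a
// lookup using the other.
const id = 255; // stands in for event.params.id

const hexKey = "0x" + id.toString(16); // analogous to event.params.id.toHex()
const intKey = id.toString(10); // analogous to event.params.id.toI32().toString()

// handleNewGravatar saved the entity under the hex key...
const store = new Map<string, string>([[hexKey, "gravatar"]]);
// ...but handleUpdatedGravatar looks up the int key: "Gravatar not found!"
const lookup = store.get(intKey);
```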
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. 
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho) diff --git a/website/src/pages/ro/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ro/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/ro/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that match the user's specification. - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
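To illustrate the idea, a generated helper typically replaces a crash-prone `load` with a loader that initializes missing entities with safe defaults and logs a warning. This is a hand-written sketch with hypothetical names, not the tool's actual generated output:

```typescript
// Sketch of an "uncrashable" loader: it never returns undefined and always
// initializes required fields, so mishandled entities cannot crash indexing.
type Gravatar = { id: string; displayName: string; imageUrl: string };

const store = new Map<string, Gravatar>();

function getOrInitialiseGravatar(id: string): Gravatar {
  let entity = store.get(id);
  if (!entity) {
    // Instead of crashing on a missing entity, create it with safe defaults
    // and record a warning so the logic breach can be patched later.
    console.warn(`Gravatar ${id} was not found; initialising with defaults`);
    entity = { id, displayName: "", imageUrl: "" };
    store.set(id, entity);
  }
  return entity;
}
```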
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph Uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ro/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ro/subgraphs/cookbook/transfer-to-the-graph.mdx index 194deb018404..9a4b037cafbc 100644 --- a/website/src/pages/ro/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/ro/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Example -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
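Putting the endpoint pieces together, a dapp-side query helper might look like the following sketch. The API key is a placeholder, and the `punks` field is an assumed example entity for illustration:

```typescript
// Sketch: assembling a gateway query request for a published Subgraph.
// API_KEY is a placeholder; replace it with a key from Subgraph Studio.
const GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api";
const API_KEY = "your-own-api-key"; // from the "API Keys" menu in Subgraph Studio
const DEPLOYMENT_ID = "HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK"; // CryptoPunks example

function buildRequest(query: string): { url: string; body: string } {
  return {
    // The query URL shape: <gateway>/<api-key>/subgraphs/id/<deployment-id>
    url: `${GATEWAY}/${API_KEY}/subgraphs/id/${DEPLOYMENT_ID}`,
    body: JSON.stringify({ query }),
  };
}

// Hypothetical entity field for illustration; check the Subgraph's schema.
const req = buildRequest(`{ punks(first: 5) { id owner } }`);
// In a dapp you would now POST it, e.g.:
// await fetch(req.url, { method: "POST", body: req.body });
```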
### Additional Resources -- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/). -- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx index ee9918f5f254..8dbc48253034 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/advanced.mdx @@ -4,9 +4,9 @@ title: Advanced Subgraph Features ## Overview -Add and implement advanced subgraph features to enhanced your subgraph's built. +Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. 
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. 
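On the client side, a dapp querying with `subgraphError: allow` can detect skipped errors from either signal: the `indexing_error` GraphQL error or `_meta.hasIndexingErrors`. A minimal sketch:

```typescript
// Sketch: flagging potentially inconsistent data from a Subgraph that has
// skipped over non-fatal indexing errors.
type GraphQLError = { message: string };
type QueryResponse = {
  data?: { _meta?: { hasIndexingErrors: boolean } };
  errors?: GraphQLError[];
};

function hasSkippedErrors(res: QueryResponse): boolean {
  // The response may carry both data and an "indexing_error" GraphQL error.
  if (res.errors?.some((e) => e.message === "indexing_error")) return true;
  // Alternatively, `_meta` reports whether errors were skipped.
  return res.data?._meta?.hasIndexingErrors ?? false;
}
```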
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! -#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. 
To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. 
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. @@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. 
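The matching rule Example 1 expresses can be stated in a few lines. This sketch (a hypothetical helper, not Graph Node code) treats `topics[1]` and `topics[2]` as the first and second indexed arguments, mirroring the EVM layout described above:

```typescript
// Hypothetical matcher illustrating topic-filter semantics: topics[0] is the
// event signature hash, topics[1..3] are the indexed arguments.
type Log = { topics: string[] };

function matchesFilter(log: Log, topic1: string[], topic2: string[]): boolean {
  const sender = log.topics[1] ?? "";
  const receiver = log.topics[2] ?? "";
  // A position configured with a list of values matches if the log's topic
  // at that position is any of the listed values.
  return topic1.includes(sender) && topic2.includes(receiver);
}

const transferLog: Log = {
  topics: ["TransferSignatureHash", "0xAddressA", "0xAddressB"], // placeholder values
};
```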
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. 
@@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
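The arithmetic behind the scenario above is simply sum versus max: sequential calls pay the total of all latencies, while declared (parallel) calls pay only the slowest one. In code:

```typescript
// Latencies (in seconds) of the three example calls:
// transactions, balance, token holdings.
const latencies = [3, 2, 4];

// Sequential execution: each call waits for the previous one to finish.
const sequentialSeconds = latencies.reduce((total, t) => total + t, 0); // 3 + 2 + 4

// Declared (parallel) execution: all calls start together, so the
// slowest call dominates: max(3, 2, 4).
const parallelSeconds = Math.max(...latencies);
```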
-`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm...
# Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. 
It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx index 35bb04826c98..2e256ae18190 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +483,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. 
These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Handling Reverted Calls @@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,7 +590,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +887,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx index f98ef589aaef..ee168286548b 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Creează un Subgraf ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..2eb805320753 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. 
Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. 
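Since `fullTextSearch` is feature-gated, a minimal sketch of the corresponding manifest entry might look like this (only the `features` key is shown; the rest of `subgraph.yaml` is omitted):

```yaml
# Sketch: declaring the fulltext search feature in subgraph.yaml.
# All other manifest fields (specVersion, schema, dataSources, ...) are omitted here.
features:
  - fullTextSearch
```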
## Languages supported diff --git a/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx index 4823231d9a40..4931e6b1fd34 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. 
[Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx index a42a50973690..085eaf2fb533 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. 
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. 
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. ### Defining a Call Handler @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. 
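The idea of "run after every block, or only after blocks that match a filter" can be sketched in plain TypeScript. This is illustrative only; Graph Node's real scheduling is internal and not exposed like this, and the names below are invented for the sketch:

```typescript
// Illustrative sketch of block-handler dispatch: a handler either has no
// filter (and runs on every block) or a filter predicate selecting blocks.
type BlockFilter = (blockNumber: number) => boolean

function shouldRun(blockNumber: number, filter?: BlockFilter): boolean {
  // No filter: the handler runs for every block.
  if (filter === undefined) return true
  return filter(blockNumber)
}

// A predicate comparable to the `every` option described below: run on
// every 10th block.
const everyTen: BlockFilter = (n) => n % 10 === 0
```

For example, `shouldRun(42)` is true (no filter), while `shouldRun(42, everyTen)` is false and `shouldRun(50, everyTen)` is true.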
### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. 
This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. 
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -532,3 +532,18 @@ To preserve the complete history of entity states: indexerHints: prune: never ``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). 
| +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | diff --git a/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..e56e1109bc04 100644 --- a/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ro/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. 
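As a taste of what a store-state assertion looks like in practice, a minimal test might resemble the following AssemblyScript sketch. It is modeled on the demo-subgraph; the `Gravatar` entity and the `../generated/schema` import path are assumptions that depend on your own schema and generated code:

```typescript
import { assert, clearStore, test } from 'matchstick-as/assembly/index'
import { Gravatar } from '../generated/schema' // assumes a Gravatar entity in schema.graphql

test('Gravatar entity is stored', () => {
  // Create and save an entity directly, then assert on the store state.
  let gravatar = new Gravatar('0x1')
  gravatar.save()

  assert.fieldEquals('Gravatar', '0x1', 'id', '0x1')
  clearStore()
})
```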
## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

`.test.ts` file:

@@ -765,7 +765,7 @@
 import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
 import { ipfs } from '@graphprotocol/graph-ts'
 import { gravatarFromIpfs } from './utils'
 
-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
 export { processGravatar } from './utils'
 
 test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@
 templates:
     network: mainnet
     mapping:
      kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/token-lock-wallet.ts
      handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {

## Test Coverage

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.

The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx index 4f7dcd3864e8..3b2b1bbc70ae 100644 --- a/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ro/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. 
The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -96,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... 
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. 
`synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

-## Subgraph Studio subgraph archive policy
+## Subgraph Studio Subgraph archive policy

-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:

- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days

-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.

-Every subgraph affected with this policy has an option to bring the version in question back.
+Every Subgraph affected by this policy has an option to bring the version in question back.

-## Checking subgraph health
+## Checking Subgraph health

-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.

-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph.
On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:

```graphql
{
@@ -238,4 +239,4 @@
 }
```

-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
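For reference, a status query against this index-node endpoint generally takes the following shape. This is only a sketch: the `indexingStatusForCurrentVersion` field and the example name `"org/subgraph"` are illustrative, and the schema linked above remains the authoritative reference for the exact fields available.

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
    synced
    health
    fatalError {
      message
      block {
        number
        hash
      }
      handler
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `chains.latestBlock.number` against `chains.chainHeadBlock.number` from this response is the quickest way to tell how far behind indexing is.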
diff --git a/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx index 634c2700ba68..77d10212c770 100644 --- a/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ro/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Must not use any of the following features: - - ipfs.cat & ipfs.map - - Non-fatal errors - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init ``` -You can find the `` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. 
## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -91,11 +85,11 @@ graph auth ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy @@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. 
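While testing a deployment, one quick sanity check is to query the `_meta` field on the Subgraph's own query URL, which reports indexing progress. This is a sketch using the `_meta` field from The Graph's GraphQL API:

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

If `hasIndexingErrors` is `false` and the block number keeps advancing between queries, the deployment is syncing as expected.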
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. ![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ro/subgraphs/developing/developer-faq.mdx b/website/src/pages/ro/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/ro/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ro/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. 
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.

-### 4. Can I change the GitHub account associated with my subgraph?
+### 4. Can I change the GitHub account associated with my Subgraph?

-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.

-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?

-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.

-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?

-You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.

-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? 
Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. 
Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/ro/subgraphs/developing/introduction.mdx b/website/src/pages/ro/subgraphs/developing/introduction.mdx index 615b6cec4c9c..06bc2b76104d 100644 --- a/website/src/pages/ro/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ro/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
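The "query existing Subgraphs" workflow described above can be illustrated with a short GraphQL query. Note that the entity name and fields below are hypothetical placeholders — the real names come from whatever a given Subgraph defines in its `schema.graphql`:

```
{
  # Fetch the first 5 token entities, newest first.
  # "tokens" and its fields are hypothetical; each Subgraph's
  # schema.graphql defines its own entities and fields.
  tokens(first: 5, orderBy: createdAtTimestamp, orderDirection: desc) {
    id
    owner
    createdAtTimestamp
  }
}
```

The same query shape works in the Subgraph Studio playground, in Graph Explorer, or against a gateway query URL.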
diff --git a/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5a4ac15e07fd..b8c2330ca49d 100644 --- a/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. 
If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. 
The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. 
Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx index dca943ad3152..2bc0ec5f514c 100644 --- a/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ro/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). 
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
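The CLI publishing steps above amount to a short command sequence. This is a sketch, assuming `graph-cli` version 0.73.0 or later is installed and you are running from the Subgraph's project directory:

```
# Generate AssemblyScript types from the schema and ABIs
graph codegen

# Compile the mappings and build the Subgraph
graph build

# Publish: opens a window to connect your wallet, add metadata,
# and deploy the finalized Subgraph to a network of your choice
graph publish
```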
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ro/subgraphs/developing/subgraphs.mdx b/website/src/pages/ro/subgraphs/developing/subgraphs.mdx index ff37e00042e6..f061203d6ea6 100644 --- a/website/src/pages/ro/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ro/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafuri ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
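A build, as described in the capabilities above, starts from the Subgraph manifest. A minimal sketch of a `subgraph.yaml` follows; the data source name, contract address, start block, event signature, and handler name are all hypothetical placeholders:

```
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: MyContract # hypothetical data source name
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000" # placeholder
      abi: MyContract
      startBlock: 1234567 # block where the contract was deployed
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: MyContract
          file: ./abis/MyContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```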
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. 
### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ro/subgraphs/explorer.mdx b/website/src/pages/ro/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/ro/subgraphs/explorer.mdx +++ b/website/src/pages/ro/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
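When testing queries in the playground against large collections, results are fetched in pages using the `first` and `skip` arguments (the FAQ earlier in this changeset shows the `someCollection(first: 1000, skip: ...)` pattern). A small sketch of computing those page windows — `pageWindows` is a hypothetical helper, not part of any Graph tooling:

```typescript
// Sketch: compute the (first, skip) windows needed to page through a
// collection query such as someCollection(first: 1000, skip: 2000).
function pageWindows(total: number, pageSize: number = 1000): Array<[number, number]> {
  const windows: Array<[number, number]> = [];
  for (let skip = 0; skip < total; skip += pageSize) {
    // Each page asks for at most `pageSize` items, offset by `skip`.
    windows.push([Math.min(pageSize, total - skip), skip]);
  }
  return windows;
}

// 2500 items in pages of up to 1000:
console.log(pageWindows(2500)); // → [[1000, 0], [1000, 1000], [500, 2000]]
```

Each `[first, skip]` pair then parameterizes one playground query until the collection is exhausted.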
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
+In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
-  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.

In the Curator table listed below, you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. 
You’ll see the following metrics:

@@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th

### Curating Tab

-In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, and therefore should be indexed.

Within this tab, you’ll find an overview of:

-- All the subgraphs you're curating on with signal details
-- Share totals per subgraph
-- Query rewards per subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

![Explorer Image 14](/img/Curation-Stats.png)

diff --git a/website/src/pages/ro/subgraphs/guides/arweave.mdx b/website/src/pages/ro/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..e59abffa383f
--- /dev/null
+++ b/website/src/pages/ro/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach out to us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages.
For more information, you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers so that you can query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration only indexes Arweave as a blockchain (blocks and transactions); it does not index the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have defined.
+
+During Subgraph development there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional `source.owner` field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block.
No `source.owner` is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide `""` as the `source.owner`.
+
+> The `source.owner` can be the owner's address or their public key.
+>
+> Transactions are the building blocks of the Arweave permaweb. They are objects created by end users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+The schema definition describes the structure of the resulting Subgraph database and the relationships between entities. It is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph.
For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy it by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, using the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The `source.owner` can be the user's public key or account address.
+
+### What is the current encoding format?
+
+Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..ab5076c5ebf4
--- /dev/null
+++ 
b/website/src/pages/ro/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines the analysis of smart contract metadata relevant to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have been added, run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2. Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+    ├── contract/ # Folder for individual contract files
+    ├── abi.json # Contract ABI
+    └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/ro/subgraphs/guides/enums.mdx b/website/src/pages/ro/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..9f55ae07c54b
--- /dev/null
+++ b/website/src/pages/ro/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a data type that allows you to define a fixed set of allowed values.
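For readers more familiar with typed languages than with GraphQL schemas, the same idea can be sketched in TypeScript. This is an illustrative analogy, not code from this guide; the `TokenStatus` and `describe` names are hypothetical:

```typescript
// A TypeScript string enum is a close analogy to a GraphQL enum:
// only the listed members are valid, so an invalid value (such as a
// typo) is rejected at compile time rather than stored as bad data.
enum TokenStatus {
  OriginalOwner = "OriginalOwner",
  SecondOwner = "SecondOwner",
  ThirdOwner = "ThirdOwner",
}

// Accepts only TokenStatus members; passing a raw misspelled string
// would not compile.
function describe(status: TokenStatus): string {
  return `Token status: ${status}`;
}

console.log(describe(TokenStatus.OriginalOwner)); // Token status: OriginalOwner
```

A GraphQL enum plays exactly this role in a Subgraph schema.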
+ +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. 
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a trade/mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
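As a rough sketch of that idea in isolation: the `Marketplace` enum below mirrors the schema definition above, while `SaleRecord` and `recordSale` are hypothetical names, not part of the CryptoCoven Subgraph:

```typescript
// Mirror of the schema's Marketplace enum as a TypeScript string enum.
enum Marketplace {
  OpenSeaV1 = "OpenSeaV1",
  OpenSeaV2 = "OpenSeaV2",
  SeaPort = "SeaPort",
  LooksRare = "LooksRare",
}

// Hypothetical sale record; in a real Subgraph this would be a
// generated entity class populated inside an event handler.
interface SaleRecord {
  tokenId: string;
  marketplace: Marketplace; // only defined enum values can be stored
}

function recordSale(tokenId: string, marketplace: Marketplace): SaleRecord {
  // The stored value is the enum's string representation, matching
  // the values declared in the GraphQL schema.
  return { tokenId, marketplace };
}

const sale = recordSale("1337", Marketplace.SeaPort);
console.log(sale.marketplace); // SeaPort
```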
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  }
+  return 'Unknown' // Fallback so that every code path returns a string
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
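Because only recognized enum values are meaningful in `where` filters, it can be worth validating user-supplied filter input before building a query string. A small TypeScript sketch, illustrative only (`MARKETPLACES` and `buildExclusionFilter` are hypothetical helpers, not part of the tutorial repo, and the list covers only a subset of the values seen in the query results above):

```typescript
// A subset of the marketplace enum values seen in the query results.
const MARKETPLACES = ["OpenSeaV1", "OpenSeaV2", "SeaPort", "LooksRare", "NFTX", "Unknown"] as const;
type Marketplace = (typeof MARKETPLACES)[number];

// Narrow an arbitrary string to a known marketplace value.
function isMarketplace(value: string): value is Marketplace {
  return (MARKETPLACES as readonly string[]).includes(value);
}

// Build a `where` fragment like the one in Query 3, rejecting
// values that the schema's enum would never return.
function buildExclusionFilter(exclude: string): string {
  if (!isMarketplace(exclude)) {
    throw new Error(`Unknown marketplace: ${exclude}`);
  }
  return `{ marketplace_not: "${exclude}" }`;
}

console.log(buildExclusionFilter("Unknown")); // { marketplace_not: "Unknown" }
```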
diff --git a/website/src/pages/ro/subgraphs/guides/grafting.mdx b/website/src/pages/ro/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..d9abe0e70d2a
--- /dev/null
+++ b/website/src/pages/ro/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will cover a basic use case: replacing an existing contract with an identical contract (with a new address, but the same code), then grafting a new Subgraph that tracks the new contract onto the existing "base" Subgraph.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following query in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It returns something like this: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + } + ] + } +} +``` + +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. + +## Deploying the Grafting Subgraph + +The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly.
If you run the following query in The Graph Playground + +```graphql +{ + withdrawals(first: 5) { + id + amount + when + } +} +``` + +It should return the following: + +``` +{ + "data": { + "withdrawals": [ + { + "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000", + "amount": "0", + "when": "1716394824" + }, + { + "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000", + "amount": "0", + "when": "1716394848" + }, + { + "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000", + "amount": "0", + "when": "1716429732" + } + ] + } +} +``` + +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. + +Congrats! You have successfully grafted a Subgraph onto another Subgraph.
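Conceptually, the result above is exactly what grafting guarantees: entities from the base Subgraph up to and including the graft block are copied, and indexing of the new version continues after that block. The sketch below illustrates that merge rule with plain data; the `graftEntities` function and the block numbers are illustrative, not part of any Graph API.

```javascript
// Illustrative model of how grafting combines historical and new data.
// Entities are simplified to { id, block } records; in reality Graph Node
// performs this copy at the database level, not in user code.
function graftEntities(baseEntities, graftBlock, newEntities) {
  // Data from the base Subgraph is copied up to and including the graft block...
  const copied = baseEntities.filter((e) => e.block <= graftBlock)
  // ...and the new version indexes everything after that block.
  const continued = newEntities.filter((e) => e.block > graftBlock)
  return [...copied, ...continued]
}

// Mirrors the tutorial: two events from the old contract, one from the new
// contract after the graft block (5956000). Block numbers here are made up.
const base = [
  { id: 'event-1', block: 5955736 },
  { id: 'event-2', block: 5955870 },
]
const replacement = [{ id: 'event-3', block: 5961000 }]

console.log(graftEntities(base, 5956000, replacement).map((e) => e.id))
// → [ 'event-1', 'event-2', 'event-3' ]
```

Note that anything the new version produces at or below the graft block is ignored by this rule, which is why the tutorial picks the block of the last old-contract event it cares about.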
+ +## Additional Resources + +If you want more experience with grafting, here are a few examples for popular contracts: + +- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) +- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) +- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml) + +To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results. + +> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/) diff --git a/website/src/pages/ro/subgraphs/guides/near.mdx b/website/src/pages/ro/subgraphs/guides/near.mdx new file mode 100644 index 000000000000..e78a69eb7fa2 --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/near.mdx @@ -0,0 +1,283 @@ +--- +title: Building Subgraphs on NEAR +--- + +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). + +## What is NEAR? + +[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. + +## What are NEAR Subgraphs? + +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
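To make the schema piece concrete, here is a minimal `schema.graphql` a NEAR Subgraph might use to store one entity per processed receipt. The entity and field names are invented for this sketch:

```graphql
type Receipt @entity(immutable: true) {
  id: ID! # e.g. the receipt ID
  signerId: String! # NEAR account that signed the transaction
  receiverId: String!
  blockHeight: BigInt!
}
```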
+ +During Subgraph development there are two key commands: + +```bash +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder +``` + +### Subgraph Manifest Definition + +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: + +```yaml +specVersion: 1.3.0 +schema: + file: ./src/schema.graphql # link to the schema file +dataSources: + - kind: near + network: near-mainnet + source: + account: app.good-morning.near # This data source will monitor this account + startBlock: 10662188 # Required for NEAR + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + blockHandlers: + - handler: handleNewBlock # the function name in the mapping file + receiptHandlers: + - handler: handleReceipt # the function name in the mapping file + file: ./src/mapping.ts # link to the file with the AssemblyScript mappings +``` + +- NEAR Subgraphs introduce a new `kind` of data source (`near`) +- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` +- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+ +```typescript + +class ExecutionOutcome { + gasBurnt: u64, + blockHash: Bytes, + id: Bytes, + logs: Array, + receiptIds: Array, + tokensBurnt: BigInt, + executorId: string, + } + +class ActionReceipt { + predecessorId: string, + receiverId: string, + id: CryptoHash, + signerId: string, + gasPrice: BigInt, + outputDataReceivers: Array, + inputDataIds: Array, + actions: Array, + } + +class BlockHeader { + height: u64, + prevHeight: u64,// Always zero when version < V3 + epochId: Bytes, + nextEpochId: Bytes, + chunksIncluded: u64, + hash: Bytes, + prevHash: Bytes, + timestampNanosec: u64, + randomValue: Bytes, + gasPrice: BigInt, + totalSupply: BigInt, + latestProtocolVersion: u32, + } + +class ChunkHeader { + gasUsed: u64, + gasLimit: u64, + shardId: u64, + chunkHash: Bytes, + prevBlockHash: Bytes, + balanceBurnt: BigInt, + } + +class Block { + author: string, + header: BlockHeader, + chunks: Array, + } + +class ReceiptWithOutcome { + outcome: ExecutionOutcome, + receipt: ActionReceipt, + block: Block, + } +``` + +These types are passed to block & receipt handlers: + +- Block handlers will receive a `Block` +- Receipt handlers will receive a `ReceiptWithOutcome` + +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. + +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. + +## Deploying a NEAR Subgraph + +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). 
+ +Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names: + +- `near-mainnet` +- `near-testnet` + +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). + +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". + +Once your Subgraph has been created, you can deploy it using the `graph deploy` CLI command: + +```sh +$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash +``` + +The node configuration will depend on where the Subgraph is being deployed. + +### Subgraph Studio + +```sh +graph auth +graph deploy <subgraph-name> +``` + +### Local Graph Node (based on default configuration) + +```sh +graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name> +``` + +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: + +```graphql +{ + _meta { + block { + number + } + } +} +``` + +### Indexing NEAR with a Local Graph Node + +Running a Graph Node that indexes NEAR has the following operational requirements: + +- NEAR Indexer Framework with Firehose instrumentation +- NEAR Firehose Component(s) +- Graph Node with Firehose endpoint configured + +We will provide more information on running the above components soon. + +## Querying a NEAR Subgraph + +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+ +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## References + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ro/subgraphs/guides/polymarket.mdx b/website/src/pages/ro/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. + +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. 
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet +2. Go to https://thegraph.com/studio/apikeys/ to create an API key + +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. + +100k queries per month are free, which is perfect for your side project! + +## Additional Polymarket Subgraphs + +- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one) +- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one) +- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one) +- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one) + +## How to Query with the API + +You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format. + +The following code example will return the exact same output as above. + +### Sample Code from node.js + +``` +const axios = require('axios'); + +const graphqlQuery = `{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +}`; + +const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp' + +const graphQLRequest = { + method: 'post', + url: queryUrl, + data: { + query: graphqlQuery, + }, +}; + +// Send the GraphQL query +axios(graphQLRequest) + .then((response) => { + // Handle the response here + const data = response.data.data + console.log(data) + + }) + .catch((error) => { + // Handle any errors + console.error(error); + }); +``` + +### Additional resources + +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
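As an alternative to `axios`, the gateway accepts the same POST from the `fetch` API built into Node 18+. The sketch below requests the top payouts shown earlier; `{api-key}` is a placeholder you must replace with your own key:

```javascript
const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'

const query = `{
  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
    payout
    redeemer
    id
    timestamp
  }
}`

// POST the query as a JSON body; the gateway responds with { data: ... }.
async function fetchTopRedemptions() {
  const response = await fetch(queryUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const json = await response.json()
  return json.data.redemptions
}
```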
+ +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ro/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ro/subgraphs/guides/secure-api-keys-nextjs.mdx new file mode 100644 index 000000000000..e17e594408ff --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/secure-api-keys-nextjs.mdx @@ -0,0 +1,123 @@ +--- +title: How to Secure API Keys Using Next.js Server Components +--- + +## Overview + +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). + +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. + +### Caveats + +- Next.js server components do not protect API keys from being drained using denial of service attacks. +- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. +- Next.js server components introduce centralization risks as the server can go down. + +### Why It's Needed + +In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
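For contrast, consider the anti-pattern a server component avoids: any key interpolated into client-side code ships to every visitor's browser. The helper and values below are deliberately fake, for illustration only:

```javascript
// Anti-pattern sketch: in a client component, this URL -- API key included --
// is bundled into the JavaScript sent to the browser and visible in dev tools.
function buildClientQueryUrl(apiKey, subgraphId) {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

const url = buildClientQueryUrl('my-secret-key', 'HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B')
console.log(url.includes('my-secret-key')) // → true: the secret is readable client-side
```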
+ +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. + +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+ ) +} +``` + +### Step 5: Run and Test Our Dapp + +Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. + +![Server-side rendering](/img/api-key-server-side-rendering.png) + +### Conclusion + +By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. diff --git a/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx new file mode 100644 index 000000000000..09f1939c1fde --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/subgraph-composition.mdx @@ -0,0 +1,132 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset. 
+ +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Improve your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. **Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +## Prerequisites + +### Source Subgraphs + +- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs) +- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0 +- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed +- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of +- Source Subgraphs cannot use grafting on top of existing entities +- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly + +### Composed Subgraphs + +- You can only compose up to a **maximum of 5 source Subgraphs** +- Composed Subgraphs can only use **datasources from the same chain** +- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time +- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly +- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph) + +Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs + +## Get Started + +The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. 
+ +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. 
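For reference, a composed Subgraph declares each source Subgraph in its manifest as a data source of `kind: subgraph`, with an entity trigger instead of an event handler. The sketch below shows the rough shape; the deployment ID, entity names, and handler names are illustrative placeholders:

```yaml
dataSources:
  - kind: subgraph # a source Subgraph, not an onchain contract
    name: BlockTime
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentID' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - BlockStat
      handlers:
        - handler: handleBlockTime # runs when the source Subgraph stores a Block entity
          entity: Block
      file: ./src/mapping.ts
```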
+ +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ro/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ro/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..91aa7484d2ec --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Quick and Easy Subgraph Debugging Using Forks +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between the quick iteration needed for debugging and the long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging! + +## Ok, what is it? + +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait for it to sync up to block _X_. + +## What?! How? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+ +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. + +## Please, show me some code! + +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. + +Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. + +The usual way to attempt a fix is: + +1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +3. Wait for it to sync up. +4. If it breaks again, go back to 1; otherwise: Hooray! + +It is indeed pretty similar to an ordinary debugging process, but there is one step that horribly slows down the process: _3.
Wait for it to sync up._ + +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: + +0. Spin up a local Graph Node with the **_appropriate fork-base_** set. +1. Make a change in the mappings source, which you believe will solve the issue. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. +3. If it breaks again, go back to 1; otherwise: Hooray! + +Now, you may have 2 questions: + +1. fork-base what??? +2. Forking who?! + +And I answer: + +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store. +2. Forking is easy, no need to sweat: + +```bash +$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! + +So, here is what I do: + +1. I spin up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). + +``` +$ cargo run -p graph-node --release -- \ + --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \ + --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \ + --ipfs 127.0.0.1:5001 \ + --fork-base https://api.thegraph.com/subgraphs/id/ +``` + +2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/ro/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ro/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..a08e2a7ad8c9 --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Safe Subgraph Code Generator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Why integrate with Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic. + +- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. + +**Key Features** + +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions to the user's specification. + +- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. + +- Warning logs are recorded, indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy. + +Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. + +```sh +graph codegen -u [options] [<subgraph-manifest>] +``` + +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx new file mode 100644 index 000000000000..9a4b037cafbc --- /dev/null +++ b/website/src/pages/ro/subgraphs/guides/transfer-to-the-graph.mdx @@ -0,0 +1,104 @@ +--- +title: Transfer to The Graph +--- + +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph. + +In The Graph CLI, run the following command: + +```sh +graph deploy <subgraph-name> --ipfs-hash <your-subgraph-ipfs-hash> +``` + +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). + +## 3. Publish Your Subgraph to The Graph Network + +![publish button](/img/publish-sub-transfer.png) + +### Query Your Subgraph + +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. + +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. + +#### Example + +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: + +![Query URL](/img/cryptopunks-screenshot-transfer.png) + +The query URL for this Subgraph is: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK +``` + +Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. + +### Getting your own API Key + +You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: + +![API keys](/img/Api-keys-screenshot.png) + +### Monitor Subgraph Status + +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). + +### Additional Resources + +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ro/subgraphs/querying/best-practices.mdx b/website/src/pages/ro/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/ro/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ro/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record.
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ro/subgraphs/querying/from-an-application.mdx b/website/src/pages/ro/subgraphs/querying/from-an-application.mdx index 708dcfde2fdc..fe2372bd15b1 100644 --- a/website/src/pages/ro/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ro/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
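Before reaching for a dedicated client, note that this endpoint is plain GraphQL over HTTP, so any HTTP client works. Below is a minimal sketch using the standard `fetch` API — the endpoint placeholders and the `tokens` field are assumptions for illustration, not a real Subgraph's schema:

```typescript
// Placeholder endpoint — substitute your own API key and Subgraph ID.
const endpoint = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

// Build the POST options separately so the request shape can be inspected
// (and tested) without touching the network.
function buildRequest(query: string, variables: Record<string, unknown> = {}) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  }
}

// Example query against a hypothetical schema with a `Token` entity.
const query = `{ tokens(first: 5) { id } }`

async function main(): Promise<void> {
  const res = await fetch(endpoint, buildRequest(query))
  const { data, errors } = await res.json()
  if (errors) throw new Error(JSON.stringify(errors))
  console.log(data)
}
```

Calling `main()` sends the query; every request is a `POST` whose JSON body carries `query` and optional `variables`.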
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Pasul 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Pasul 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Pasul 1 diff --git a/website/src/pages/ro/subgraphs/querying/graph-client/README.md b/website/src/pages/ro/subgraphs/querying/graph-client/README.md index 416cadc13c6f..d4850e723c6e 100644 --- a/website/src/pages/ro/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ro/subgraphs/querying/graph-client/README.md @@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | ---------------------------------------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit 
| +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/ro/subgraphs/querying/graphql-api.mdx b/website/src/pages/ro/subgraphs/querying/graphql-api.mdx index b3003ece651a..b82afcfa252c 100644 --- a/website/src/pages/ro/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ro/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. 
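As a concrete illustration, a schema declaring a `Token` entity (a hypothetical name — substitute one from your own schema) generates both a singular and a plural field on the top-level `Query` type:

```graphql
{
  # Fetch one entity by ID
  token(id: "1") {
    id
  }
  # Fetch a collection
  tokens(first: 10) {
    id
  }
}
```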
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. 
The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ro/subgraphs/querying/introduction.mdx b/website/src/pages/ro/subgraphs/querying/introduction.mdx index 36ea85c37877..2c9c553293fa 100644 --- a/website/src/pages/ro/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ro/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ro/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/ro/subgraphs/querying/python.mdx b/website/src/pages/ro/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/ro/subgraphs/querying/python.mdx +++ b/website/src/pages/ro/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ro/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used because it can pin queries to a specific version of a Subgraph. 
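In endpoint terms, the two IDs map to different gateway paths. A small Python sketch of the difference — the host matches the gateway examples in this document, but the `/deployments/id/` path shape is an assumption to verify against current gateway docs:

```python
GATEWAY = "https://gateway-arbitrum.network.thegraph.com"

def by_subgraph_id(api_key, subgraph_id):
    # Subgraph ID: resolves to the latest published version.
    return f"{GATEWAY}/api/{api_key}/subgraphs/id/{subgraph_id}"

def by_deployment_id(api_key, deployment_id):
    # Deployment ID (IPFS hash): pins queries to one immutable version.
    # NOTE: the /deployments/id/ path is an assumption -- check the
    # current gateway documentation before relying on it.
    return f"{GATEWAY}/api/{api_key}/deployments/id/{deployment_id}"
```
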
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this means you need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/ro/subgraphs/quick-start.mdx b/website/src/pages/ro/subgraphs/quick-start.mdx index dc280ec699d3..a803ac8695fa 100644 --- a/website/src/pages/ro/subgraphs/quick-start.mdx +++ b/website/src/pages/ro/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. 
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
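The answers to these prompts land in the scaffolded manifest. For orientation, a rough, illustrative `subgraph.yaml` sketch with placeholder contract details — your generated file will differ:

```yaml
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: MyContract # "Contract Name" prompt
    network: mainnet # "Ethereum network" prompt
    source:
      address: "0x0000000000000000000000000000000000000000" # "Contract address" prompt
      abi: MyContract # "ABI" prompt
      startBlock: 17000000 # "Start Block" prompt
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: MyContract
          file: ./abis/MyContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```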
-See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -Once your subgraph is written, run the following commands: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
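Since the CLI accepts any string as a version label, a quick client-side sanity check for the recommended `MAJOR.MINOR.PATCH` shape can look like this — a convenience sketch, not part of the Graph CLI:

```python
import re

# Simplified semver shape: three dot-separated numeric parts, e.g. "0.0.1".
# (Full semver also allows pre-release/build suffixes, omitted here.)
SEMVER_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def is_semver_label(label):
    """Return True if the label looks like a semantic version."""
    return SEMVER_RE.match(label) is not None

assert is_semver_label("0.0.1")
assert not is_semver_label("first-release")
```
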
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard. +To publish your Subgraph, click the Publish button in the dashboard. 
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/ro/substreams/developing/dev-container.mdx b/website/src/pages/ro/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/ro/substreams/developing/dev-container.mdx +++ b/website/src/pages/ro/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/ro/substreams/developing/sinks.mdx b/website/src/pages/ro/substreams/developing/sinks.mdx index 5f6f9de21326..45e5471f0d09 100644 --- a/website/src/pages/ro/substreams/developing/sinks.mdx +++ b/website/src/pages/ro/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/ro/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/ro/substreams/developing/solana/account-changes.mdx index a282278c7d91..8c821acaee3f 100644 --- a/website/src/pages/ro/substreams/developing/solana/account-changes.mdx +++ b/website/src/pages/ro/substreams/developing/solana/account-changes.mdx @@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu > NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes). > NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. 
diff --git a/website/src/pages/ro/substreams/developing/solana/transactions.mdx b/website/src/pages/ro/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/ro/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ro/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/ro/substreams/introduction.mdx b/website/src/pages/ro/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/ro/substreams/introduction.mdx +++ b/website/src/pages/ro/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Speed up Subgraph indexing with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/ro/substreams/publishing.mdx b/website/src/pages/ro/substreams/publishing.mdx index 3d1a3863c882..3d93e6f9376f 100644 --- a/website/src/pages/ro/substreams/publishing.mdx +++ b/website/src/pages/ro/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/ro/supported-networks.mdx b/website/src/pages/ro/supported-networks.mdx index 42a944e47986..554c558ded7e 100644 --- a/website/src/pages/ro/supported-networks.mdx +++ b/website/src/pages/ro/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Rețele suportate hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/ro/token-api/_meta-titles.json b/website/src/pages/ro/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/ro/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/ro/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ro/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/ro/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
diff --git a/website/src/pages/ro/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ro/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/ro/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/ro/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ro/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/ro/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV prices by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ro/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ro/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/ro/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
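The Tokens endpoint described above can be exercised with a small request sketch. Note the `/tokens/evm/{contract}` path below is an assumption inferred from the `getTokensEvmByContract` operation id and the `/balances/evm/{address}` pattern used elsewhere in these docs, not a documented route; check the generated OpenAPI page for the exact path.

```javascript
// Sketch: building a metadata request for an ERC-20 contract.
// The `/tokens/evm/{contract}` route is an assumption (see note above).
const GRT = '0xc944e90c64b2c07662a292be6244bdf05cda44a7' // GRT token contract on Ethereum mainnet

function tokenMetadataRequest(contract, accessToken) {
  return {
    url: `https://token-api.thegraph.com/tokens/evm/${contract}`,
    headers: {
      Accept: 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
  }
}

console.log(tokenMetadataRequest(GRT, '<ACCESS_TOKEN>').url)
```

If the route differs in the OpenAPI reference, only the `url` line needs to change; the header shape is the same for every endpoint.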
diff --git a/website/src/pages/ro/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ro/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/ro/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/ro/token-api/faq.mdx b/website/src/pages/ro/token-api/faq.mdx new file mode 100644 index 000000000000..6178aee33e86 --- /dev/null +++ b/website/src/pages/ro/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## General + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
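The 401/403 checklist above can be folded into a small guard before sending requests. This is a minimal sketch: the "three dot-separated segments" JWT shape check is a heuristic, not an official validation rule.

```javascript
// Sketch: guard against the common 401/403 pitfalls listed above
// (raw API key instead of a JWT, or a missing "Bearer " prefix).
// The three-segment check is a heuristic, not an official rule.
function authHeader(accessToken) {
  if (typeof accessToken !== 'string' || accessToken.split('.').length !== 3) {
    throw new Error('Expected a JWT access token from The Graph Market, not a raw API key')
  }
  return { Authorization: `Bearer ${accessToken}` }
}

console.log(authHeader('aaa.bbb.ccc'))
```

Spread the returned object into the `headers` of any request, as in the quick-start's fetch example.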
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/ro/token-api/mcp/claude.mdx b/website/src/pages/ro/token-api/mcp/claude.mdx
new file mode 100644
index 000000000000..12a036b6fc24
--- /dev/null
+++ b/website/src/pages/ro/token-api/mcp/claude.mdx
@@ -0,0 +1,58 @@
+---
+title: Using Claude Desktop to Access the Token API via MCP
+sidebarTitle: Claude Desktop
+---
+
+## Prerequisites
+
+- [Claude Desktop](https://claude.ai/download) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png)
+
+## Configuration
+
+Create or edit your `claude_desktop_config.json` file.
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `.config/Claude/claude_desktop_config.json`
+
+```json label="claude_desktop_config.json"
+{
+  "mcpServers": {
+    "token-api": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png)
+
+Try using the full path to the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png)
+
+Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
+
+> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details.
diff --git a/website/src/pages/ro/token-api/mcp/cline.mdx b/website/src/pages/ro/token-api/mcp/cline.mdx
new file mode 100644
index 000000000000..ef98e45939fe
--- /dev/null
+++ b/website/src/pages/ro/token-api/mcp/cline.mdx
@@ -0,0 +1,52 @@
+---
+title: Using Cline to Access the Token API via MCP
+sidebarTitle: Cline
+---
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png)
+
+## Configuration
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+```json label="cline_mcp_settings.json"
+{
+  "mcpServers": {
+    "mcp-pinax": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png)
+
+Try using the full path to the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png)
+
+Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/ro/token-api/mcp/cursor.mdx b/website/src/pages/ro/token-api/mcp/cursor.mdx
new file mode 100644
index 000000000000..658108d1337b
--- /dev/null
+++ b/website/src/pages/ro/token-api/mcp/cursor.mdx
@@ -0,0 +1,50 @@
+---
+title: Using Cursor to Access the Token API via MCP
+sidebarTitle: Cursor
+---
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png)
+
+## Configuration
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+```json label="mcp.json"
+{
+  "mcpServers": {
+    "mcp-pinax": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png)
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+Try using the full path to the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+Double-check your API key; otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
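The Claude Desktop, Cline, and Cursor configurations above all share the same JSON shape, so a quick sanity check can catch the common troubleshooting cases before launching the client. This is a sketch: the field names mirror the configs shown above, and the placeholder/empty-token detection is a heuristic.

```javascript
// Sketch: sanity-check an MCP server config of the shape used by
// Claude Desktop, Cline, and Cursor above. Heuristic checks only:
// a "command" must be present, and ACCESS_TOKEN must not be empty
// or still contain a <placeholder>.
function checkMcpConfig(config) {
  const problems = []
  for (const [name, server] of Object.entries(config.mcpServers ?? {})) {
    if (!server.command) problems.push(`${name}: missing "command"`)
    const token = server.env && server.env.ACCESS_TOKEN
    if (!token || /[<>]/.test(token)) problems.push(`${name}: ACCESS_TOKEN not set`)
  }
  return problems
}

const config = {
  mcpServers: {
    'mcp-pinax': {
      command: 'npx',
      args: ['@pinax/mcp', '--sse-url', 'https://token-api.thegraph.com/sse'],
      env: { ACCESS_TOKEN: '' },
    },
  },
}

console.log(checkMcpConfig(config)) // flags the empty token
```

An empty result array means the config at least has the right shape; "Server disconnected" errors beyond that are usually token validity or network reachability issues, as noted above.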
diff --git a/website/src/pages/ro/token-api/monitoring/get-health.mdx b/website/src/pages/ro/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/ro/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/ro/token-api/monitoring/get-networks.mdx b/website/src/pages/ro/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/ro/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/ro/token-api/monitoring/get-version.mdx b/website/src/pages/ro/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/ro/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/ro/token-api/quick-start.mdx b/website/src/pages/ro/token-api/quick-start.mdx new file mode 100644 index 000000000000..4653c3d41ac6 --- /dev/null +++ b/website/src/pages/ro/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Quick Start +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```curl +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/ru/about.mdx b/website/src/pages/ru/about.mdx index 35f9c6efd933..d940c455bdf7 100644 --- a/website/src/pages/ru/about.mdx +++ b/website/src/pages/ru/about.mdx @@ -24,31 +24,31 @@ The Graph — это мощный децентрализованный прот Децентрализованному приложению (dapp), запущенному в браузере, потребуются **часы или даже дни**, чтобы получить ответ на эти простые вопросы. -Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +В качестве альтернативы у Вас есть возможность настроить собственный сервер, обрабатывать транзакции, хранить их в базе данных и создать конечную точку API для запроса данных. Однако этот вариант [ресурсоемок](/resources/benefits/), требует обслуживания, создает единую точку отказа и нарушает важные требования безопасности, необходимые для децентрализации. 
Такие свойства блокчейна, как окончательность, реорганизация чейна и необработанные блоки, усложняют процесс, делая получение точных результатов запроса из данных блокчейна трудоемким и концептуально сложным.

## The Graph предлагает решение

-The Graph решает эту проблему с помощью децентрализованного протокола, который индексирует и обеспечивает эффективный и высокопроизводительный запрос данных блокчейна. Эти API (индексированные «субграфы») затем могут быть запрошены с помощью стандартного API GraphQL.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.

Сегодня существует децентрализованный протокол, поддерживаемый реализацией с открытым исходным кодом [Graph Node](https://github.com/graphprotocol/graph-node), который обеспечивает этот процесс.

### Как функционирует The Graph

-Индексирование данных блокчейна очень сложный процесс, но The Graph упрощает его. The Graph учится индексировать данные Ethereum с помощью субграфов. Субграфы — это пользовательские API, построенные на данных блокчейна, которые извлекают данные из блокчейна, обрабатывают их и сохраняют так, чтобы их можно было легко запрашивать через GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.

#### Специфические особенности

-В The Graph используются описания субграфов, которые называются манифестами субграфов внутри субграфа.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- В описании субграфа описываются смарт-контракты, представляющие интерес для субграфа, события в этих контрактах, на которых следует сосредоточиться, а также способы сопоставления данных о событиях с данными, которые The Graph будет хранить в своей базе данных. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- При создании субграфа Вам необходимо написать манифест субграфа. +- When creating a Subgraph, you need to write a Subgraph manifest. -- После написания `манифеста субграфа` Вы можете использовать Graph CLI для сохранения определения в IPFS и дать команду индексатору начать индексирование данных для этого субграфа. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -На диаграмме ниже представлена ​​более подробная информация о потоке данных после развертывания манифеста субграфа с транзакциями Ethereum. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![График, объясняющий потребителям данных, как The Graph использует Graph Node для обслуживания запросов](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The Graph решает эту проблему с помощью децентр 1. Dapp добавляет данные в Ethereum через транзакцию в смарт-контракте. 2. Смарт-контракт генерирует одно или несколько событий во время обработки транзакции. -3. Graph Node постоянно сканирует Ethereum на наличие новых блоков и данных для Вашего субграфа, которые они могут содержать. -4. The Graph нода затем разбирает события, относящиеся к Вашему субграфу, которые записаны в данном блоке и структурирует их согласно схеме данных описанной в subgraph используя модуль WASM. Затем данные сохраняются в таблицы базы данных Graph Node. +3. 
Graph Node continually scans Ethereum for new blocks and the data they may contain for your Subgraph.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. Dapp запрашивает у Graph Node данные, проиндексированные с блокчейна, используя [конечную точку GraphQL](https://graphql.org/learn/) ноды. В свою очередь, Graph Node переводит запросы GraphQL в запросы к его базовому хранилищу данных, чтобы получить эти данные, используя возможности индексации этого хранилища. Dapp отображает эти данные в насыщенном пользовательском интерфейсе для конечных пользователей, который они используют для создания новых транзакций в Ethereum. Цикл повторяется.

## Что далее

-В следующих разделах более подробно рассматриваются субграфы, их развертывание и запросы данных.
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.

-Прежде чем писать собственный субграф, рекомендуется ознакомиться с [Graph Explorer](https://thegraph.com/explorer) и изучить некоторые из уже развернутых субграфов. Страница каждого субграфа включает в себя тестовую площадку GraphQL, позволяющую запрашивать его данные.
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx index 0375e85a7135..5e7bf098577d 100644 --- a/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/ru/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ title: Часто задаваемые вопросы об Arbitrum - Безопасность, унаследованную от Ethereum -Масштабирование смарт-контрактов протокола на L2 позволяет участникам сети взаимодействовать чаще и с меньшими затратами на комиссии за газ. Например, Индексаторы могут чаще открывать и закрывать аллокации, чтобы индексировать большее количество субграфов. Разработчики могут с большей легкостью разворачивать и обновлять субграфы, а Делегаторы — чаще делегировать GRT. Кураторы могут добавлять или удалять сигнал для большего количества субграфов — действия, которые ранее считались слишком затратными для частого выполнения из-за стоимости газа. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Решение о продолжении сотрудничества с Arbitrum было принято в прошлом году по итогам обсуждения сообществом The Graph [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -39,7 +39,7 @@ title: Часто задаваемые вопросы об Arbitrum ![Выпадающий список для переключения на Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Что мне нужно делать сейчас как разработчику субграфа, потребителю данных, индексатору, куратору или делегатору? 
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ Network participants must move to Arbitrum to continue participating in The Grap Все было тщательно протестировано, и разработан план действий на случай непредвиденных обстоятельств, чтобы обеспечить безопасный и непрерывный переход. Подробности можно найти [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Работают ли существующие субграфы на Ethereum? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Есть ли у GRT новый смарт-контракт, развернутый на Arbitrum? diff --git a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx index ebb1f3b1b165..4982403c1db2 100644 --- a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type Инструменты переноса L2 используют встроенный механизм Arbitrum для передачи сообщений с L1 на L2. Этот механизм называется "retryable ticket", или "повторный тикет", и используется всеми собственными токен-мостами, включая мост Arbitrum GRT. 
Подробнее о повторном тикете можно прочитать в [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -Когда Вы переносите свои активы (субграф, стейк, делегирование или курирование) на L2, через мост Arbitrum GRT отправляется сообщение, которое создает повторный тикет на L2. Инструмент переноса включает в транзакцию некоторую стоимость ETH, которая используется для 1) оплаты создания тикета и 2) оплаты стоимости газа для выполнения тикета на L2. Однако, поскольку стоимость газа может измениться за время, пока тикет будет готов к исполнению на L2, возможна ситуация, когда попытка автоматического исполнения не удастся. В этом случае мост Arbitrum сохранит повторный тикет в течение 7 дней, и любой желающий может повторить попытку "погасить" тикет (для этого необходимо иметь кошелек с некоторым количеством ETH, подключенный к мосту Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Это так называемый шаг "Подтверждение" во всех инструментах переноса - в большинстве случаев он выполняется автоматически, поскольку автоисполнение чаще всего бывает успешным, но важно, чтобы Вы проверили, прошел ли он. Если он не исполнился и в течение 7 дней не будет повторных успешных попыток, мост Arbitrum отменит тикет, и Ваши активы (субграф, стейк, делегирование или курирование) будут потеряны и не смогут быть восстановлены. 
У разработчиков ядра The Graph есть система мониторинга, позволяющая выявлять такие ситуации и пытаться погасить тикеты, пока не стало слишком поздно, но в конечном итоге ответственность за своевременное завершение переноса лежит на Вас. Если у Вас возникли проблемы с подтверждением переноса, пожалуйста, свяжитесь с нами через [эту форму] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), и разработчики ядра помогут Вам. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
Подтвердить перенос субграфа в Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Завершить публикацию субграфа в Arbitrum +4. Finish publishing the Subgraph on Arbitrum 5. Обновить URL-адрес запроса (рекомендуется) -\* Обратите внимание, что Вы должны подтвердить перенос в течение 7 дней, иначе Ваш субграф может быть потерян. В большинстве случаев этот шаг выполнится автоматически, но в случае скачка стоимости комиссии сети в Arbitrum может потребоваться ручное подтверждение. Если в ходе этого процесса возникнут какие-либо проблемы, Вам помогут: обратитесь в службу поддержки по адресу support@thegraph.com или в [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### С чего необходимо начать перенос? -Вы можете начать перенос со страницы [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) или любой другой страницы с информацией о субграфе. Для начала переноса нажмите кнопку "Перенести субграф" на странице сведений о субграфе. +You can initiate your transfer from [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button on the Subgraph details page to start the transfer. -### Как долго мне необходимо ждать, пока мой субграф будет перенесен +### How long do I need to wait until my Subgraph is transferred Время переноса занимает около 20 минут. Мост Arbitrum работает в фоновом режиме, чтобы автоматически завершить перенос через мост.
В некоторых случаях стоимость комиссии сети может повыситься, и Вам потребуется повторно подтвердить транзакцию. -### Будет ли мой субграф по-прежнему доступен для поиска после того, как я перенесу его на L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Ваш субграф можно будет найти только в той сети, в которой он опубликован. Например, если Ваш субграф находится в сети Arbitrum One, то Вы сможете найти его в Explorer только в сети Arbitrum One, и не сможете найти в сети Ethereum. Обратите внимание, что в переключателе сетей в верхней части страницы выбран Arbitrum One, чтобы убедиться, что Вы находитесь в правильной сети. После переноса субграф L1 будет отображаться как устаревший. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please check that Arbitrum One is selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated. -### Должен ли мой субграф быть опубликован, чтобы его можно было перенести? +### Does my Subgraph need to be published to transfer it? -Чтобы воспользоваться инструментом переноса субграфа, Ваш субграф должен быть уже опубликован в основной сети Ethereum и иметь какой-либо сигнал курирования, принадлежащий кошельку, которому принадлежит субграф. Если Ваш субграф не опубликован, рекомендуется просто опубликовать его непосредственно на Arbitrum One - связанная с этим стоимость комиссии сети будет значительно ниже. Если Вы хотите перенести опубликованный субграф, но на счете владельца нет сигнала курирования, Вы можете подать сигнал на небольшую сумму (например, 1 GRT) с этого счета; при этом обязательно выберите сигнал "автомиграция".
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Что произойдет с версией моего субграфа в основной сети Ethereum после его переноса на Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -После переноса Вашего субграфа на Arbitrum версия, находящаяся на основной сети Ethereum станет устаревшей. Мы рекомендуем Вам обновить URL-адрес запроса в течение 48 часов. Однако существует отсрочка, в течение которой Ваш URL-адрес на основной сети будет функционировать, чтобы можно было обновить стороннюю поддержку децентрализованных приложений. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Нужно ли мне после переноса повторно опубликовываться на Arbitrum? @@ -80,21 +80,21 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type ### Будет ли моя конечная точка простаивать при повторной публикации? -Это маловероятно, но возможно возникновение кратковременного простоя в зависимости от того, какие индексаторы поддерживают субграф на уровне L1 и продолжают ли они индексировать его до тех пор, пока субграф не будет полностью поддерживаться на уровне L2. 
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Публикация и версионность на L2 такие же, как и в основной сети Ethereum? -Да. При публикации в Subgraph Studio выберите Arbitrum One в качестве публикуемой сети. В Studio будет доступна последняя конечная точка, которая указывает на последнюю обновленную версию субграфа. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Будет ли курирование моего субграфа перемещено вместе с моим субграфом? +### Will my Subgraph's curation move with my Subgraph? -Если Вы выбрали автомиграцию сигнала, то 100% Вашего собственного кураторства переместится вместе с Вашим субграфом на Arbitrum One. Весь сигнал курирования субграфа будет преобразован в GRT в момент переноса, а GRT, соответствующий Вашему сигналу курирования, будет использован для обработки сигнала на субграфе L2. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Другие кураторы могут выбрать, снять ли им свою долю GRT, или также перевести ее в L2 для обработки сигнала на том же субграфе. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Могу ли я переместить свой субграф обратно в основную сеть Ethereum после переноса? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -После переноса Ваша версия данного субграфа в основной сети Ethereum станет устаревшей. 
Если Вы захотите вернуться в основную сеть, Вам нужно будет переразвернуть и снова опубликовать субграф в основной сети. Однако перенос обратно в основную сеть Ethereum настоятельно не рекомендуется, так как вознаграждения за индексирование в конечном итоге будут полностью распределяться на Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Зачем мне необходимо использовать мост ETH для завершения переноса? @@ -206,19 +206,19 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type \*При необходимости - т.е. если Вы используете контрактный адрес. -### Как я узнаю, что курируемый мною субграф перешел в L2? +### How will I know if the Subgraph I curated has moved to L2? -При просмотре страницы сведений о субграфе появится баннер, уведомляющий о том, что данный субграф был перенесен. Вы можете следовать подсказке, чтобы перенести свое курирование. Эту информацию можно также найти на странице сведений о субграфе любого перемещенного субграфа. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Что делать, если я не хочу переносить свое курирование в L2? -Когда субграф устаревает, у Вас есть возможность отозвать свой сигнал. Аналогично, если субграф переместился в L2, Вы можете выбрать, отозвать свой сигнал из основной сети Ethereum или отправить его в L2. +When a Subgraph is deprecated you have the option to withdraw your signal. 
Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### Как я узнаю, что мое курирование успешно перенесено? Информация о сигнале будет доступна через Explorer примерно через 20 минут после запуска инструмента переноса L2. -### Можно ли перенести курирование на несколько субграфов одновременно? +### Can I transfer my curation on more than one Subgraph at a time? В настоящее время опция массового переноса отсутствует. @@ -266,7 +266,7 @@ If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#type ### Должен ли я индексироваться на Arbitrum перед тем, как перенести стейк? -Вы можете эффективно перенести свой стейк до начала настройки индексации, но Вы не сможете претендовать на вознаграждение на L2 до тех пор, пока не распределите субграфы на L2, не проиндексируете их, а также пока не представите POI. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Могут ли делегаторы перемещать свои делегации до того, как я перемещу свой индексируемый стейк? diff --git a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx index 1dc689d934d3..b3509a9c7f8d 100644 --- a/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/ru/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph упростил переход на L2 в Arbitrum One. Для каж Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. 
-## Как перенести свой субграф в Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Преимущества переноса Ваших субграфов +## Benefits of transferring your Subgraphs Сообщество и разработчики ядра The Graph [готовились](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) к переходу на Arbitrum в течение прошлого года. Arbitrum, блокчейн уровня 2 или «L2», наследует безопасность от Ethereum, но обеспечивает значительно более низкую комиссию сети. -Когда Вы публикуете или обновляете свой субграф до The Graph Network, Вы взаимодействуете со смарт-контрактами по протоколу, и для этого требуется проплачивать комиссию сети с помощью ETH. После перемещения Ваших субграфов в Arbitrum, любые будущие обновления Вашего субграфа потребуют гораздо более низких сборов за комиссию сети. Более низкие сборы и тот факт, что кривые связи курирования на L2 ровные, также облегчают другим кураторам курирование Вашего субграфа, увеличивая вознаграждение для индексаторов в Вашем субграфе. Эта менее затратная среда также упрощает индексацию и обслуживание Вашего субграфа. В ближайшие месяцы вознаграждения за индексацию в Arbitrum будут увеличиваться, а в основной сети Ethereum уменьшаться, поэтому все больше и больше индексаторов будут переводить свои стейки и настраивать операции на L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. 
Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса +## Understanding what happens with signal, your L1 Subgraph and query URLs -Для передачи субграфа в Arbitrum используется мост Arbitrum GRT, который, в свою очередь, использует собственный мост Arbitrum для отправки субграфа на L2. «Перенос» отменяет поддержку субграфа в основной сети и отправляет информацию для повторного создания субграфа на L2 с использованием моста. Он также будет включать сигнал GRT владельца субграфа, который должен быть больше нуля, чтобы мост смог принять передачу. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -Когда Вы решите передать субграф, весь сигнал курирования подграфа будет преобразован в GRT. Это эквивалентно «прекращению поддержки» субграфа в основной сети. GRT, соответствующие Вашему кураторству, будут отправлен на L2 вместе с субграфом, где они будут использоваться для производства сигнала от Вашего имени. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Другие Кураторы могут выбрать, вывести ли свою долю GRT или также перевести ее в L2 для производства сигнала на том же субграфе. 
Если владелец субграфа не перенесет свой субграф в L2 и вручную аннулирует его с помощью вызова контракта, то Кураторы будут уведомлены и смогут отозвать свое курирование. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -Индексаторы больше не будут получать вознаграждение за индексирование субграфа, как только субграф будет перенесён, так как всё курирование конвертируется в GRT. Однако будут индексаторы, которые 1) продолжат обслуживать переданные субграфы в течение 24 часов и 2) немедленно начнут индексировать субграф на L2. Поскольку эти индексаторы уже проиндексировали субграф, не нужно будет ждать синхронизации субграфа, и можно будет запросить субграф L2 практически сразу. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Запросы к субграфу L2 необходимо будет выполнять по другому URL-адресу (на `arbitrum-gateway.thegraph.com`), но URL-адрес L1 будет продолжать работать в течение как минимум 48 часов. После этого шлюз L1 будет перенаправлять запросы на шлюз L2 (на некоторое время), но это увеличит задержку, поэтому рекомендуется как можно скорее переключить все Ваши запросы на новый URL-адрес. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. 
After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Выбор Вашего кошелька L2 -Когда Вы опубликовали свой субграф в основной сети, Вы использовали подключенный кошелек для его создания, и этот кошелек обладает NFT, который представляет этот субграф и позволяет Вам публиковать обновления. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -При переносе субграфа в Arbitrum Вы можете выбрать другой кошелек, которому будет принадлежать этот NFT субграфа на L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Если Вы используете «обычный» кошелек, такой как MetaMask (Externally Owned Account или EOA, то есть кошелек, который не является смарт-контрактом), тогда это необязательно, и рекомендуется сохранить тот же адрес владельца, что и в L1. -Если Вы используете смарт-контрактный кошелек, такой как кошелёк с мультиподписью (например, Safe), то выбор другого адреса кошелька L2 является обязательным, так как, скорее всего, эта учетная запись существует только в основной сети, и Вы не сможете совершать транзакции в сети Arbitrum с помощью этого кошелька. Если Вы хотите продолжать использовать кошелек смарт-контрактов или мультиподпись, создайте новый кошелек на Arbitrum и используйте его адрес в качестве владельца L2 Вашего субграфа. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. 
If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Очень важно использовать адрес кошелька, которым Вы управляете и с которого можно совершать транзакции в Arbitrum. В противном случае субграф будет потерян и его невозможно будет восстановить.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Подготовка к переносу: использование моста с некоторым количеством ETH -Передача субграфа включает в себя отправку транзакции через мост, а затем выполнение другой транзакции в Arbitrum. Первая транзакция использует ETH в основной сети и включает некоторое количество ETH для оплаты комиссии сети при получении сообщения на уровне L2. Однако, если этого количества будет недостаточно, Вам придется повторить транзакцию и оплатить комиссию сети непосредственно на L2 (это «Шаг 3: Подтверждение перевода» ниже). Этот шаг **должен быть выполнен в течение 7 дней после начала переноса**. Более того, вторая транзакция («Шаг 4: Завершение перевода на L2») будет выполнена непосредственно на Arbitrum. В связи с этим Вам понадобится некоторое количество ETH на кошельке Arbitrum. Если Вы используете учетную запись с мультиподписью или смарт-контрактом, ETH должен находиться в обычном (EOA) кошельке, который Вы используете для выполнения транзакций, а не в самом кошельке с мультиподписью. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. 
Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. Вы можете приобрести ETH на некоторых биржах и вывести его напрямую на Arbitrum, или Вы можете использовать мост Arbitrum для отправки ETH из кошелька основной сети на L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Поскольку плата за комиссию сети в Arbitrum ниже, Вам понадобится лишь небольшая сумма. Рекомендуется начинать с низкого порога (например, 0,01 ETH), чтобы Ваша транзакция была одобрена. -## Поиск инструмента переноса субграфа +## Finding the Subgraph Transfer Tool -Вы можете найти инструмент переноса L2, когда просматриваете страницу своего субграфа в Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![инструмент переноса](/img/L2-transfer-tool1.png) -Он также доступен в Explorer, если Вы подключены к кошельку, которому принадлежит субграф, и на странице этого субграфа в Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Перенос на L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 1: Запуск перевода -Прежде чем начать перенос, Вы должны решить, какому адресу будет принадлежать субграф на L2 (см. «Выбор кошелька L2» выше), также настоятельно рекомендуется иметь некоторое количество ETH для оплаты комиссии сети за соединение мостом с Arbitrum (см. «Подготовка к переносу: использование моста с некоторым количеством ETH" выше). 
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Также обратите внимание, что для передачи субграфа требуется наличие ненулевого количества сигнала в субграфе с той же учетной записью, которая владеет субграфом; если Вы не просигнализировали на субграфе, Вам придется добавить немного монет для курирования (достаточно добавить небольшую сумму, например 1 GRT). +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -После открытия инструмента переноса Вы сможете ввести адрес кошелька L2 в поле «Адрес получающего кошелька» — **убедитесь, что Вы ввели здесь правильный адрес**. После нажатия на «Перевод субграфа», Вам будет предложено выполнить транзакцию в Вашем кошельке (обратите внимание, что некоторое количество ETH включено для оплаты газа L2); это инициирует передачу и отменит Ваш субграф на L1 (см. «Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса» выше для получения более подробной информации о том, что происходит за кулисами). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on "Transfer Subgraph" will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-Если Вы выполните этот шаг, ** убедитесь в том, что Вы завершили шаг 3 менее чем за 7 дней, иначе субграф и Ваш сигнал GRT будут утеряны.** Это связано с тем, как в Arbitrum работает обмен сообщениями L1-L2: сообщения, которые отправляются через мост, представляют собой «билеты с возможностью повторной попытки», которые должны быть выполнены в течение 7 дней, и для первоначального исполнения может потребоваться повторная попытка, если в Arbitrum будут скачки цен комиссии сети. +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are “retryable tickets” that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Запустите перенос на L2](/img/startTransferL2.png) -## Шаг 2: Ожидание перехода субграфа в L2 +## Step 2: Waiting for the Subgraph to get to L2 -После того, как Вы начнете передачу, сообщение, которое отправляет Ваш субграф с L1 в L2, должно пройти через мост Arbitrum. Это занимает примерно 20 минут (мост ожидает, пока блок основной сети, содержащий транзакцию, будет «защищен» от потенциальных реорганизаций чейна). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be “safe” from potential chain reorgs). По истечении этого времени ожидания Arbitrum попытается автоматически выполнить перевод по контрактам L2.
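The 7-day window for retryable tickets described above is simple to reason about: the deadline for the "Confirm" step is the transfer's start time plus seven days. A sketch, illustrative only; the actual window is enforced by the Arbitrum bridge contracts, not by client-side code:

```python
from datetime import datetime, timedelta, timezone

# Window enforced by the Arbitrum bridge for retryable tickets.
RETRYABLE_TICKET_LIFETIME = timedelta(days=7)

def confirmation_deadline(transfer_started_at: datetime) -> datetime:
    """Latest time by which the 'Confirm' step (ticket redemption) must succeed."""
    return transfer_started_at + RETRYABLE_TICKET_LIFETIME

# Example with a placeholder start time.
started = datetime(2023, 10, 1, 12, 0, tzinfo=timezone.utc)
deadline = confirmation_deadline(started)
print(deadline.isoformat())  # 2023-10-08T12:00:00+00:00
```

If the deadline passes without a successful redemption, the bridge discards the ticket, which is why the docs stress checking back on the Confirm step.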
@@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 3: Подтверждение переноса -В большинстве случаев этот шаг будет выполняться автоматически, поскольку комиссии сети L2, включенной в шаг 1, должно быть достаточно для выполнения транзакции, которая получает субграф в контрактах Arbitrum. Однако в некоторых случаях возможно, что скачок цен комиссии сети на Arbitrum приведёт к сбою этого автоматического выполнения. В этом случае «тикет», который отправляет ваш субграф на L2, будет находиться в ожидании и потребует повторной попытки в течение 7 дней. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. В этом случае Вам нужно будет подключиться с помощью кошелька L2, в котором есть некоторое количество ETH в сети Arbitrum, переключить сеть Вашего кошелька на Arbitrum и нажать «Подтвердить перевод», чтобы повторить транзакцию. @@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Шаг 4: Завершение переноса в L2 -На данный момент Ваш субграф и GRT получены в Arbitrum, но субграф еще не опубликован. Вам нужно будет подключиться с помощью кошелька L2, который Вы выбрали в качестве принимающего кошелька, переключить сеть Вашего кошелька на Arbitrum и нажать «Опубликовать субграф». +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." 
-![Опубликуйте субграф](/img/publishSubgraphL2TransferTools.png)
+![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png)

-![Дождитесь публикации субграфа](/img/waitForSubgraphToPublishL2TransferTools.png)
+![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png)

-Субграф будет опубликован, и индексаторы, работающие на Arbitrum, смогут начать его обслуживание. Он также будет создавать сигнал курирования, используя GRT, переданные из L1.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1.

## Шаг 5. Обновление URL-адреса запроса

-Ваш субграф успешно перенесен в Arbitrum! Для запроса субграфа новый URL будет следующим:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:

`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`

-Обратите внимание, что идентификатор субграфа в Arbitrum будет отличаться от того, который был у Вас в основной сети, но Вы всегда можете найти его в Explorer или Studio. Как упоминалось выше (см. «Понимание того, что происходит с сигналом, Вашим субграфом L1 и URL-адресами запроса»), старый URL-адрес L1 будет поддерживаться в течение некоторого времени, но Вы должны переключить свои запросы на новый адрес, как только субграф будет синхронизирован в L2.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
## Как перенести свой субграф в Arbitrum (L2) -## Понимание того, что происходит с курированием передачи субграфов на L2 +## Understanding what happens to curation on Subgraph transfers to L2 -Когда владелец субграфа передает субграф в Arbitrum, весь сигнал субграфа одновременно конвертируется в GRT. Это же относится и к "автоматически мигрировавшему" сигналу, т.е. сигналу, который не относится к конкретной версии или развертыванию субграфа, но который следует за последней версией субграфа. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Это преобразование сигнала в GRT аналогично тому, что произошло бы, если бы владелец субграфа объявил его устаревшим на L1. Когда субграф устаревает или переносится, в то же время «сжигается» весь сигнал курирования (с использованием кривой связывания курирования), а полученный GRT сохраняется в смарт-контракте GNS (то есть контракте, который обрабатывает обновления субграфа и сигнал автоматической миграции). Таким образом, каждый куратор этого субграфа имеет право на GRT, пропорционально количеству акций, которыми он владел в этом субграфе. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -Часть этих GRT, принадлежащая владельцу субграфа, отправляется на L2 вместе с субграфом. 
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.

-На этом этапе курируемый GRT больше не будет начислять комиссии за запросы, поэтому кураторы могут выбрать: вывести свой GRT или перевести его на тот же субграф на L2, где его можно использовать для создания нового сигнала курирования. Спешить с этим не стоит, так как GRT может храниться неограниченное время, и каждый получит сумму пропорционально своим долям, независимо от того, когда это будет сделано.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.

## Выбор Вашего кошелька L2

@@ -130,9 +130,9 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools

Прежде чем начать перенос, Вы должны решить, какой адрес будет владеть курированием на L2 (см. "Выбор кошелька L2" выше), также рекомендуется уже иметь на Arbitrum некоторое количество ETH для газа на случай, если Вам потребуется повторно выполнить отправку сообщения на L2. Вы можете купить ETH на любых биржах и вывести его напрямую на Arbitrum, или использовать мост Arbitrum для отправки ETH из кошелька основной сети на L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) — поскольку комиссии за газ на Arbitrum очень низкие, Вам понадобится небольшая сумма, например, 0.01 ETH, этого, вероятно, будет более чем достаточно.

-Если субграф, который Вы курируете, был перенесен на L2, Вы увидите сообщение в Explorer о том, что Вы курируете перенесённый субграф.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-При просмотре страницы субграфа Вы можете выбрать вывод или перенос курирования. Нажатие на кнопку "Перенести сигнал в Arbitrum", откроет инструмент переноса. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Перенос сигнала](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools ## Снятие Вашего курирования на L1 -Если Вы предпочитаете не отправлять свой GRT на L2 или хотите передать GRT вручную, Вы можете вывести свой курируемый GRT на L1. На баннере на странице субграфа выберите "Вывести сигнал" и подтвердите транзакцию; GRT будет отправлен на Ваш адрес Куратора. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/ru/archived/sunrise.mdx b/website/src/pages/ru/archived/sunrise.mdx index eb18a93c506c..02a161c39f63 100644 --- a/website/src/pages/ru/archived/sunrise.mdx +++ b/website/src/pages/ru/archived/sunrise.mdx @@ -1,80 +1,80 @@ --- -title: Post-Sunrise + Upgrading to The Graph Network FAQ -sidebarTitle: Post-Sunrise Upgrade FAQ +title: "Post-Sunrise + Обновление до The Graph Network: Часто задаваемые вопросы" +sidebarTitle: Часто задаваемые вопросы об обновлении Post-Sunrise --- -> Note: The Sunrise of Decentralized Data ended June 12th, 2024. +> Примечание: Эра децентрализованных данных Sunrise завершилась 12 июня 2024 года. -## What was the Sunrise of Decentralized Data? +## Что представляла собой эра децентрализованных данных Sunrise? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. 
+Эра децентрализованных данных Sunrise была инициативой, возглавляемой Edge & Node. Она позволила разработчикам субграфов беспрепятственно перейти на децентрализованную сеть The Graph.

-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+Этот план основывался на предыдущих разработках экосистемы The Graph, включая обновление индексатора для обслуживания запросов на недавно опубликованные субграфы.

-### What happened to the hosted service?
+### Что произошло с хостинг-сервисом?

-The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service.
+Конечные точки запросов на хостинг-сервис больше недоступны, и разработчики не могут развертывать новые субграфы на хостинг-сервисе.

-During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs.
+В процессе обновления владельцы субграфов на хостинг-сервисе могли обновить свои субграфы до сети The Graph. Кроме того, разработчики могли заявить о своих автоматически обновлённых субграфах.

-### Was Subgraph Studio impacted by this upgrade?
+### Было ли Subgraph Studio затронуто этим обновлением?

-No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.
+Нет, Subgraph Studio не было затронуто эрой Sunrise. Субграфы стали немедленно доступны для запросов благодаря обновлённому Индексатору, который использует ту же инфраструктуру, что и хостинг-сервис.

-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Почему субграфы были опубликованы на Arbitrum? Это означает, что они начали индексировать другую сеть?
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+Сеть The Graph изначально была развернута в основной сети Ethereum, но позже была перенесена на Arbitrum One, чтобы снизить затраты на газ для всех пользователей. В результате все новые субграфы публикуются в сети The Graph на Arbitrum, чтобы Индексаторы могли их поддерживать. Arbitrum — это сеть, в которой публикуются субграфы, но субграфы могут индексировать любую из [поддерживаемых сетей](/supported-networks/)

-## About the Upgrade Indexer
+## Об обновлённом Индексаторе

-> The upgrade Indexer is currently active.
+> Обновлённый Индексатор в настоящее время активен.

-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+Обновлённый Индексатор был внедрён для улучшения процесса обновления субграфов с хостинг-сервиса на сеть The Graph и поддержки новых версий существующих субграфов, которые ещё не были проиндексированы.

-### What does the upgrade Indexer do?
+### Что делает обновлённый Индексатор?

-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
-- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
+- Он помогает запустить блокчейны, которые ещё не получили вознаграждения за индексирование в сети The Graph, и гарантирует, что Индексатор будет доступен для обслуживания запросов как можно быстрее после публикации субграфа.
+- Он поддерживает блокчейны, которые ранее были доступны только на хостинг-сервисе. Полный список поддерживаемых блокчейнов можно найти [здесь](/supported-networks/).
+- Индексаторы, которые используют обновлённый Индексатор, делают это как общественную услугу, чтобы поддерживать новые субграфы и дополнительные блокчейны, которые ещё не получают вознаграждения за индексирование, до того как их одобрит Совет The Graph.

-### Why is Edge & Node running the upgrade Indexer?
+### Почему Edge & Node запускает обновлённый Индексатор?

-Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs.
+Edge & Node исторически поддерживали хостинг-сервис, и, как результат, уже имеют синхронизированные данные для субграфов, размещённых на хостинг-сервисе.

-### What does the upgrade indexer mean for existing Indexers?
+### Что означает обновлённый индексатор для существующих Индексаторов?

-Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first.
+Блокчейны, которые ранее поддерживались только на хостинг-сервисе, стали доступны разработчикам в сети The Graph без вознаграждений за индексирование на первом этапе.

-However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
+Однако это действие открыло возможность получения сборов за запросы для любого заинтересованного Индексатора и увеличило количество субграфов, опубликованных в сети The Graph.
В результате Индексаторы получили больше возможностей для индексирования и обслуживания этих субграфов в обмен на сборы за запросы, даже до того, как вознаграждения за индексирование будут активированы для чейна.

-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network.
+Обновлённый Индексатор также предоставляет сообществу Индексаторов информацию о потенциальном спросе на субграфы и новые чейны в сети The Graph.

-### What does this mean for Delegators?
+### Что это означает для Делегаторов?

-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+Обновлённый Индексатор предоставляет Делегаторам широкие возможности. Поскольку он позволил большему числу субграфов перейти с хостинг-сервиса в сеть The Graph, Делегаторы выигрывают от увеличенной активности в сети.

-### Did the upgrade Indexer compete with existing Indexers for rewards?
+### Конкурировал ли обновлённый Индексатор с существующими Индексаторами за вознаграждения?

-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+Нет, обновлённый Индексатор выделяет лишь минимальную сумму на каждый субграф и не собирает вознаграждения за индексирование.

-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+Он работает на основе принципа «по мере необходимости», выполняя роль резервного решения до тех пор, пока как минимум три других Индексатора в сети не обеспечат достаточное качество обслуживания для соответствующих чейнов и субграфов.

-### How does this affect subgraph developers?
+### Как это влияет на разработчиков субграфов?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Разработчики субграфов могут запрашивать свои субграфы в сети The Graph практически сразу после их обновления с хостинг-сервиса или публикации через [Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), так как время на индексирование не требуется. Обратите внимание, что [создание субграфа](/developing/creating-a-subgraph/) не было затронуто этим обновлением.

-### How does the upgrade Indexer benefit data consumers?
+### Как обновлённый Индексатор приносит пользу потребителям данных?

-The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network.
+Обновлённый Индексатор позволяет использовать в сети чейны, которые ранее поддерживались только на хостинг-сервисе. Таким образом, он расширяет объем и доступность данных, которые можно запрашивать в сети.

-### How does the upgrade Indexer price queries?
+### Как обновлённый Индексатор оценивает запросы?

-The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.
+Обновлённый Индексатор оценивает запросы по рыночной цене, чтобы избежать влияния на рынок комиссий за запросы.

-### When will the upgrade Indexer stop supporting a subgraph?
+### Когда обновлённый Индексатор перестанет поддерживать субграф?

-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+Обновлённый Индексатор поддерживает субграф, пока как минимум три других Индексатора не начнут успешно и стабильно обслуживать запросы к нему. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Кроме того, обновлённый Индексатор прекращает поддержку субграфа, если к нему не поступали запросы в последние 30 дней. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Другие Индексаторы получают стимулы для поддержки субграфов с постоянным объёмом запросов. Объём запросов к обновлённому Индексатору должен стремиться к нулю, так как он имеет небольшое распределение, и другие Индексаторы должны обслуживать запросы раньше него. diff --git a/website/src/pages/ru/contracts.json b/website/src/pages/ru/contracts.json index 134799f3dd0f..17850d7d1b2f 100644 --- a/website/src/pages/ru/contracts.json +++ b/website/src/pages/ru/contracts.json @@ -1,4 +1,4 @@ { - "contract": "Contract", + "contract": "Контракт", "address": "Address" } diff --git a/website/src/pages/ru/contracts.mdx b/website/src/pages/ru/contracts.mdx index 9226c911fcd8..8c6fbb464abe 100644 --- a/website/src/pages/ru/contracts.mdx +++ b/website/src/pages/ru/contracts.mdx @@ -14,7 +14,7 @@ This is the principal deployment of The Graph Network. ## Mainnet -This was the original deployment of The Graph Network. [Learn more](/archived/arbitrum/arbitrum-faq/) about The Graph's scaling with Arbitrum. +Это было первоначальное развертывание The Graph Network. [Узнайте больше](/archived/arbitrum/arbitrum-faq/) о масштабировании The Graph с Arbitrum. 
diff --git a/website/src/pages/ru/global.json b/website/src/pages/ru/global.json index 0b02b6ff1575..70dd9a3b9dfe 100644 --- a/website/src/pages/ru/global.json +++ b/website/src/pages/ru/global.json @@ -1,35 +1,78 @@ { "navigation": { "title": "Главное меню", - "show": "Show navigation", - "hide": "Hide navigation", + "show": "Показать панель навигации", + "hide": "Скрыть панель навигации", "subgraphs": "Субграфы", - "substreams": "Substreams", - "sps": "Substreams-Powered Subgraphs", - "indexing": "Indexing", - "resources": "Resources", - "archived": "Archived" + "substreams": "Субпотоки", + "sps": "Субграфы, работающие на основе субпотоков", + "tokenApi": "Token API", + "indexing": "Индексирование", + "resources": "Ресурсы", + "archived": "Архивировано" }, "page": { - "lastUpdated": "Last updated", + "lastUpdated": "Последнее обновление", "readingTime": { - "title": "Reading time", - "minutes": "minutes" + "title": "Время на прочтение", + "minutes": "минуты" }, - "previous": "Previous page", - "next": "Next page", - "edit": "Edit on GitHub", - "onThisPage": "On this page", - "tableOfContents": "Table of contents", - "linkToThisSection": "Link to this section" + "previous": "Предыдущая страница", + "next": "Следующая страница", + "edit": "Редактировать на GitHub", + "onThisPage": "На этой странице", + "tableOfContents": "Содержание", + "linkToThisSection": "Ссылка на этот раздел" }, "content": { - "note": "Note", - "video": "Video" + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, + "video": "Видео" + }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Параметры запроса", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Описание", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": 
"Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Статус", + "description": "Описание", + "liveResponse": "Live Response", + "example": "Пример" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } }, "notFound": { - "title": "Oops! This page was lost in space...", - "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", - "back": "Go Home" + "title": "Ой! Эта страница была утеряна...", + "subtitle": "Проверьте, верно ли указан адрес или перейдите на наш сайт по ссылке ниже.", + "back": "На главную страницу" } } diff --git a/website/src/pages/ru/index.json b/website/src/pages/ru/index.json index 11e1eef7f22c..28d369ba865d 100644 --- a/website/src/pages/ru/index.json +++ b/website/src/pages/ru/index.json @@ -1,99 +1,175 @@ { "title": "Главная страница", "hero": { - "title": "The Graph Docs", - "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", - "cta1": "How The Graph works", - "cta2": "Build your first subgraph" + "title": "Документация The Graph", + "description": "Запустите свой проект web3 с помощью инструментов для извлечения, преобразования и загрузки данных блокчейна.", + "cta1": "Как работает The Graph", + "cta2": "Создайте свой первый субграф" }, "products": { - "title": "The Graph’s Products", - "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "title": "The Graph's Products", + "description": "Выберите решение, которое соответствует Вашим потребностям, 
— взаимодействуйте с данными блокчейна так, как Вам удобно.",
    "subgraphs": {
      "title": "Субграфы",
-      "description": "Extract, process, and query blockchain data with open APIs.",
-      "cta": "Develop a subgraph"
+      "description": "Извлечение, обработка и запрос данных блокчейна с помощью открытых API.",
+      "cta": "Разработка субграфа"
    },
    "substreams": {
-      "title": "Substreams",
-      "description": "Fetch and consume blockchain data with parallel execution.",
-      "cta": "Develop with Substreams"
+      "title": "Субпотоки",
+      "description": "Получение и потребление данных блокчейна с параллельным исполнением.",
+      "cta": "Разработка с использованием Субпотоков"
    },
    "sps": {
-      "title": "Substreams-Powered Subgraphs",
-      "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
-      "cta": "Set up a Substreams-powered subgraph"
+      "title": "Субграфы, работающие на основе субпотоков",
+      "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
+      "cta": "Настройка субграфа, работающего на основе Субпотоков"
    },
    "graphNode": {
      "title": "Graph Node",
-      "description": "Index blockchain data and serve it via GraphQL queries.",
-      "cta": "Set up a local Graph Node"
+      "description": "Индексируйте данные блокчейна и обслуживайте их через запросы GraphQL.",
+      "cta": "Настройка локальной Graph Node"
    },
    "firehose": {
      "title": "Firehose",
-      "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
-      "cta": "Get started with Firehose"
+      "description": "Извлекайте данные блокчейна в плоские файлы, чтобы улучшить время синхронизации и возможности потоковой передачи.",
+      "cta": "Начало работы с Firehose"
    }
  },
  "supportedNetworks": {
    "title": "Поддерживаемые сети",
+    "details": "Network Details",
+    "services": "Services",
+    "type": "Тип",
+    "protocol": "Protocol",
+    "identifier": "Identifier",
+    "chainId": "Chain ID",
+    "nativeCurrency": "Native Currency",
+    "docs": "Документы",
+    "shortName": "Short Name",
+    
"guides": "Гайды", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { - "base": "The Graph supports {0}. To add a new network, {1}", - "networks": "networks", - "completeThisForm": "complete this form" + "base": "The Graph поддерживает {0}. Для добавления новой сети {1}", + "networks": "сети", + "completeThisForm": "заполнить эту форму" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Имя", + "id": "Идентификатор", + "subgraphs": "Субграфы", + "substreams": "Субпотоки", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Субпотоки", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Выставление счетов", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." 
+ }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." + } } }, "guides": { "title": "Гайды", "description": "", "explorer": { - "title": "Find Data in Graph Explorer", - "description": "Leverage hundreds of public subgraphs for existing blockchain data." + "title": "Поиск данных в Graph Explorer", + "description": "Использование сотен публичных субграфов для существующих данных блокчейна." }, "publishASubgraph": { - "title": "Publish a Subgraph", - "description": "Add your subgraph to the decentralized network." + "title": "Публикация субграфа", + "description": "Добавьте свой субграф в децентрализованную сеть." }, "publishSubstreams": { - "title": "Publish Substreams", - "description": "Launch your Substreams package to the Substreams Registry." + "title": "Публикация Субпотоков", + "description": "Запустите свой пакет Субпотоков в Реестр Субпотоков." }, "queryingBestPractices": { - "title": "Querying Best Practices", - "description": "Optimize your subgraph queries for faster, better results." + "title": "Лучшие практики запросов", + "description": "Оптимизируйте свои запросы субграфов для получения более быстрых и лучших результатов." }, "timeseries": { - "title": "Optimized Timeseries & Aggregations", - "description": "Streamline your subgraph for efficiency." + "title": "Оптимизация тайм-серий и агрегаций", + "description": "Оптимизируйте свой субграф для большей эффективности." }, "apiKeyManagement": { - "title": "API Key Management", - "description": "Easily create, manage, and secure API keys for your subgraphs." 
+ "title": "Управление API-ключами", + "description": "Легко создавайте, управляйте и защищайте ключи API для своих субграфов." }, "transferToTheGraph": { - "title": "Transfer to The Graph", - "description": "Seamlessly upgrade your subgraph from any platform." + "title": "Перенос в The Graph", + "description": "Легко обновляйте свой субграф с любой платформы." } }, "videos": { - "title": "Video Tutorials", - "watchOnYouTube": "Watch on YouTube", + "title": "Видеоуроки", + "watchOnYouTube": "Смотреть на YouTube", "theGraphExplained": { - "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "title": "Объяснение The Graph за 1 минуту", + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { - "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "title": "Что такое Делегирование?", + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { - "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "title": "Как индексировать Solana с помощью субграфа, работающего на базе Субпотоков", + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." 
} }, "time": { - "reading": "Reading time", - "duration": "Duration", + "reading": "Время на прочтение", + "duration": "Продолжительность", "minutes": "min" } } diff --git a/website/src/pages/ru/indexing/_meta-titles.json b/website/src/pages/ru/indexing/_meta-titles.json index 42f4de188fd4..b204530e4826 100644 --- a/website/src/pages/ru/indexing/_meta-titles.json +++ b/website/src/pages/ru/indexing/_meta-titles.json @@ -1,3 +1,3 @@ { - "tooling": "Indexer Tooling" + "tooling": "Инструментарий Индексатора" } diff --git a/website/src/pages/ru/indexing/chain-integration-overview.mdx b/website/src/pages/ru/indexing/chain-integration-overview.mdx index 3ee1ef3bc4bc..613d4b5151c4 100644 --- a/website/src/pages/ru/indexing/chain-integration-overview.mdx +++ b/website/src/pages/ru/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. Сколько времени займет процесс достижения полной поддержки протокола? diff --git a/website/src/pages/ru/indexing/new-chain-integration.mdx b/website/src/pages/ru/indexing/new-chain-integration.mdx index 427169610d41..8b23af33ebd1 100644 --- a/website/src/pages/ru/indexing/new-chain-integration.mdx +++ b/website/src/pages/ru/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: Интеграция новых чейнов --- -Чейны могут обеспечить поддержку субграфов в своей экосистеме, начав новую интеграцию `graph-node`. Субграфы — это мощный инструмент индексирования, открывающий перед разработчиками целый мир возможностей. Graph Node уже индексирует данные из перечисленных здесь чейнов. Если Вы заинтересованы в новой интеграции, для этого существуют 2 стратегии: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: все решения по интеграции Firehose включают Substreams, крупномасштабный механизм потоковой передачи на базе Firehose со встроенной поддержкой `graph-node`, позволяющий выполнять распараллеленные преобразования. 
@@ -47,15 +47,15 @@ title: Интеграция новых чейнов
 ## Рекомендации по EVM — разница между JSON-RPC & Firehose
-Хотя как JSON-RPC, так и Firehose оба подходят для субграфов, Firehose всегда востребован разработчиками, желающими создавать с помощью [Substreams](https://substreams.streamingfast.io). Поддержка Substreams позволяет разработчикам создавать [субграфы на основе субпотоков](/subgraphs/cookbook/substreams-powered-subgraphs/) для нового чейна и потенциально может повысить производительность Ваших субграфов. Кроме того, Firehose — в качестве замены уровня извлечения JSON-RPC `graph-node` — сокращает на 90 % количество вызовов RPC, необходимых для общего индексирования.
+While JSON-RPC and Firehose are both suitable for Subgraphs, Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces the number of RPC calls required for general indexing by 90%.
-- Все эти вызовы `getLogs` и циклические передачи заменяются единым потоком, поступающим в сердце `graph-node`; единой блочной моделью для всех обрабатываемых ею субграфов.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.
-> ПРИМЕЧАНИЕ: Интеграция на основе Firehose для чейнов EVM по-прежнему будет требовать от Индексаторов запуска ноды архива RPC чейна для правильного индексирования субрафов. Это происходит из-за неспособности Firehose предоставить состояние смарт-контракта, обычно доступное с помощью метода RPC `eth_call`.
(Стоит напомнить, что `eth_calls` не является хорошей практикой для разработчиков)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_calls` are not a good practice for developers)
 ## Конфигурация Graph Node
-Настроить Graph Node так же просто, как подготовить локальную среду. После того, как Ваша локальная среда настроена, Вы можете протестировать интеграцию, локально развернув субграф.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
 1. [Клонировать Graph Node](https://github.com/graphprotocol/graph-node)
@@ -67,4 +67,4 @@ title: Интеграция новых чейнов
 ## Субграфы, работающие на основе субпотоков (Substreams)
-Для интеграции Firehose/Substreams под управлением StreamingFast включена базовая поддержка фундаментальных модулей Substreams (например, декодированные транзакции, логи и события смарт-контрактов) и инструментов генерации кодов Substreams. Эти инструменты позволяют включать [субграфы на базе субпотоков](/substreams/sps/introduction/). Следуйте [Практическому руководству] (https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) и запустите `substreams codegen subgraph`, чтобы самостоятельно испробовать инструменты кодирования.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/).
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/src/pages/ru/indexing/overview.mdx b/website/src/pages/ru/indexing/overview.mdx index a1a21b206718..f66166d8c7af 100644 --- a/website/src/pages/ru/indexing/overview.mdx +++ b/website/src/pages/ru/indexing/overview.mdx @@ -5,43 +5,43 @@ sidebarTitle: Обзор Индексаторы — это операторы нод в сети The Graph, которые стейкают токены Graph (GRT) для предоставления услуг индексирования и обработки запросов. Индексаторы получают оплату за запросы и вознаграждение за свои услуги индексирования. Они также получают комиссию за запросы, которая возвращаются в соответствии с экспоненциальной функцией возврата. -Токены GRT, которые застейканы в протоколе, подлежат периоду "оттаивания" и могут быть срезаны, если индексаторы являются вредоносными и передают неверные данные приложениям или если они некорректно осуществляют индексирование. Индексаторы также получают вознаграждение за делегированный стейк от делегаторов, внося свой вклад в работу сети. +GRT, застейканные в протоколе, замораживаются на определённый период и могут быть уменьшены, если Индексаторы действуют недобросовестно и предоставляют приложениям неверные данные или неправильно выполняют индексирование. Кроме того, Индексаторы получают вознаграждения за стейк, который им передают Делегаторы, помогая тем самым поддерживать работу сети. -Индексаторы выбирают подграфы для индексирования на основе сигналов от кураторов, в которых кураторы стейкают токены GRT, чтобы обозначить, какие подграфы являются высококачественными и заслуживают приоритетного внимания. Потребители (к примеру, приложения) также могут задавать параметры, по которым индексаторы обрабатывают запросы к их подграфам, и устанавливать предпочтения по цене за запрос. 
+Индексаторы выбирают субграфы для индексирования на основе сигнала курирования субграфа, где Кураторы стейкают GRT, чтобы указать, какие субграфы являются качественными и должны быть в приоритете. Потребители (например, приложения) также могут задавать параметры для выбора Индексаторов, обрабатывающих запросы к их субграфам, и устанавливать предпочтения по стоимости запросов.
-## FAQ
+## Часто задаваемые вопросы
-### What is the minimum stake required to be an Indexer on the network?
-The minimum stake for an Indexer is currently set to 100K GRT.
+### Какова минимальная величина стейка, требуемая для того, чтобы быть Индексатором в сети?
+Минимальный стейк для Индексатора в настоящее время составляет 100 000 GRT.
-### What are the revenue streams for an Indexer?
+### Какие источники дохода у Индексатора?
-**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
+**Возмещение комиссий за запросы** – выплаты за обработку запросов в сети. Эти платежи проходят через каналы состояния между Индексатором и шлюзом. Каждый запрос от шлюза содержит оплату, а соответствующий ответ — доказательство достоверности результата запроса.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Награды за индексирование** – формируются за счет ежегодной инфляции протокола в размере 3% и распределяются между Индексаторами, которые индексируют развернутые субграфы для сети.
-### How are indexing rewards distributed?
+### Как распределяются награды за индексирование?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance.
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Награды за индексирование поступают из инфляции протокола, установленной на уровне 3% в год. Они распределяются между субграфами пропорционально общему сигналу кураторства на каждом из них, а затем пропорционально между Индексаторами в зависимости от их застейканного объема на данном субграфе. **Чтобы получить награду, распределение должно быть закрыто с действительным доказательством индексирования (POI), соответствующим стандартам, установленным арбитражной хартией.** -Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. +Сообщество создало множество инструментов для расчета наград; их собрание можно найти в [коллекции Гайдов Сообщества](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). Также актуальный список инструментов доступен в каналах #Delegators и #Indexers на [сервере Discord](https://discord.gg/graphprotocol). Здесь мы приводим ссылку на [рекомендованный оптимизатор распределения](https://github.com/graphprotocol/allocation-optimizer), интегрированный с программным стеком Индексатора. -### What is a proof of indexing (POI)? +### Что такое доказательство индексирования (POI)? 
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POI (доказательство индексирования) используется в сети для подтверждения того, что Индексатор действительно индексирует назначенные ему субграфы. При закрытии распределения необходимо предоставить POI для первого блока текущей эпохи, чтобы оно было квалифицировано для получения наград за индексирование. POI для блока представляет собой хеш всех транзакций хранилища объектов для конкретного развертывания субграфа вплоть до этого блока включительно. -### When are indexing rewards distributed? +### Когда распределяются награды за индексирование? -Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). +Аллокации постоянно накапливают награды, пока они активны и распределены в течение 28 эпох. Награды собираются Индексаторами и распределяются при закрытии их аллокаций. Это может происходить вручную, когда Индексатор сам решает их закрыть, или автоматически по истечении 28 эпох. Если после 28 эпох аллокацию закрывает Делегатор, награды не выплачиваются. В настоящее время одна эпоха длится примерно 24 часа. -### Can pending indexing rewards be monitored? +### Можно ли отслеживать ожидаемые награды за индексирование? 
-The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. +Контракт RewardsManager имеет функцию только для чтения [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316), которая позволяет проверить ожидаемые награды для конкретной аллокации. -Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: +Многие созданные сообществом панели отображают ожидаемые награды, и их можно легко проверить вручную, следуя этим шагам: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Выполните запрос к [основному субграфу](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one), чтобы получить идентификаторы всех активных аллокаций: ```graphql query indexerAllocations { @@ -57,138 +57,138 @@ query indexerAllocations { } ``` -Use Etherscan to call `getRewards()`: +Используйте Etherscan для вызова `getRewards()`: -- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) -- To call `getRewards()`: - - Expand the **9. getRewards** dropdown. - - Enter the **allocationID** in the input. - - Click the **Query** button. +- Перейдите на [интерфейс Etherscan к контракту Rewards](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract). +- Чтобы вызвать `getRewards()`: + - Разверните выпадающее меню **9. getRewards**. + - Введите **allocationID** в поле ввода. + - Нажмите кнопку **Query**. 
-### What are disputes and where can I view them? +### Что такое споры и где их можно посмотреть? -Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes. +Запросы и аллокации Индексатора могут быть оспорены в The Graph в течение периода спора. Период спора варьируется в зависимости от типа спора. Запросы/аттестации имеют окно спора в 7 эпох, тогда как аллокации – 56 эпох. После истечения этих периодов споры против аллокаций или запросов больше не могут быть открыты. Когда спор открывается, Fishermen (участники сети, открывающие споры) должны внести депозит минимум 10 000 GRT, который будет заморожен до завершения спора и вынесения решения. -Disputes have **three** possible outcomes, so does the deposit of the Fishermen. +Споры могут иметь **три** возможных исхода, как и депозит Fishermen. -- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed. -- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. -- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. +- Если спор отклонен, GRT, внесенные Fishermen в качестве депозита, будут сожжены, а оспариваемый Индексатор не понесет штраф. +- Если спор завершится вничью, депозит Fishermen будет возвращен, а оспариваемый Индексатор не понесет штраф. 
+- Если спор принят, депозит Fishermen будет возвращен, оспариваемый Индексатор понесет штраф, а Fishermen получит 50% от списанных GRT. -Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. +Споры можно просматривать в пользовательском интерфейсе на странице профиля Индексатора во вкладке `Disputes`. -### What are query fee rebates and when are they distributed? +### Что такое возврат комиссии за запросы и когда он распределяется? -Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. +Комиссии за запросы собираются шлюзом и распределяются индексаторам в соответствии с экспоненциальной функцией возврата (см. GIP [здесь](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Экспоненциальная функция возврата предлагается как способ гарантировать, что индексаторы добиваются наилучшего результата, добросовестно обслуживая запросы. Она работает, стимулируя Индексаторов выделять крупные объемы стейка (который может быть урезан в случае ошибки при обработке запроса) относительно суммы комиссий за запросы, которые они могут получить. -Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. +Как только аллокация закрывается, Индексатор может потребовать возврат комиссии. 
После запроса возврата, комиссии за запросы распределяются между Индексатором и его Делегаторами в соответствии с процентом комиссии за запросы и экспоненциальной функцией возврата. -### What is query fee cut and indexing reward cut? +### Что такое доля комиссии за запросы и доля вознаграждения за индексирование? -The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. +Параметры `queryFeeCut` и `indexingRewardCut` являются параметрами делегирования, которые Индексатор может настроить вместе с `cooldownBlocks`, чтобы контролировать распределение GRT между собой и Делегаторами. Инструкции по настройке параметров делегирования можно найти в последних шагах раздела [Стейкинг в протоколе](/indexing/overview/#stake-in-the-protocol). -- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. +- **queryFeeCut** – процент возврата комиссий за запросы, который будет распределяться в пользу Индексатора. Если установлено значение 95%, Индексатор получит 95% заработанных комиссий за запросы при закрытии аллокации, а оставшиеся 5% пойдут Делегаторам. -- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. +- **indexingRewardCut** – процент вознаграждений за индексирование, который будет распределяться в пользу Индексатора. 
Если установлено значение 95%, Индексатор получит 95% вознаграждений за индексирование при закрытии аллокации, а оставшиеся 5% будут распределены между Делегаторами. -### How do Indexers know which subgraphs to index? +### Как Индексаторы узнают, какие Субграфы индексировать? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Индексаторы могут отличаться, применяя продвинутые методы для принятия решений об индексировании Субграфов, но в общем случае они оценивают Субграфы на основе нескольких ключевых метрик: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Сигнал кураторства** — пропорция сигнала кураторства сети, применяемого к конкретному субграфу, является хорошим индикатором интереса к этому субграфу, особенно в фазе начальной загрузки, когда объем запросов постепенно увеличивается. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Собранные комиссии за запросы** – исторические данные о сумме комиссий за запросы, собранных для конкретного Субграфа, являются хорошим индикатором будущего спроса. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. 
+- **Объем стейка** – отслеживание поведения других Индексаторов или анализ доли общего стейка, выделенного на конкретные Субграфы, позволяет Индексатору оценивать предложение для запросов к Субграфам, что помогает выявлять Субграфы, которым сеть доверяет, или те, которые нуждаются в большем количестве ресурсов. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Субграфы без наград за индексирование** – некоторые Субграфы не приносят награды за индексирование, главным образом потому, что они используют неподдерживаемые функции, такие как IPFS, или делают запросы к другой сети за пределами основной сети. В интерфейсе будет отображаться сообщение, если Субграф не генерирует награды за индексирование. -### What are the hardware requirements? +### Каковы требования к аппаратному обеспечению? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. -- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Низкие** – достаточно для начала индексирования нескольких субграфов, но, вероятно, потребуется расширение. +- **Стандартные** – настройка по умолчанию, используется в примерах манифестов развертывания k8s/terraform. +- **Средние** – производительный Индексатор, поддерживающий 100 субграфов и 200–500 запросов в секунду. +- **Высокие** – готов индексировать все используемые субграфы и обрабатывать соответствующий трафик запросов. -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Настройка | Postgres
(ЦП) | Postgres
(память в ГБ) | Postgres
(диск в ТБ) | VMs
(ЦП) | VMs
(память в ГБ) |
+| ----------- | :----------------: | :-------------------------: | :-----------------------: | :-----------: | :--------------------: |
+| Низкая | 4 | 8 | 1 | 4 | 16 |
+| Стандартная | 8 | 30 | 1 | 12 | 48 |
+| Средняя | 16 | 64 | 2 | 32 | 64 |
+| Высокая | 72 | 468 | 3.5 | 48 | 184 |
-### What are some basic security precautions an Indexer should take?
+### Какие основные меры безопасности следует предпринять Индексатору?
-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Кошелек оператора** - настройка кошелька оператора является важной мерой безопасности, поскольку она позволяет Индексатору поддерживать разделение между своими ключами, которые контролируют величину стейка, и теми, которые контролируют ежедневные операции. Инструкции см. в разделе [Стейкинг в протоколе](/indexing/overview/#stake-in-the-protocol).
-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** – только сервис Индексатора должен быть доступен публично. Особое внимание следует уделить защите административных портов и доступа к базе данных: JSON-RPC-конечная точка Graph Node (порт по умолчанию: **8030**), конечная точка API управления Индексатором (порт по умолчанию: **18000**) и конечная точка базы данных Postgres (порт по умолчанию: **5432**) **не должны** быть открыты.
-## Infrastructure +## Инфраструктура -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +В центре инфраструктуры Индексатора находится Graph Node, который отслеживает индексируемые сети, извлекает и загружает данные в соответствии с определением Субграфа и предоставляет их в виде [GraphQL API](/about/#how-the-graph-works). Graph Node должна быть подключена к конечной точке, предоставляющей данные из каждой индексируемой сети, к ноде IPFS для получения данных, к базе данных PostgreSQL для хранения информации, а также к компонентам Индексатора, которые обеспечивают его взаимодействие с сетью. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **База данных PostgreSQL** – это основное хранилище для Graph Node, где хранятся данные Субграфа. Сервис и агент Индексатора также используют эту базу данных для хранения данных каналов состояния, моделей стоимости, правил индексирования и действий по распределению. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. 
+- **Конечная точка данных** – для сетей, совместимых с EVM, Graph Node должна быть подключена к конечной точке, предоставляющей JSON-RPC API, совместимый с EVM. Это может быть как один клиент, так и более сложная конфигурация с балансировкой нагрузки между несколькими клиентами. Важно учитывать, что некоторые Субграфы требуют определённых возможностей клиента, таких как архивный режим и/или API трассировки Parity. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS-нода (версия ниже 5)** – метаданные развертывания Субграфа хранятся в сети IPFS. Graph Node в основном обращается к IPFS-ноде во время развертывания Субграфа, чтобы получить манифест Субграфа и все связанные файлы. Индексаторы сети не обязаны размещать свою собственную IPFS-ноду, так как для сети уже развернута IPFS-нода по адресу: [https://ipfs.network.thegraph.com](https://ipfs.network.thegraph.com). -- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. +- **Сервис Индексатора** – обрабатывает все необходимые внешние коммуникации с сетью. Делится моделями стоимости и статусами индексирования, передаёт запросы от шлюзов в Graph Node и управляет платежами за запросы через каналы состояния со шлюзом. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Агент Индексатора** – обеспечивает взаимодействие Индексатора в блокчейне, включая регистрацию в сети, управление развертыванием Субграфов в его Graph Node и управление распределением ресурсов. -- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. +- **Сервер метрик Prometheus** – Graph Node и компоненты Индексатора записывают свои метрики на сервер метрик. -Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. +Примечание: для поддержки гибкого масштабирования рекомендуется разделять обработку запросов и индексирование между разными наборами нод: нодами запросов и нодами индексирования. -### Ports overview +### Обзор портов -> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. +> **Важно**: будьте осторожны при открытии портов в публичный доступ – **административные порты** должны быть закрыты. Это касается JSON-RPC Graph Node и управляющих конечных точек Индексатора, описанных ниже. #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | +| ---- | -------------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------- | +| 8000 | GraphQL HTTP-сервер
(для запросов к Субграфу) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(для подписок на Субграф) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(для управления развертываниями) | / | \--admin-port | - | +| 8030 | API статуса индексирования Субграфа | /graphql | \--index-node-port | - | +| 8040 | Метрики Prometheus | /metrics | \--metrics-port | - | -#### Indexer Service +#### Сервис Индексатора -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP-сервер
(для платных запросов к Субграфу) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Метрики Prometheus | /metrics | \--metrics-port | - | -#### Indexer Agent +#### Агент Индексатора -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | -| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | +| ---- | --------------------------- | -------- | -------------------------- | --------------------------------------- | +| 8000 | API управления Индексатором | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | -### Setup server infrastructure using Terraform on Google Cloud +### Настройка серверной инфраструктуры с использованием Terraform в Google Cloud -> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. +> Примечание: Индексаторы могут также использовать AWS, Microsoft Azure или Alibaba. -#### Install prerequisites +#### Установка необходимых компонентов - Google Cloud SDK -- Kubectl command line tool +- Инструмент командной строки Kubectl - Terraform -#### Create a Google Cloud Project +#### Создание проекта в Google Cloud -- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). +- Клонируйте или перейдите в [репозиторий Индексатора](https://github.com/graphprotocol/indexer). -- Navigate to the `./terraform` directory, this is where all commands should be executed. +- Перейдите в каталог `./terraform`, именно здесь должны быть выполнены все команды. ```sh cd terraform ``` -- Authenticate with Google Cloud and create a new project. +- Аутентифицируйтесь в Google Cloud и создайте новый проект. 
```sh gcloud auth login @@ -196,9 +196,9 @@ project= gcloud projects create --enable-cloud-apis $project ``` -- Use the Google Cloud Console's billing page to enable billing for the new project. +- Используйте страницу выставления счетов в Google Cloud Console, чтобы включить выставление счетов для нового проекта. -- Create a Google Cloud configuration. +- Создайте конфигурацию Google Cloud. ```sh proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") @@ -208,7 +208,7 @@ gcloud config set compute/region us-central1 gcloud config set compute/zone us-central1-a ``` -- Enable required Google Cloud APIs. +- Включите необходимые API Google Cloud. ```sh gcloud services enable compute.googleapis.com @@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com gcloud services enable sqladmin.googleapis.com ``` -- Create a service account. +- Создайте сервисный аккаунт. ```sh svc_name= gcloud iam service-accounts create $svc_name \ --description="Service account for Terraform" \ --display-name="$svc_name" gcloud iam service-accounts list -# Get the email of the service account from the list +# Получите email сервисного аккаунта из списка svc=$(gcloud iam service-accounts list --format='get(email)' --filter="displayName=$svc_name") gcloud iam service-accounts keys create .gcloud-credentials.json \ @@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \ --role roles/editor ``` -- Enable peering between database and Kubernetes cluster that will be created in the next step. +- Включите пиринг между базой данных и кластером Kubernetes, который будет создан на следующем шаге. ```sh gcloud compute addresses create google-managed-services-default \ @@ -249,7 +249,7 @@ gcloud services vpc-peerings connect \ --ranges=google-managed-services-default ``` -- Create minimal terraform configuration file (update as needed). +- Создайте минимальный файл конфигурации Terraform (обновите при необходимости).
```sh indexer= @@ -260,24 +260,24 @@ database_password = "" EOF ``` -#### Use Terraform to create infrastructure +#### Используйте Terraform для создания инфраструктуры -Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`. +Прежде чем выполнять какие-либо команды, ознакомьтесь с файлом [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) и создайте файл `terraform.tfvars` в этом каталоге (или измените тот, который мы создали на предыдущем шаге). Для каждой переменной, значение которой вы хотите изменить по умолчанию или которую необходимо настроить, введите соответствующую настройку в `terraform.tfvars`. -- Run the following commands to create the infrastructure. +- Выполните следующие команды для создания инфраструктуры. ```sh -# Install required plugins +# Установить необходимые плагины terraform init -# View plan for resources to be created +# Просмотреть план создаваемых ресурсов terraform plan -# Create the resources (expect it to take up to 30 minutes) +# Создать ресурсы (может занять до 30 минут) terraform apply ``` -Download credentials for the new cluster into `~/.kube/config` and set it as your default context. +Скачайте учетные данные для нового кластера в файл `~/.kube/config` и установите его как ваш контекст по умолчанию. 
```sh gcloud container clusters get-credentials $indexer @@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name' | grep $indexer) ``` -#### Creating the Kubernetes components for the Indexer +#### Создание компонентов Kubernetes для Индексатора -- Copy the directory `k8s/overlays` to a new directory `$dir,` and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`. +- Скопируйте директорию `k8s/overlays` в новую директорию `$dir` и измените запись `bases` в файле `$dir/kustomization.yaml`, чтобы она указывала на директорию `k8s/base`. -- Read through all the files in `$dir` and adjust any values as indicated in the comments. +- Прочитайте все файлы в директории `$dir` и скорректируйте значения в соответствии с комментариями. -Deploy all resources with `kubectl apply -k $dir`. +Разверните все ресурсы с помощью команды `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) — это реализация на языке Rust с открытым исходным кодом, которая обрабатывает события блокчейна Ethereum, чтобы детерминированно обновлять хранилище данных, доступное для запросов через конечную точку GraphQL. Разработчики используют Субграфы для определения своей схемы и набора мэппингов для преобразования данных, полученных из блокчейна, а Graph Node берет на себя синхронизацию всего блокчейна, отслеживание новых блоков и предоставление данных через конечную точку GraphQL.
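После того как Graph Node синхронизировала Субграф, его данные доступны через конечную точку GraphQL. Ниже приведен гипотетический набросок на shell: имя Субграфа `example/my-subgraph` и поле `tokens` придуманы исключительно для иллюстрации и не взяты из этой документации.

```sh
# Формируем тело GraphQL-запроса к Субграфу (поле схемы условное).
QUERY='{ tokens(first: 5) { id } }'
PAYLOAD=$(printf '{"query": "%s"}' "$QUERY")
echo "$PAYLOAD"

# Отправка требует запущенной Graph Node, поэтому команда закомментирована:
# curl -s -X POST -H 'Content-Type: application/json' \
#   --data "$PAYLOAD" \
#   http://localhost:8000/subgraphs/name/example/my-subgraph
```

Порт 8000 и маршрут `/subgraphs/name/...` соответствуют таблице портов Graph Node выше.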
-#### Getting started from source +#### Начало работы с исходным кодом -#### Install prerequisites +#### Установка необходимых компонентов - **Rust** @@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`. - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Дополнительные требования для пользователей Ubuntu** - для запуска Graph Node на Ubuntu может потребоваться установить несколько дополнительных пакетов. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Настройка -1. Start a PostgreSQL database server +1. Запустите сервер базы данных PostgreSQL ```sh initdb -D .postgres pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Клонируйте репозиторий [Graph Node](https://github.com/graphprotocol/graph-node) и соберите исходный код, выполнив команду `cargo build`. -3. Now that all the dependencies are setup, start the Graph Node: +3. Теперь, когда все зависимости настроены, запустите Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -334,132 +334,132 @@ cargo run -p graph-node --release -- \ --ipfs https://ipfs.network.thegraph.com ``` -#### Getting started using Docker +#### Начало работы с Docker -#### Prerequisites +#### Предварительные требования -- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`. +- **Нода Ethereum** — по умолчанию конфигурация Docker Compose использует mainnet и адрес [http://host.docker.internal:8545](http://host.docker.internal:8545) для подключения к ноде Ethereum на вашей хост-машине.
Вы можете заменить это имя сети и URL, обновив файл `docker-compose.yaml`. -#### Setup +#### Настройка -1. Clone Graph Node and navigate to the Docker directory: +1. Клонируйте Graph Node и перейдите в директорию Docker: ```sh git clone https://github.com/graphprotocol/graph-node cd graph-node/docker ``` -2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml `using the included script: +2. Только для пользователей Linux — используйте IP-адрес хоста вместо `host.docker.internal` в файле `docker-compose.yaml` с помощью прилагаемого скрипта: ```sh ./setup.sh ``` -3. Start a local Graph Node that will connect to your Ethereum endpoint: +3. Запустите локальную Graph Node, которая будет подключаться к Вашей конечной точке Ethereum: ```sh docker-compose up ``` -### Indexer components +### Компоненты Индексатора -To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: +Для успешного участия в сети требуется почти постоянный мониторинг и взаимодействие, поэтому мы разработали набор приложений на TypeScript для упрощения участия Индексаторов в сети. Существует три компонента для Индексаторов: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Агент Индексатора** — агент отслеживает сеть и собственную инфраструктуру Индексатора и управляет тем, какие развертывания Субграфов индексируются, на какие из них создаются аллокации в блокчейне и сколько выделяется на каждое из них.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Сервис Индексатора** — единственный компонент, который необходимо открывать для внешнего доступа. Сервис передает запросы к Субграфам в Graph Node, управляет каналами состояния для оплаты запросов, а также делится важной информацией для принятия решений с клиентами, такими как шлюзы. -- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. +- **CLI Индексатора** — интерфейс командной строки для управления агентом Индексатора. Он позволяет Индексаторам управлять моделями затрат, ручными распределениями, очередью действий и правилами индексирования. -#### Getting started +#### Начало работы -The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components! +Агент Индексатора и сервис Индексатора должны быть размещены рядом с Вашей инфраструктурой Graph Node. Существует множество способов настройки виртуальных сред выполнения для компонентов Индексатора. Здесь мы объясним, как запустить их на физическом сервере, используя NPM-пакеты или исходный код, а также через Kubernetes и Docker на Google Cloud Kubernetes Engine.
Если эти примеры настроек не подходят для Вашей инфраструктуры, скорее всего, найдется руководство сообщества, на которое можно опереться. Присоединяйтесь к нам в [Discord](https://discord.gg/graphprotocol)! Не забудьте [застейкать GRT](https://thegraph.com/docs/indexing/overview/#stake-in-the-protocol) перед запуском компонентов Индексатора! -#### From NPM packages +#### Из пакетов NPM ```sh npm install -g @graphprotocol/indexer-service npm install -g @graphprotocol/indexer-agent -# Indexer CLI is a plugin for Graph CLI, so both need to be installed: +# CLI Индексатора является плагином для Graph CLI, поэтому необходимо установить оба пакета: npm install -g @graphprotocol/graph-cli npm install -g @graphprotocol/indexer-cli -# Indexer service +# Сервис Индексатора graph-indexer-service start ... -# Indexer agent +# Агент Индексатора graph-indexer-agent start ... -# Indexer CLI -#Forward the port of your agent pod if using Kubernetes +# CLI Индексатора +# Проброс порта Вашего pod-агента, если используется Kubernetes kubectl port-forward pod/POD_ID 18000:8000 graph indexer connect http://localhost:18000/ graph indexer ... ``` -#### From source +#### Из исходного кода ```sh -# From Repo root directory +# Из корневого каталога репозитория yarn -# Indexer Service +# Сервис Индексатора cd packages/indexer-service ./bin/graph-indexer-service start ... -# Indexer agent +# Агент Индексатора cd packages/indexer-agent ./bin/graph-indexer-agent start ... -# Indexer CLI +# CLI Индексатора cd packages/indexer-cli ./bin/graph-indexer-cli indexer connect http://localhost:18000/ ./bin/graph-indexer-cli indexer ...
``` -#### Using docker +#### Использование docker -- Pull images from the registry +- Извлеките образы из реестра ```sh docker pull ghcr.io/graphprotocol/indexer-service:latest docker pull ghcr.io/graphprotocol/indexer-agent:latest ``` -Or build images locally from source +Или создайте образы локально из исходного кода ```sh -# Indexer service +# Сервис Индексатора docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-service \ -t indexer-service:latest \ -# Indexer agent +# Агент Индексатора docker build \ --build-arg NPM_TOKEN= \ -f Dockerfile.indexer-agent \ -t indexer-agent:latest \ ``` -- Run the components +- Запустите компоненты ```sh docker run -p 7600:7600 -it indexer-service:latest ... docker run -p 18000:8000 -it indexer-agent:latest ... ``` -**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). +**ПРИМЕЧАНИЕ**: после запуска контейнеров сервис Индексатора должен быть доступен по адресу [http://localhost:7600](http://localhost:7600), а агент Индексатора должен предоставлять API управления Индексатором по адресу [http://localhost:18000/](http://localhost:18000/). -#### Using K8s and Terraform +#### Использование K8s и Terraform -See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section +Посмотрите раздел [Настройка серверной инфраструктуры с использованием Terraform в Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) -#### Usage +#### Применение -> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). 
+> **ПРИМЕЧАНИЕ**: все переменные конфигурации времени выполнения могут быть применены либо в качестве параметров команды при запуске, либо с использованием переменных среды в формате `COMPONENT_NAME_VARIABLE_NAME` (например, `INDEXER_AGENT_ETHEREUM`). -#### Indexer agent +#### Агент Индексатора ```sh graph-indexer-agent start \ @@ -488,7 +488,7 @@ graph-indexer-agent start \ | pino-pretty ``` -#### Indexer service +#### Сервис Индексатора ```sh SERVER_HOST=localhost \ @@ -514,58 +514,58 @@ graph-indexer-service start \ | pino-pretty ``` -#### Indexer CLI +#### CLI Индексатора -The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. +CLI Индексатора — это плагин для [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), доступный в терминале через команду `graph indexer`. ```sh graph indexer connect http://localhost:18000 graph indexer status ``` -#### Indexer management using Indexer CLI +#### Управление Индексатором с помощью CLI Индексатора -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. 
+Предлагаемым инструментом для взаимодействия с **API управления Индексатором** является **CLI Индексатора**, расширение для **Graph CLI**. Агенту Индексатора нужны входные данные от Индексатора, чтобы автономно взаимодействовать с сетью от имени Индексатора. Поведение агента Индексатора определяется режимом **управления распределениями** и **правилами индексирования**. В автоматическом режиме Индексатор может использовать **правила индексирования**, чтобы применить свою стратегию выбора Субграфов, которые он будет индексировать и для которых будет обслуживать запросы. Эти правила управляются через GraphQL API, который предоставляется агентом и называется API управления Индексатором. В ручном режиме Индексатор может создавать действия по распределению, используя **очередь действий**, и явно утверждать их перед выполнением. В режиме контроля **правила индексирования** используются для пополнения **очереди действий**, и для выполнения действий также требуется явное одобрение. -#### Usage +#### Применение -The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. +**CLI Индексатора** подключается к агенту Индексатора, обычно через проброс портов, поэтому CLI не обязательно должен работать на том же сервере или кластере. Чтобы помочь вам начать работу и предоставить некоторый контекст, CLI будет кратко описан здесь. -- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely.
(Example: `kubectl port-forward pod/ 8000:8000`) +- `graph indexer connect ` - подключение к API управления Индексатором. Обычно соединение с сервером устанавливается через проброс портов, так что CLI можно легко использовать удаленно. (Пример: `kubectl port-forward pod/ 8000:8000`) -- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent. +- `graph indexer rules get [options] [ ...]` - получить одно или несколько правил индексирования, используя `all` в качестве ``, чтобы получить все правила, или `global`, чтобы получить глобальные настройки по умолчанию. Дополнительный аргумент `--merged` можно использовать, чтобы указать, что правила, специфичные для развертывания, будут объединены с глобальным правилом. Именно так они применяются в агенте Индексатора. -- `graph indexer rules set [options] ...` - Set one or more indexing rules. +- `graph indexer rules set [options] ...` - установить одно или несколько правил индексирования. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - запустить индексирование развертывания Субграфа, если оно доступно, и установить для него `decisionBasis` в значение `always`, чтобы агент Индексатора всегда выбирал его для индексирования. Если глобальное правило установлено на `always`, то все доступные Субграфы в сети будут индексироваться.
-- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. +- `graph indexer rules stop [options] ` - остановить индексирование развертывания и установить для него `decisionBasis` в значение `never`, чтобы агент Индексатора пропускал это развертывание при принятии решения о том, какие развертывания индексировать. -- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. +- `graph indexer rules maybe [options] ` — установить `decisionBasis` для развертывания в значение `rules`, чтобы агент Индексатора использовал правила индексирования для принятия решения о том, индексировать ли это развертывание. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` — получить одно или несколько действий, используя `all`, или оставить `action-id` пустым, чтобы получить все действия. Дополнительный аргумент `--status` можно использовать для вывода всех действий с определенным статусом. 
-- `graph indexer action queue allocate ` - Queue allocation action +- `graph indexer action queue allocate ` — добавить в очередь действие на распределение -- `graph indexer action queue reallocate ` - Queue reallocate action +- `graph indexer action queue reallocate ` — добавить в очередь действие на перераспределение -- `graph indexer action queue unallocate ` - Queue unallocate action +- `graph indexer action queue unallocate ` — добавить в очередь действие на отмену распределения -- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator +- `graph indexer actions cancel [ ...]` - отменить все действия в очереди, если идентификатор не указан, в противном случае отменить массив идентификаторов, разделенных пробелом -- `graph indexer actions approve [ ...]` - Approve multiple actions for execution +- `graph indexer actions approve [ ...]` - одобрить несколько действий для выполнения -- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately +- `graph indexer actions execute approve` - принудительно выполнить одобренные действия немедленно -All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. +Все команды, которые выводят правила, могут выбирать между поддерживаемыми форматами вывода (`table`, `yaml` и `json`) с помощью аргумента `--output`. -#### Indexing rules +#### Правила индексирования -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment.
If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Правила индексирования могут быть применены как глобальные настройки по умолчанию или для конкретных развертываний Субграфов с использованием их идентификаторов. Поля `deployment` и `decisionBasis` являются обязательными, в то время как все остальные поля — опциональными. Когда правило индексирования имеет значение `rules` в поле `decisionBasis`, агент Индексатора будет сравнивать ненулевые пороговые значения этого правила со значениями, полученными из сети для соответствующего развертывания. Если развертывание Субграфа имеет значения выше (или ниже) любого из пороговых значений, оно будет выбрано для индексирования. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +Например, если глобальное правило имеет `minStake` равное **5** (GRT), любое развертывание Субграфа, на которое выделено более 5 (GRT) стейка, будет проиндексировано. Пороговые правила включают `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` и `minAverageQueryFees`. -Data model: +Модель данных: ```graphql type IndexingRule { @@ -599,7 +599,7 @@ IndexingDecisionBasis { } ``` -Example usage of indexing rule: +Пример применения правила индексирования: ``` graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK @@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK ``` -#### Actions queue CLI +#### CLI для очереди действий -The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+`indexer-cli` предоставляет модуль `actions` для ручной работы с очередью действий. Он использует **GraphQL API**, размещенный на сервере управления Индексатором, для взаимодействия с очередью действий. -The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: +Исполнитель действий будет извлекать элементы из очереди для выполнения только в том случае, если у них статус `ActionStatus = approved`. В рекомендованном процессе действия добавляются в очередь со статусом `ActionStatus = queued`, после чего они должны быть одобрены, чтобы быть выполненными в блокчейне. Общий процесс будет выглядеть следующим образом: -- Action added to the queue by the 3rd party optimizer tool or indexer-cli user -- Indexer can use the `indexer-cli` to view all queued actions -- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. -- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. -- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. -- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.
+- Действие добавляется в очередь сторонним инструментом оптимизации или пользователем indexer-cli +- Индексатор может использовать `indexer-cli` для просмотра всех действий в очереди +- Индексатор (или другое программное обеспечение) может одобрять или отменять действия в очереди с помощью `indexer-cli`. Команды одобрения и отмены принимают массив идентификаторов действий в качестве входных данных. +- Исполнитель регулярно опрашивает очередь на наличие одобренных действий. Он извлекает одобренные действия из очереди, пытается выполнить их и обновляет значения в базе данных в зависимости от результата выполнения, присваивая статус `success` или `failed`. +- Если действие выполнено успешно, исполнитель убедится, что существует правило индексирования, которое указывает агенту, как управлять выделением ресурсов в дальнейшем. Это особенно полезно, когда выполняются ручные действия, в то время как агент находится в режиме `auto` или `oversight`. +- Индексатор может отслеживать очередь действий, чтобы увидеть историю выполнения действий и, если необходимо, повторно одобрить и обновить элементы действий, если они не были выполнены. Очередь действий предоставляет историю всех действий, которые были добавлены в очередь и выполнены. 
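Описанный выше жизненный цикл (`queued` → `approved` → `success`/`failed`) можно схематично представить на Python. Это условный набросок: имена `Action`, `approve` и `execute_approved` гипотетические и не являются частью реального `indexer-agent`, они лишь иллюстрируют модель статусов.

```python
# Упрощенный набросок жизненного цикла действия в очереди:
# queued -> approved -> (success | failed).
# Имена классов и функций условны и приведены только для иллюстрации.

QUEUED, APPROVED, SUCCESS, FAILED = "queued", "approved", "success", "failed"

class Action:
    def __init__(self, action_id, action_type):
        self.id = action_id
        self.type = action_type
        # рекомендованный путь: действие попадает в очередь со статусом queued
        self.status = QUEUED

def approve(queue, ids):
    # команда одобрения принимает массив идентификаторов действий
    for a in queue:
        if a.id in ids and a.status == QUEUED:
            a.status = APPROVED

def execute_approved(queue, try_onchain):
    # исполнитель берет из очереди только действия со статусом approved,
    # пытается выполнить их и записывает итоговый статус в базу данных
    for a in queue:
        if a.status == APPROVED:
            a.status = SUCCESS if try_onchain(a) else FAILED

queue = [Action(1, "allocate"), Action(2, "unallocate"), Action(3, "reallocate")]
approve(queue, [1, 3])
# имитация выполнения: пусть действие типа "reallocate" завершается сбоем
execute_approved(queue, lambda a: a.type != "reallocate")

print([a.status for a in queue])  # действие 2 осталось queued и не выполнялось
```

Неодобренные действия остаются в очереди, а неуспешные сохраняют статус `failed` и могут быть одобрены повторно — очередь хранит историю всех действий.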
-Data model: +Модель данных: ```graphql Type ActionInput { @@ -657,7 +657,7 @@ ActionType { } ``` -Example usage from source: +Пример использования из исходного кода: ```bash graph indexer actions get all @@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5 graph indexer actions execute approve ``` -Note that supported action types for allocation management have different input requirements: +Обратите внимание, что поддерживаемые типы действий для управления аллокацией имеют различные требования к входным данным: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - выделение стейка для конкретного развертывания субграфа - - required action params: + - необходимые параметры действия: - deploymentID - amount -- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere +- `Unallocate` — закрыть аллокацию, освободив стейк для перераспределения в другое место - - required action params: + - необходимые параметры действия: - allocationID - deploymentID - - optional action params: + - необязательные параметры действия: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (принудительно использует указанный POI, даже если он не совпадает с тем, что предоставляет graph-node) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - атомарно закрывает распределение и открывает новое распределение для того же развертывания субграфа - - required action params: + - необходимые параметры действия: - allocationID - deploymentID - amount - - optional action params: + - необязательные параметры действия: - poi - - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + - force (принудительно использует указанный POI, даже если он не совпадает с тем, что предоставляет graph-node) -#### Cost models +#### Модели стоимости -Cost models provide dynamic pricing for queries 
based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Модели стоимости обеспечивают динамическое ценообразование для запросов на основе рыночных условий и атрибутов запроса. Сервис Индексатора делится моделью стоимости со шлюзами для каждого субграфа, на запросы к которому он планирует отвечать. Шлюзы, в свою очередь, используют модель стоимости для принятия решений о выборе Индексатора для каждого запроса и для ведения переговоров о плате с выбранными Индексаторами.

#### Agora

-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
+Язык Agora предоставляет гибкий формат для объявления моделей стоимости запросов. Модель стоимости Agora — это последовательность операторов, которые выполняются по порядку для каждого запроса верхнего уровня в GraphQL-запросе. Для каждого запроса верхнего уровня первый оператор, который ему соответствует, определяет цену этого запроса.

-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
+Оператор состоит из предиката, который используется для сопоставления запросов GraphQL, и выражения стоимости, которое при вычислении выдает стоимость в GRT в виде десятичного числа.
Значения, находящиеся в позиции именованных аргументов запроса, могут быть захвачены в предикате и использованы в выражении. Глобальные переменные также могут быть заданы и подставлены вместо заполнителей в выражении.

-Example cost model:
+Пример модели стоимости:

```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
+# Это выражение захватывает значение skip,
+# использует логическое выражение в предикате для соответствия конкретным запросам, использующим `skip`,
+# и выражение для вычисления стоимости на основе значения `skip` и глобальной переменной SYSTEM_LOAD
query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;

-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
+# Этот шаблон по умолчанию будет соответствовать любому выражению GraphQL.
+# Он использует глобальную переменную, подставленную в выражение для вычисления стоимости
default => 0.1 * $SYSTEM_LOAD;
```

-Example query costing using the above model:
+Пример расчета стоимости запросов с использованием вышеуказанной модели:

-| Query | Price |
-| ---------------------------------------------------------------------------- | ------- |
-| { pairs(skip: 5000) { id } } | 0.5 GRT |
-| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
+| Запрос | Цена |
+| ----------------------------------------------------------------------------- | ------- |
+| { pairs(skip: 5000) { id } } | 0.5 GRT |
+| { tokens { symbol } } | 0.1 GRT |
+| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |

-#### Applying the cost model
+#### Применение модели стоимости

-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
+Модели стоимости применяются через CLI Индексатора, который передает их в API управления Индексатором для хранения в базе данных. После этого Индексатор-сервис будет получать эти модели стоимости и передавать их шлюзам, когда те запрашивают их.

```sh
indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
indexer cost set model my_model.agora
```

-## Interacting with the network
+## Взаимодействие с сетью

-### Stake in the protocol
+### Стейкинг в протоколе

-The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
+Первые шаги для участия в сети в качестве Индексатора заключаются в одобрении протокола (approve), стейкинге средств и (по желанию) настройке адреса оператора для повседневных взаимодействий с протоколом.
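Семантику модели Agora из примера выше — операторы проверяются по порядку, и первый совпавший предикат определяет цену запроса верхнего уровня — можно схематично воспроизвести на Python. Это условный набросок, а не реальный интерпретатор Agora; при `SYSTEM_LOAD = 1.0` он воспроизводит цены из таблицы выше.

```python
# Условный набросок семантики Agora: операторы проверяются по порядку,
# первый совпавший предикат определяет цену запроса верхнего уровня.
# Это НЕ реальный интерпретатор Agora, а иллюстрация логики ценообразования.

SYSTEM_LOAD = 1.0  # глобальная переменная; цены в таблице выше соответствуют значению 1.0

def price_top_level(name, args):
    # Оператор 1: query { pairs(skip: $skip) { id } } when $skip > 2000
    if name == "pairs" and args.get("skip", 0) > 2000:
        return 0.0001 * args["skip"] * SYSTEM_LOAD
    # Оператор по умолчанию: default => 0.1 * $SYSTEM_LOAD
    return 0.1 * SYSTEM_LOAD

def price_query(top_level_queries):
    # Стоимость GraphQL-запроса — сумма цен его запросов верхнего уровня
    return sum(price_top_level(name, args) for name, args in top_level_queries)

# Три запроса из таблицы выше:
assert price_query([("pairs", {"skip": 5000})]) == 0.5                            # 0.5 GRT
assert price_query([("tokens", {})]) == 0.1                                       # 0.1 GRT
assert round(price_query([("pairs", {"skip": 5000}), ("tokens", {})]), 6) == 0.6  # 0.6 GRT
```

Обратите внимание, что запрос с двумя запросами верхнего уровня оплачивается как сумма двух отдельных совпадений: `pairs` попадает под первый оператор, `tokens` — под оператор по умолчанию.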
-> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools).
+> Примечание: В целях выполнения этих инструкций будет использоваться Remix для взаимодействия с контрактом, но Вы можете использовать любой инструмент по своему выбору (например, [OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) и [MyCrypto](https://www.mycrypto.com/account) — несколько известных инструментов).

-Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network.
+После того как Индексатор застейкает GRT в протокол, [компоненты Индексатора](/indexing/overview/#indexer-components) могут быть запущены и начать взаимодействие с сетью.

-#### Approve tokens
+#### Одобрение токенов

-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Откройте [приложение Remix](https://remix.ethereum.org/) в браузере

-2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
+2. В `File Explorer` создайте файл с именем **GraphToken.abi** с [ABI токена](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).

-3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
+3. С выбранным и открытым в редакторе файлом `GraphToken.abi` перейдите в раздел `Deploy and run transactions` в интерфейсе Remix.

-4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
+4. В разделе среды выберите `Injected Web3`, а в разделе `Account` выберите адрес своего Индексатора.

-5.
Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. +5. Установите адрес контракта GraphToken — вставьте адрес контракта GraphToken (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) рядом с полем `At Address` и нажмите кнопку `At address`, чтобы применить. -6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). +6. Вызовите функцию `approve(spender, amount)`, чтобы одобрить контракт стейкинга. В поле `spender` укажите адрес контракта стейкинга (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`), а в поле `amount` укажите количество токенов для стейкинга (в wei). -#### Stake tokens +#### Стейкинг токенов -1. Open the [Remix app](https://remix.ethereum.org/) in a browser +1. Откройте [приложение Remix](https://remix.ethereum.org/) в браузере -2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI. +2. В `File Explorer` создайте файл с именем **Staking.abi** и добавьте в него ABI контракта для стейкинга. -3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. +3. С файлом `Staking.abi`, выбранным и открытым в редакторе, перейдите в раздел `Deploy and run transactions` в интерфейсе Remix. -4. Under environment select `Injected Web3` and under `Account` select your Indexer address. +4. В разделе Среды выберите `Injected Web3`, а в разделе `Account` выберите адрес своего Индексатора. -5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. +5. 
Установите адрес контракта стейкинга — вставьте адрес контракта стейкинга (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) рядом с полем `At Address` и нажмите кнопку `At address`, чтобы применить. -6. Call `stake()` to stake GRT in the protocol. +6. Вызовите функцию `stake()`, чтобы застейкать GRT в протокол. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Необязательно) Индексаторы могут одобрить другой адрес в качестве оператора для своей инфраструктуры Индексатора, чтобы разделить ключи, которые контролируют средства, и те, которые выполняют повседневные действия, такие как выделение на субграфах и обслуживание (оплачиваемых) запросов. Чтобы установить оператора, вызовите функцию `setOperator()`, указав адрес оператора. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Необязательно) Чтобы контролировать распределение вознаграждений и стратегически привлекать Делегаторов, Индексаторы могут обновить свои параметры делегирования, изменив `indexingRewardCut` (доли на миллион), `queryFeeCut` (доли на миллион) и `cooldownBlocks` (количество блоков). Для этого вызовите функцию `setDelegationParameters()`. 
Пример ниже устанавливает `queryFeeCut` так, чтобы 95% возмещений за запросы получал Индексатор, а 5% — Делегаторы, устанавливает `indexingRewardCut` так, чтобы 60% вознаграждений за индексирование получал Индексатор, а 40% — Делегаторы, и устанавливает период `cooldownBlocks` на 500 блоков.

```
setDelegationParameters(950000, 600000, 500)
```

-### Setting delegation parameters
+### Настройка параметров делегирования

-The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.
+Функция `setDelegationParameters()` в [стейкинг-контракте](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) является важной для Индексаторов, позволяя им задавать параметры, определяющие их взаимодействие с Делегаторами, что влияет на распределение вознаграждений и способность к делегированию.

-### How to set delegation parameters
+### Как настроить параметры делегирования

-To set the delegation parameters using Graph Explorer interface, follow these steps:
+Чтобы установить параметры делегирования с помощью интерфейса Graph Explorer, выполните следующие шаги:

-1. Navigate to [Graph Explorer](https://thegraph.com/explorer/).
-2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One.
-3. Connect the wallet you have as a signer.
-4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut.
Adjust these values as necessary to attain the desired effective cut percentage. -5. Submit the transaction to the network. +1. Перейдите в [Graph Explorer](https://thegraph.com/explorer/). +2. Подключите свой кошелек. Выберите мультиподпись (например, Gnosis Safe), затем выберите основную сеть. Примечание: вам нужно будет повторить этот процесс для сети Arbitrum One. +3. Подключите кошелек, который у Вас есть в качестве подписанта. +4. Перейдите в раздел 'Settings' и выберите 'Delegation Parameters'. Эти параметры должны быть настроены для достижения эффективного распределения в желаемом диапазоне. После ввода значений в предоставленные поля ввода интерфейс автоматически рассчитает эффективное распределение. При необходимости отрегулируйте эти значения, чтобы достичь желаемого процента эффективного распределения. +5. Отправьте транзакцию в сеть. -> Note: This transaction will need to be confirmed by the multisig wallet signers. +> Примечание: эта транзакция должна быть подтверждена подписантами кошелька с мультиподписью. -### The life of an allocation +### Срок существования аллокации -After being created by an Indexer a healthy allocation goes through two states. +После создания Индексатором работоспособная аллокация проходит через два состояния. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Активный** - как только распределение создается в блокчейне ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), оно считается **активным**. 
Часть собственного и/или делегированного стейка Индексатора выделяется для развертывания субграфа, что позволяет ему получать вознаграждения за индексирование и обслуживать запросы для этого развертывания субграфа. Агент Индексатора управляет созданием распределений в соответствии с правилами Индексатора.

-- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
+- **Закрытый** - Индексатор может закрыть распределение, как только пройдет 1 эпоха ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)), или его агент Индексатора автоматически закроет распределение после **maxAllocationEpochs** (в настоящее время 28 дней). Когда распределение закрыто с действительным доказательством индексирования (POI), вознаграждения за индексирование распределяются между Индексатором и его Делегаторами ([узнать больше](/indexing/overview/#how-are-indexing-rewards-distributed)).

-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Индексаторам рекомендуется использовать функциональность оффчейн-синхронизации для синхронизации развертываний субграфов до головного блока чейна (chainhead) перед созданием распределения ончейн.
Эта функция особенно полезна для субграфов, синхронизация которых может занять более 28 эпох или которые могут недетерминированно завершиться сбоем.

diff --git a/website/src/pages/ru/indexing/supported-network-requirements.mdx b/website/src/pages/ru/indexing/supported-network-requirements.mdx
index f1afe7cb7850..d0d52e5ac9fb 100644
--- a/website/src/pages/ru/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/ru/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@ title: Требования к поддерживаемым сетям
---

-| Сеть | Гайды | Системные требования | Награды за индексирование |
-| --- | --- | --- | :-: |
-| Арбитрум | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Гайд по Docker](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ ядраа CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Avalanche | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Base | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ ядер CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_последнее обновление 14 мая 2024_ | ✅ | -| Binance | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 ядер / 16 потоков CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_последнее обновление 22 июня 2024_ | ✅ | -| Celo | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Ethereum | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Более высокая тактовая частота по сравнению с количеством ядер
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_последнее обновление в августе 2023_ | ✅ | -| Fantom | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Gnosis | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 ядер / 12 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Linea | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ ядра CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_последнее обновление 2 апреля 2024_ | ✅ | -| Optimism | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Polygon | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 ядра CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | -| Scroll | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Гайд по Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 ядра / 8 потоков CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_последнее обновление 3 апреля 2024_ | ✅ | +| Сеть | Гайды | Системные требования | Награды за индексирование | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------------------: | +| Арбитрум | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Гайд по Docker](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ ядра CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Avalanche | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Base | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 ядер / 16 потоков CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_последнее обновление 22 июня 2024_ | ✅ |
+| Celo | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Ethereum | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Более высокая тактовая частота по сравнению с количеством ядер
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_последнее обновление в августе 2023_ | ✅ | +| Fantom | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Gnosis | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 ядер / 12 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Linea | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ ядра CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_последнее обновление 2 апреля 2024_ | ✅ | +| Optimism | [Гайд по Erigon Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[Гайд по GETH Baremetal](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[Гайд по GETH Docker](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 ядра / 8 потоков CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Polygon | [Гайд по Docker](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 ядра CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_последнее обновление в августе 2023_ | ✅ | +| Scroll | [Гайд по Baremetal](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Гайд по Docker](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 ядра / 8 потоков CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_последнее обновление 3 апреля 2024_ | ✅ | diff --git a/website/src/pages/ru/indexing/tap.mdx b/website/src/pages/ru/indexing/tap.mdx index fe3b7d982be4..1007b2eaaa9e 100644 --- a/website/src/pages/ru/indexing/tap.mdx +++ b/website/src/pages/ru/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: Руководство по миграции TAP +title: GraphTally Guide --- -Узнайте о новой платежной системе The Graph, **Timeline Aggregation Protocol, TAP**. Эта система обеспечивает быстрые и эффективные микротранзакции с минимальным уровнем доверия. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Обзор -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) — это полная замена существующей в настоящее время платежной системы Scalar. Она предоставляет следующие ключевые функции: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Эффективно обрабатывает микроплатежи. - Добавляет уровень консолидации к транзакциям и затратам ончейна. - Позволяет Индексаторам управлять поступлениями и платежами, гарантируя оплату запросов. - Обеспечивает децентрализованные, не требующие доверия шлюзы и повышает производительность `indexer-service` для нескольких отправителей. -## Специфические особенности +### Специфические особенности -TAP позволяет отправителю совершать несколько платежей получателю, **TAP Receipts**, который объединяет эти платежи в один платеж, **Receipt Aggregate Voucher**, также известный как **RAV**. Затем этот агрегированный платеж можно проверить в блокчейне, что сокращает количество транзакций и упрощает процесс оплаты. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. Для каждого запроса шлюз отправит вам `signed receipt`, который будет сохранен в Вашей базе данных. Затем эти запросы будут агрегированы `tap-agent` через запрос. После этого Вы получите RAV. Вы можете обновить RAV, отправив ему новые квитанции, что приведет к генерации нового RAV с увеличенным значением. @@ -45,28 +45,28 @@ TAP позволяет отправителю совершать несколь ### Контракты -| Контракт | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | -| ------------------- | -------------------------------------------- | -------------------------------------------- | -| TAP-верификатор | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | -| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | -| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | +| Контракт | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | +| ---------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP-верификатор | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | ### Шлюз -| Компонент | Edge и Node Mainnet (Arbitrum Mainnet) | Edge и Node Testnet (Arbitrum Sepolia) | -| ----------- | --------------------------------------------- | --------------------------------------------- | -| Отправитель | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | -| Подписанты | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | 
`0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | -| Агрегатор | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | +| Компонент | Edge и Node Mainnet (Arbitrum Mainnet) | Edge и Node Testnet (Arbitrum Sepolia) | +| --------------- | --------------------------------------------- | --------------------------------------------- | +| Отправитель | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Подписанты | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Агрегатор | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Требования +### Предварительные требования -Помимо типичных требований для запуска индексатора Вам понадобится конечная точка `tap-escrow-subgraph` для запроса обновлений TAP. Вы можете использовать The Graph Network для запроса или размещения себя на своей `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Субграф Graph TAP Arbitrum Sepolia (для тестовой сети The Graph)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Субграф Graph TAP Arbitrum One (для основной сети The Graph)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Примечание: `indexer-agent` в настоящее время не обрабатывает индексирование этого субграфа, как это происходит при развертывании сетевого субграфа. В итоге Вам придется индексировать его вручную. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.

## Руководство по миграции

@@ -79,7 +79,7 @@ TAP позволяет отправителю совершать несколь

1. **Indexer Agent**

   - Следуйте [этому же процессу](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-compents).
-   - Укажите новый аргумент `--tap-subgraph-endpoint`, чтобы активировать новые кодовые пути TAP и разрешить выкуп TAP RAV.
+   - Pass the new argument `--tap-subgraph-endpoint` to activate the new GraphTally code paths and enable redeeming of RAVs.

2. **Indexer Service**

@@ -99,14 +99,14 @@ TAP позволяет отправителю совершать несколь

Для минимальной конфигурации используйте следующий шаблон:

```bash
-# Вам придется изменить *все* приведенные ниже значения, чтобы они соответствовали вашим настройкам.
+# You will have to change *all* the values below to match your setup.
#
-# Некоторые из приведенных ниже конфигураций представляют собой глобальные значения graph network, которые Вы можете найти здесь:
+# Some of the config below are global graph network values, which you can find here:
#
#
-# Совет профессионала: если Вам нужно загрузить некоторые значения из среды в эту конфигурацию, Вы
-# можете перезаписать их переменными среды. Например, следующее можно заменить
-# на [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`:
+# Pro tip: if you need to load some values from the environment into this config, you
+# can overwrite with environment variables.
For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: # # [database] # postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" @@ -116,55 +116,55 @@ indexer_address = "0x1111111111111111111111111111111111111111" operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" [database] -# URL-адрес базы данных Postgres, используемой для компонентов индексатора. Та же база данных, -# которая используется `indexer-agent`. Ожидается, что `indexer-agent` создаст -# необходимые таблицы. +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. postgres_url = "postgres://postgres@postgres:5432/postgres" [graph_node] -# URL-адрес конечной точки запроса Вашей graph-node +# URL to your graph-node's query endpoint query_url = "" -# URL-адрес конечной точки статуса Вашей graph-node +# URL to your graph-node's status endpoint status_url = "" [subgraphs.network] -# URL-адрес запроса для субграфа Graph Network. +# Query URL for the Graph Network Subgraph. query_url = "" -# Необязательно, развертывание нужно искать в локальной `graph-node`, если оно локально проиндексировано. -# Рекомендуется индексировать субграф локально. -# ПРИМЕЧАНИЕ: используйте только `query_url` или `deployment_id` +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the Subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# URL-адрес запроса для субграфа Escrow. +# Query URL for the Escrow Subgraph. query_url = "" -# Необязательно, развертывание нужно искать в локальной `graph-node`, если оно локально проиндексировано. -# Рекомендуется индексировать субграф локально. 
-# ПРИМЕЧАНИЕ: используйте только `query_url` или `deployment_id`
+# Optional, deployment to look for in the local `graph-node`, if locally indexed.
+# Locally indexing the Subgraph is recommended.
+# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[blockchain]
-# Идентификатор чейна сети, в которой работает the graph network работает на
+# The chain ID of the network that the graph network is running on
chain_id = 1337
-# Контрактный адрес верификатора receipt aggregate voucher (RAV) TAP
+# Contract address of TAP's receipt aggregate voucher (RAV) verifier.
receipts_verifier_address = "0x2222222222222222222222222222222222222222"

########################################
-# Специальные настройки для tap-agent #
+# Specific configurations to tap-agent #
########################################
[tap]
-# Это сумма комиссий, которой вы готовы рискнуть в любой момент времени. Например,
-# если отправитель не совершает поставку RAV достаточно длительное время, и комиссии превышают это значение
-# суммарно, служба-индексатор перестанет принимать запросы от отправителя
-# до тех пор, пока комиссии не будут суммированы.
-# ПРИМЕЧАНИЕ: Используйте строки для десятичных значений, чтобы избежать ошибок округления
-# например:
+# This is the amount of fees you are willing to risk at any given time. For example,
+# if the sender stops supplying RAVs for long enough and the fees exceed this
+# amount, the indexer-service will stop accepting queries from the sender
+# until the fees are aggregated.
+# NOTE: Use strings for decimal values to prevent rounding errors
+# e.g.:
+# max_amount_willing_to_lose_grt = "0.1"
max_amount_willing_to_lose_grt = 20

[tap.sender_aggregator_endpoints]
-# Ключ-значение всех отправителей и их конечных точек агрегатора
-# Ниже приведен пример шлюза тестовой сети E&N.
+# Key-Value of all senders and their aggregator endpoints
+# This one below is for the E&N testnet gateway for example.
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" ``` diff --git a/website/src/pages/ru/indexing/tooling/graph-node.mdx b/website/src/pages/ru/indexing/tooling/graph-node.mdx index 43e98a3aad17..7ceceba8e9ec 100644 --- a/website/src/pages/ru/indexing/tooling/graph-node.mdx +++ b/website/src/pages/ru/indexing/tooling/graph-node.mdx @@ -2,39 +2,39 @@ title: Graph Node --- -Graph Node — это компонент, который индексирует подграфы и делает полученные данные доступными для запроса через GraphQL API. Таким образом, он занимает центральное место в стеке индексатора, а правильная работа Graph Node имеет решающее значение для успешного запуска индексатора. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. -This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). +Здесь представлен контекстуальный обзор Graph Node и некоторые более продвинутые параметры, доступные индексаторам. Подробную документацию и инструкции можно найти в [репозитории Graph Node](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. -Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. 
This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). +Graph Node (и весь стек Индексаторов) можно запускать на «голом железе» или в облачной среде. Эта гибкость центрального компонента индексирования имеет решающее значение для надежности The Graph Protocol. Точно так же Graph Node может быть [создана из исходного кода](https://github.com/graphprotocol/graph-node), или Индексаторы могут использовать один из [предусмотренных Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### База данных PostgreSQL -Основное хранилище для Graph Node, это место, где хранятся данные подграфа, а также метаданные о подграфах и сетевые данные, не зависящие от подграфа, такие как кэш блоков и кэш eth_call. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Клиенты сети Для индексации сети Graph Node требуется доступ к сетевому клиенту через EVM-совместимый JSON-RPC API. Этот RPC может подключаться к одному клиенту или может представлять собой более сложную настройку, которая распределяет нагрузку между несколькими. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).

-**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
+**Network Firehoses**. Firehose — это служба gRPC, предоставляющая упорядоченный, но учитывающий форк поток блоков, разработанная разработчиками ядра The Graph для лучшей поддержки крупномасштабного высокопроизводительного индексирования. В настоящее время это не является обязательным требованием для Индексаторов, но Индексаторам рекомендуется ознакомиться с технологией до начала полной поддержки сети. Подробнее о Firehose можно узнать [здесь](https://firehose.streamingfast.io/).

### Ноды IPFS

-Метаданные о развертывании подграфа хранятся в сети IPFS. The Graph Node в первую очередь обращается к ноде IPFS во время развертывания подграфа, чтобы получить манифест подграфа и все связанные файлы. Сетевым индексаторам не требуется запускать собственную ноду IPFS. Нода IPFS для сети находиться по адресу https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files.
Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Сервер метрик Prometheus Чтобы включить мониторинг и отчетность, Graph Node может дополнительно регистрировать метрики на сервере метрик Prometheus. -### Getting started from source +### Начало работы с исходным кодом -#### Install prerequisites +#### Установка необходимых компонентов - **Rust** @@ -42,15 +42,15 @@ While some subgraphs may just require a full node, some may have indexing featur - **IPFS** -- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed. +- **Дополнительные требования для пользователей Ubuntu**. Для запуска Graph Node на Ubuntu может потребоваться несколько дополнительных пакетов. ```sh sudo apt-get install -y clang libpq-dev libssl-dev pkg-config ``` -#### Setup +#### Настройка -1. Start a PostgreSQL database server +1. Запустите сервер базы данных PostgreSQL ```sh initdb -D .postgres @@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start createdb graph-node ``` -2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build` +2. Клонируйте репозиторий [Graph Node](https://github.com/graphprotocol/graph-node) и соберите исходный код, запустив `cargo build` -3. Now that all the dependencies are setup, start the Graph Node: +3. Теперь, когда все зависимости настроены, запустите Graph Node: ```sh cargo run -p graph-node --release -- \ @@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \ ### Начало работы с Kubernetes -A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s). +Полный пример конфигурации Kubernetes можно найти в [репозитории индексатора](https://github.com/graphprotocol/indexer/tree/main/k8s). 
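Независимо от способа развертывания, полная команда запуска Graph Node (шаг 3 выше) обычно выглядит примерно так. Это лишь набросок: имена флагов соответствуют стандартным параметрам graph-node, а значения условны и должны быть заменены на Ваши:

```sh
# Запуск Graph Node, собранной из исходного кода (значения условные):
cargo run -p graph-node --release -- \
  --postgres-url postgresql://<USERNAME>:<PASSWORD>@localhost:5432/graph-node \
  --ethereum-rpc mainnet:http://localhost:8545 \
  --ipfs 127.0.0.1:5001
```

Те же параметры передаются контейнеру при использовании официальных Docker-образов.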
### Порты Во время работы Graph Node предоставляет следующие порты: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Порт | Назначение | Маршруты | Аргумент CLI | Переменная среды | +| ---- | -------------------------------------------------- | ---------------------------------------------- | ------------------ | ---------------- | +| 8000 | GraphQL HTTP-сервер
(для запросов к Субграфу) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(для подписок на Субграф) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(для управления развертываниями) | / | \--admin-port | - |
+| 8030 | API статуса индексирования Субграфа | /graphql | \--index-node-port | - |
+| 8040 | Метрики Prometheus | /metrics | \--metrics-port | - |

-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+> **Важно**. Будьте осторожны, открывая порты для общего доступа — **порты администрирования** должны оставаться закрытыми. Это касается конечных точек Graph Node JSON-RPC.

## Расширенная настройка Graph Node

-На простейшем уровне Graph Node может работать с одним экземпляром Graph Node, одной базой данных PostgreSQL, нодой IPFS и сетевыми клиентами в соответствии с требованиями субграфов, подлежащих индексированию.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.

-This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
+Эту настройку можно масштабировать горизонтально, добавляя несколько Graph Node и несколько баз данных для поддержки этих Graph Node. Опытные пользователи могут воспользоваться некоторыми возможностями горизонтального масштабирования Graph Node, а также некоторыми более продвинутыми параметрами конфигурации через файл `config.toml` и переменные среды Graph Node.

### `config.toml`

-A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch.
+Файл конфигурации [TOML](https://toml.io/en/) можно использовать для установки более сложных конфигураций, чем те, которые представлены в интерфейсе командной строки. Местоположение файла передается с помощью параметра командной строки --config.

> При использовании файла конфигурации невозможно использовать параметры --postgres-url, --postgres-secondary-hosts и --postgres-host-weights.

-A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option:
+Можно предоставить минимальный файл `config.toml`; следующий файл эквивалентен использованию опции командной строки --postgres-url:

```toml
[store]
@@ -110,17 +110,17 @@ connection="<.. postgres-url argument ..>"
indexers = [ "<.. list of all indexing nodes ..>" ]
```

-Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md).
+Полную документацию по `config.toml` можно найти в [документации Graph Node](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md).

#### Множественные Graph Node

-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g.
in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Обратите внимание, что несколько Graph Nodes могут быть настроены для использования одной и той же базы данных, которая сама по себе может масштабироваться по горизонтали с помощью сегментирования. #### Правила развертывания -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. Пример настройки правил развертывания: @@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ] match = { network = [ "xdai", "poa-core" ] } indexers = [ "index_node_other_0" ] [[deployment.rule]] -# There's no 'match', so any subgraph matches +# There's no 'match', so any Subgraph matches shards = [ "sharda", "shardb" ] indexers = [ "index_node_community_0", @@ -150,7 +150,7 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). 
+Подробную информацию о правилах развертывания можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). #### Выделенные ноды запросов @@ -167,19 +167,19 @@ query = "" В большинстве случаев одной базы данных Postgres достаточно для поддержки отдельной Graph Node. Когда отдельная Graph Node перерастает одну базу данных Postgres, можно разделить хранилище данных Graph Node между несколькими базами данных Postgres. Все базы данных вместе образуют хранилище отдельной Graph Node. Каждая отдельная база данных называется шардом (сегментом). -Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. +Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed. Сегментирование становится полезным, когда Ваша существующая база данных не может справиться с нагрузкой, которую на нее возлагает Graph Node, и когда больше невозможно увеличить размер базы данных. -> Обычно лучше сделать одну базу данных максимально большой, прежде чем начинать с шардов (сегментов). 
Единственным исключением является случай, когда трафик запросов распределяется между подграфами очень неравномерно; в таких ситуациях может существенно помочь, если подграфы большого объема хранятся в одном сегменте, а все остальное — в другом, потому что такая настройка повышает вероятность того, что данные для подграфов большого объема останутся во внутреннем кеше базы данных и не будут заменяться данными, которые не очень нужны, из подграфов с небольшим объемом.
+> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.

Что касается настройки соединений, начните с max_connections в postgresql.conf, установленного на 400 (или, может быть, даже на 200), и посмотрите на метрики store_connection_wait_time_ms и store_connection_checkout_count Prometheus. Длительное время ожидания (все, что превышает 5 мс) является признаком того, что доступных соединений слишком мало; большое время ожидания также будет вызвано тем, что база данных очень загружена (например, высокая загрузка ЦП). Однако, если в остальном база данных кажется стабильной, большое время ожидания указывает на необходимость увеличения количества подключений. В конфигурации количество подключений, которое может использовать каждая отдельная Graph Node, является верхним пределом, и Graph Node не будет держать соединения открытыми, если они ей не нужны.

-Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases).
+Подробную информацию о настройке хранилища можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases).

#### Прием выделенного блока

-If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion:
+Если настроено несколько нод, необходимо выделить одну, которая будет отвечать за прием новых блоков, чтобы все сконфигурированные ноды индекса не опрашивали заголовок чейна. Это настраивается в рамках пространства имен `chains`, в котором указывается `node_id`, используемый для приема блоков:

```toml
[chains]
@@ -188,13 +188,13 @@ ingestor = "block_ingestor_node"

#### Поддержка нескольких сетей

-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:

- Несколько сетей
- Несколько провайдеров на сеть (это может позволить разделить нагрузку между провайдерами, а также может позволить настроить полные ноды, а также архивные ноды, при этом Graph Node предпочитает более дешевых поставщиков, если позволяет данная рабочая нагрузка).
- Дополнительные сведения о провайдере, такие как функции, аутентификация и тип провайдера (для экспериментальной поддержки Firehose)

-The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored.
The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider.
+Раздел `[chains]` управляет провайдерами Ethereum, к которым подключается graph-node, и где хранятся блоки и другие метаданные для каждого чейна. В следующем примере настраиваются два чейна, mainnet и kovan, где блоки для mainnet хранятся в сегменте vip, а блоки для kovan — в основном сегменте. Чейн mainnet может использовать двух разных провайдеров, тогда как у kovan есть только один провайдер.

```toml
[chains]
@@ -210,50 +210,50 @@ shard = "primary"
provider = [ { label = "kovan", url = "http://..", features = [] } ]
```

-Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers).
+Подробную информацию о настройке провайдера можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers).

### Переменные среды

-Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md).
+Graph Node поддерживает ряд переменных среды, которые могут включать функции или изменять поведение Graph Node. Они описаны [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md).

### Непрерывное развертывание

Пользователи, использующие масштабируемую настройку индексирования с расширенной конфигурацией, могут получить преимущество от управления своими узлами Graph с помощью Kubernetes.
-- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) -- [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. +- В репозитории индексатора имеется [пример ссылки на Kubernetes](https://github.com/graphprotocol/indexer/tree/main/k8s) +- [Launchpad](https://docs.graphops.xyz/launchpad/intro) – это набор инструментов для запуска Индексатора Graph Protocol в Kubernetes, поддерживаемый GraphOps. Он предоставляет набор диаграмм Helm и интерфейс командной строки для управления развертыванием Graph Node. ### Управление Graph Node -При наличии работающей Graph Node (или Graph Nodes!), задача состоит в том, чтобы управлять развернутыми подграфами на этих нодах. Graph Node предлагает ряд инструментов для управления подграфами. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Логирование (ведение журналов) -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). 
+Кроме того, установка для `GRAPH_LOG_QUERY_TIMING` значения `gql` предоставляет дополнительные сведения о том, как выполняются запросы GraphQL (хотя это приводит к созданию большого объема логов).

-#### Monitoring & alerting
+#### Мониторинг и оповещения

Graph Node предоставляет метрики через конечную точку Prometheus на порту 8040 по умолчанию. Затем можно использовать Grafana для визуализации этих метрик.

-The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml).
+В репозитории индексатора имеется [пример конфигурации Grafana](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml).

#### Graphman

-`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks.
+`graphman` – это инструмент обслуживания Graph Node, помогающий диагностировать и решать различные повседневные и исключительные задачи.

-The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file.
+Команда graphman включена в официальные контейнеры, и Вы можете выполнить docker exec в контейнере graph-node, чтобы запустить ее. Для этого требуется файл `config.toml`.

-Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`
+Полная документация по командам `graphman` доступна в репозитории Graph Node. См.
[/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) в Graph Node `/docs`

-### Работа с подграфами
+### Working with Subgraphs

#### API статуса индексирования

-Доступный по умолчанию на порту 8030/graphql, API статуса индексирования предоставляет ряд методов для проверки статуса индексирования для различных подграфов, проверки доказательств индексирования, проверки функций подграфов и многого другого.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.

-The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
+Полная схема доступна [здесь](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).

#### Производительность индексирования

@@ -263,12 +263,12 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/

- Обработка событий по порядку с помощью соответствующих обработчиков (это может включать вызов чейна для состояния и выборку данных из хранилища)
- Запись полученных данных в хранилище

-Эти этапы конвейерные (т.е. могут выполняться параллельно), но они зависят друг от друга. Там, где подграфы индексируются медленно, основная причина будет зависеть от конкретного подграфа.
+These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
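To see where a slow Subgraph stands while diagnosing this, the indexing status API described above can be queried on port 8030. The query below is a sketch; the exact field names are assumptions and should be checked against the linked schema:

```graphql
# Hypothetical status query against graph-node's index-node endpoint (port 8030)
{
  indexingStatuses {
    subgraph
    health
    synced
    chains {
      network
      latestBlock { number }
      chainHeadBlock { number }
    }
  }
}
```

Comparing `latestBlock` with `chainHeadBlock` shows how far behind the chain head a given Subgraph is.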
Распространенные причины низкой скорости индексации:
-- Time taken to find relevant events from the chain (call handlers in particular can be slow, given the reliance on `trace_filter`)
-- Making large numbers of `eth_calls` as part of handlers
+- Время, затрачиваемое на поиск соответствующих событий в чейне (в частности, обработчики вызовов могут работать медленно, учитывая зависимость от `trace_filter`)
+- Выполнение большого количества `eth_calls` в составе обработчиков
- Большое количество операций с хранилищем во время выполнения
- Большой объем данных для сохранения в хранилище
- Большое количество событий для обработки
@@ -276,35 +276,35 @@ The full schema is available [here](https://github.com/graphprotocol/graph-node/
- Сам провайдер отстает от головного чейна
- Задержка получения новых квитанций от провайдера в головном чейне
-Метрики индексации подграфов могут помочь диагностировать основную причину замедления индексации. В некоторых случаях проблема связана с самим подграфом, но в других случаях усовершенствованные сетевые провайдеры, снижение конкуренции за базу данных и другие улучшения конфигурации могут заметно повысить производительность индексирования.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
-#### Повреждённые подграфы
+#### Failed Subgraphs
-Во время индексации подграфов может произойти сбой, если они столкнутся с неожиданными данными, какой-то компонент не будет работать должным образом или если в обработчиках событий или конфигурации появится ошибка. Есть два основных типа отказа:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration.
There are two general types of failure:
- Детерминированные сбои: это сбои, которые не будут устранены при повторных попытках
- Недетерминированные сбои: они могут быть связаны с проблемами с провайдером или какой-либо неожиданной ошибкой Graph Node. Когда происходит недетерминированный сбой, Graph Node повторяет попытки обработчиков сбоя, со временем отказываясь от них.
-В некоторых случаях сбой может быть устранен индексатором (например, если ошибка вызвана отсутствием нужного поставщика, добавление необходимого поставщика позволит продолжить индексирование). Однако в других случаях требуется изменить код подграфа.
+In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
#### Кэш блоков и вызовов
-Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
Если есть подозрение на несогласованность кэша блоков, например, если для события отсутствует квитанция транзакции:
-1. `graphman chain list` to find the chain name.
-2. `graphman chain check-blocks <chain> by-number <number>` will check if the cached block matches the provider, and deletes the block from the cache if it doesn’t.
-   1. If there is a difference, it may be safer to truncate the whole cache with `graphman chain truncate <chain>`.
+1. `graphman chain list`, чтобы найти название чейна.
+2. `graphman chain check-blocks <chain> by-number <number>` проверит, соответствует ли кэшированный блок провайдеру, и удалит блок из кэша, если это не так.
+   1. Если есть разница, может быть безопаснее усечь весь кэш с помощью `graphman chain truncate <chain>`.
   2. Если блок соответствует провайдеру, то проблема может быть отлажена непосредственно провайдером.
#### Проблемы и ошибки при запросах
-После индексации подграфа индексаторы могут рассчитывать на обслуживание запросов через выделенную конечную точку запроса подграфа.
Если индексатор планирует обслуживать значительный объем запросов, рекомендуется выделенная нода запросов, а в случае очень больших объемов запросов индексаторы могут настроить сегменты копий так, чтобы запросы не влияли на процесс индексирования.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
Однако, даже с выделенной нодой запросов и копиями выполнение некоторых запросов может занять много времени, а в некоторых случаях увеличить использование памяти и негативно повлиять на время выполнения запросов другими пользователями.
@@ -312,15 +312,15 @@ However, in some instances, if an Ethereum node has provided incorrect data for
##### Кэширование запросов
-Graph Node caches GraphQL queries by default, which can significantly reduce database load. This can be further configured with the `GRAPH_QUERY_CACHE_BLOCKS` and `GRAPH_QUERY_CACHE_MAX_MEM` settings - read more [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching).
+Graph Node по умолчанию кэширует запросы GraphQL, что может значительно снизить нагрузку на базу данных. Это можно дополнительно настроить с помощью параметров `GRAPH_QUERY_CACHE_BLOCKS` и `GRAPH_QUERY_CACHE_MAX_MEM` — подробнее читайте [здесь](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching).
##### Анализ запросов
-Проблемные запросы чаще всего выявляются одним из двух способов. В некоторых случаях пользователи сами сообщают, что данный запрос выполняется медленно. В этом случае задача состоит в том, чтобы диагностировать причину замедленности — является ли это общей проблемой или специфичной для этого подграфа или запроса.
А затем, конечно же, решить ее, если это возможно. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. В других случаях триггером может быть высокий уровень использования памяти на ноде запроса, и в этом случае сначала нужно определить запрос, вызвавший проблему. -Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and summarize Graph Node's query logs. `GRAPH_LOG_QUERY_TIMING` can also be enabled to help identify and debug slow queries. +Индексаторы могут использовать [qlog](https://github.com/graphprotocol/qlog/) для обработки и суммирования логов запросов Graph Node. Также можно включить `GRAPH_LOG_QUERY_TIMING` для выявления и отладки медленных запросов. При медленном запросе у индексаторов есть несколько вариантов. Разумеется, они могут изменить свою модель затрат, чтобы значительно увеличить стоимость отправки проблемного запроса. Это может привести к снижению частоты этого запроса. Однако это часто не устраняет основной причины проблемы. @@ -328,18 +328,18 @@ Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and s Таблицы базы данных, в которых хранятся объекты, как правило, бывают двух видов: «подобные транзакциям», когда объекты, однажды созданные, никогда не обновляются, т. е. они хранят что-то вроде списка финансовых транзакций и «подобные учетной записи», где объекты обновляются очень часто, т. е. они хранят что-то вроде финансовых счетов, которые изменяются каждый раз при записи транзакции. Таблицы, подобные учетным записям, характеризуются тем, что они содержат большое количество версий объектов, но относительно мало отдельных объектов. 
Часто в таких таблицах количество отдельных объектов составляет 1% от общего количества строк (версий объектов)
-For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
+Для таблиц, подобных учетным записям, `graph-node` может генерировать запросы, в которых используются детали того, как Postgres в конечном итоге сохраняет данные с такой высокой скоростью изменения, а именно, что все версии последних блоков находятся в небольшом подразделе общего хранилища для такой таблицы.
-The command `graphman stats show <sgdNNN>` shows, for each entity type/table in a deployment, how many distinct entities, and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+Команда `graphman stats show <sgdNNN>` показывает для каждого типа/таблицы объектов в развертывании, сколько различных объектов и сколько версий объектов содержит каждая таблица. Эти данные основаны на внутренних оценках Postgres и, следовательно, неточны и могут отличаться на порядок. `-1` в столбце `entities` означает, что Postgres считает, что все строки содержат отдельный объект.
-In general, tables where the number of distinct entities are less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show <sgdNNN> <table>` will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
+В общем, таблицы, в которых количество отдельных объектов составляет менее 1% от общего количества версий строк/объектов, являются хорошими кандидатами на оптимизацию по аналогии с учетными записями. Если выходные данные `graphman stats show` указывают на то, что эта оптимизация может принести пользу таблице, запуск `graphman stats show <sgdNNN> <table>` произведёт полный расчет таблицы. Этот процесс может быть медленным, но обеспечит точное соотношение отдельных объектов к общему количеству версий объектов.
-Once a table has been determined to be account-like, running `graphman stats account-like <sgdNNN>.<table>` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear <sgdNNN>.<table>`. It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
+Как только таблица будет определена как подобная учетной записи, запуск `graphman stats account-like <sgdNNN>.<table>` включит оптимизацию, подобную учетной записи, для запросов к этой таблице. Оптимизацию можно снова отключить с помощью `graphman stats account-like --clear <sgdNNN>.<table>`. Нодам запроса требуется до 5 минут, чтобы заметить, что оптимизация включена или выключена. После включения оптимизации необходимо убедиться, что изменение фактически не приводит к замедлению запросов к этой таблице. Если Вы настроили Grafana для мониторинга Postgres, медленные запросы будут отображаться в `pg_stat_activity` в больших количествах, выполняясь по несколько секунд. В этом случае оптимизацию необходимо снова отключить.
-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
-#### Удаление подграфов
+#### Removing Subgraphs
> Это новый функционал, который будет доступен в Graph Node 0.29.x
-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
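The "distinct entities under 1% of entity versions" rule of thumb above is easy to check mechanically. The sketch below is illustrative only — the function name and sample numbers are mine, not part of `graphman`; the inputs correspond to the distinct-entity and version counts that `graphman stats show` reports:

```python
def account_like_candidate(distinct_entities: int, entity_versions: int) -> bool:
    """True when distinct entities are under 1% of all rows (entity versions),
    the rule of thumb above for enabling the account-like optimization."""
    if entity_versions <= 0:
        return False
    return distinct_entities / entity_versions < 0.01

# A Uniswap-style `pair` table: few pairs, very many versions -> candidate.
assert account_like_candidate(5_000, 2_000_000) is True
# A transaction-like table: almost every row is a distinct entity -> not one.
assert account_like_candidate(900_000, 1_000_000) is False
```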
diff --git a/website/src/pages/ru/indexing/tooling/graphcast.mdx b/website/src/pages/ru/indexing/tooling/graphcast.mdx index a3c391cf3e4f..2c5c4818950f 100644 --- a/website/src/pages/ru/indexing/tooling/graphcast.mdx +++ b/website/src/pages/ru/indexing/tooling/graphcast.mdx @@ -2,7 +2,7 @@ title: Graphcast --- -## Introduction +## Введение Is there something you'd like to learn from or share with your fellow Indexers in an automated manner, but it's too much hassle or costs too much gas? @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Перекрестная проверка целостности данных субграфа в режиме реального времени (Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. 
- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. ### Узнать больше diff --git a/website/src/pages/ru/resources/_meta-titles.json b/website/src/pages/ru/resources/_meta-titles.json index f5971e95a8f6..6e14e6afa310 100644 --- a/website/src/pages/ru/resources/_meta-titles.json +++ b/website/src/pages/ru/resources/_meta-titles.json @@ -1,4 +1,4 @@ { - "roles": "Additional Roles", - "migration-guides": "Migration Guides" + "roles": "Дополнительные роли", + "migration-guides": "Руководства по миграции" } diff --git a/website/src/pages/ru/resources/benefits.mdx b/website/src/pages/ru/resources/benefits.mdx index df6eeac7c628..cf4f127c9a1a 100644 --- a/website/src/pages/ru/resources/benefits.mdx +++ b/website/src/pages/ru/resources/benefits.mdx @@ -1,11 +1,11 @@ --- -title: The Graph vs. Self Hosting +title: The Graph против самостоятельного хостинга socialImage: https://thegraph.com/docs/img/seo/benefits.jpg --- Децентрализованная сеть The Graph была спроектирована и усовершенствована для создания надежной системы индексации и запросов — и с каждым днем она становится лучше благодаря тысячам участников по всему миру. -The benefits of this decentralized protocol cannot be replicated by running a `graph-node` locally. The Graph Network is more reliable, more efficient, and less expensive. +Преимущества этого децентрализованного протокола невозможно воспроизвести, запустив `graph-node` локально. The Graph Network более надежен, эффективен и экономичен. Вот анализ: @@ -19,7 +19,7 @@ The benefits of this decentralized protocol cannot be replicated by running a `g ## Преимущества -### Lower & more Flexible Cost Structure +### Более низкая и гибкая структура затрат No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or credit card. 
@@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | -| :-: | :-: | :-: | -| Ежемесячная стоимость сервера\* | $350 в месяц | $0 | -| Стоимость запроса | $0+ | $0 per month | -| Время разработки | $400 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | -| Запросы в месяц | Ограничен возможностями инфраструктуры | 100,000 (Free Plan) | -| Стоимость одного запроса | $0 | $0 | -| Infrastructure | Централизованная | Децентрализованная | -| Географическая избыточность | $750+ за каждую дополнительную ноду | Включено | -| Время безотказной работы | Варьируется | 99.9%+ | -| Общие ежемесячные расходы | $750+ | $0 | +| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | +| :-----------------------------: | :-------------------------------------: | :-----------------------------------------------------------: | +| Ежемесячная стоимость сервера\* | $350 в месяц | $0 | +| Стоимость запроса | $0+ | $0 per month | +| Время разработки | $400 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | +| Запросы в месяц | Ограничен возможностями инфраструктуры | 100,000 (Free Plan) | +| Стоимость одного запроса | $0 | $0 | +| Инфраструктура | Централизованная | Децентрализованная | +| Географическая избыточность | $750+ за каждую дополнительную ноду | Включено | +| Время безотказной работы | Варьируется | 99.9%+ | +| Общие ежемесячные расходы | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | -| :-: | :-: | :-: | -| Ежемесячная стоимость сервера\* | $350 в месяц | $0 | -| Стоимость запроса | $500 в месяц | $120 per month | -| Время разработки | $800 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | -| Запросы в месяц | Ограничен возможностями инфраструктуры | ~3,000,000 
| -| Стоимость одного запроса | $0 | $0.00004 | -| Infrastructure | Централизованная | Децентрализованная | -| Инженерные расходы | $200 в час | Включено | -| Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | -| Время безотказной работы | Варьируется | 99.9%+ | -| Общие ежемесячные расходы | $1,650+ | $120 | +| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | +| :-----------------------------: | :-----------------------------------------------------------: | :-----------------------------------------------------------: | +| Ежемесячная стоимость сервера\* | $350 в месяц | $0 | +| Стоимость запроса | $500 в месяц | $120 per month | +| Время разработки | $800 в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | +| Запросы в месяц | Ограничен возможностями инфраструктуры | ~3,000,000 | +| Стоимость одного запроса | $0 | $0.00004 | +| Инфраструктура | Централизованная | Децентрализованная | +| Инженерные расходы | $200 в час | Включено | +| Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | +| Время безотказной работы | Варьируется | 99.9%+ | +| Общие ежемесячные расходы | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | -| :-: | :-: | :-: | -| Ежемесячная стоимость сервера\* | $1100 в месяц за ноду | $0 | -| Стоимость запроса | $4000 | $1,200 per month | -| Необходимое количество нод | 10 | Не подходит | -| Время разработки | $6,000 или больше в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | -| Запросы в месяц | Ограничен возможностями инфраструктуры | ~30,000,000 | -| Стоимость одного запроса | $0 | $0.00004 | -| Infrastructure | Централизованная | Децентрализованная | -| Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | -| Время безотказной работы | Варьируется | 99.9%+ 
| -| Общие ежемесячные расходы | $11,000+ | $1,200 | +| Сравнение затрат | Самостоятельный хостинг | Сеть The Graph | +| :-----------------------------: | :-----------------------------------------------------------: | :-----------------------------------------------------------: | +| Ежемесячная стоимость сервера\* | $1100 в месяц за ноду | $0 | +| Стоимость запроса | $4000 | $1,200 per month | +| Необходимое количество нод | 10 | Не подходит | +| Время разработки | $6,000 или больше в месяц | Нет, встроен в сеть с глобально распределенными Индексаторами | +| Запросы в месяц | Ограничен возможностями инфраструктуры | ~30,000,000 | +| Стоимость одного запроса | $0 | $0.00004 | +| Инфраструктура | Централизованная | Децентрализованная | +| Географическая избыточность | общие затраты на каждую дополнительную ноду составляют $1,200 | Включено | +| Время безотказной работы | Варьируется | 99.9%+ | +| Общие ежемесячные расходы | $11,000+ | $1,200 | \* включая расходы на резервное копирование: $50-$100 в месяц @@ -75,18 +75,18 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. 
-Курирование сигнала на субграфе - это необязательная единовременная стоимость, равная нулю (например, сигнал стоимостью 1 тыс. долларов может быть курирован на субграфе, а затем отозван - с возможностью получения прибыли в процессе). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). -## No Setup Costs & Greater Operational Efficiency +## Отсутствие затрат на настройку и более высокая эксплуатационная эффективность Нулевая плата за установку. Приступайте к работе немедленно, без каких-либо затрат на настройку или накладные расходы. Никаких требований к оборудованию. Отсутствие перебоев в работе из-за централизованной инфраструктуры и больше времени для концентрации на Вашем основном продукте. Нет необходимости в резервных серверах, устранении неполадок или дорогостоящих инженерных ресурсах. -## Reliability & Resiliency +## Надежность и устойчивость -The Graph’s decentralized network gives users access to geographic redundancy that does not exist when self-hosting a `graph-node`. Queries are served reliably thanks to 99.9%+ uptime, achieved by hundreds of independent Indexers securing the network globally. +Децентрализованная сеть The Graph предоставляет пользователям доступ к географической избыточности, которой не существует при самостоятельном размещении `graph-node`. Запросы обслуживаются надежно благодаря времени безотказной работы более 99,9%, достигаемому сотнями независимых Индексаторов, обеспечивающими безопасность сети по всему миру. -Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. +Итог: The Graph Network дешевле, проще в использовании и дает превосходные результаты по сравнению с запуском `graph-node` локально. 
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/ru/resources/glossary.mdx b/website/src/pages/ru/resources/glossary.mdx index ffcd4bca2eed..9f55e53ab4e5 100644 --- a/website/src/pages/ru/resources/glossary.mdx +++ b/website/src/pages/ru/resources/glossary.mdx @@ -1,83 +1,83 @@ --- -title: Glossary +title: Глоссарий --- -- **The Graph**: A decentralized protocol for indexing and querying data. +- **The Graph**: Децентрализованный протокол для индексирования и запроса данных. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. 
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. -- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Индексаторы**: Участники сети, которые запускают ноды индексирования для индексирования данных из блокчейнов и обслуживания запросов GraphQL. -- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. +- **Потоки доходов Индексатора**: Индексаторы получают вознаграждение в GRT с помощью двух компонентов: скидки на сборы за запросы и вознаграждения за индексирование. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. -- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. +- **Собственный стейк Индексатора**: Сумма GRT, которую Индексаторы стейкают для участия в децентрализованной сети. Минимальная сумма составляет 100 000 GRT, верхнего предела нет. 
-- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. +- **Лимит делегирования**: Максимальная сумма GRT, которую Индексатор может получить от Делегаторов. Индексаторы могут принимать делегированные средства только в пределах 16-кратного размера их собственного стейка, и превышение этого лимита приводит к снижению вознаграждений. Например, при собственном стейке в 1 млн GRT лимит делегирования составит 16 млн GRT. При этом Индексаторы могут повысить этот лимит, увеличив свой собственный стейк. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. -- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. 
+- **Налог на делегирование**: Комиссия в размере 0,5%, уплачиваемая Делегаторами, когда они делегируют GRT Индексаторам. GRT, использованный для оплаты комиссий, сжигается. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. -- **Epoch**: A unit of time within the network. 
Currently, one epoch is 6,646 blocks or approximately 1 day. +- **Эпоха**: Единица времени в сети. В настоящее время одна эпоха составляет 6 646 блоков или приблизительно 1 день. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Рыбаки**: Роль в сети The Graph Network, которую выполняют участники, отслеживающие точность и целостность данных, предоставляемых Индексаторами. Когда Рыбак идентифицирует ответ на запрос или POI, который, по его мнению, является неверным, он может инициировать спор против Индексатора. Если спор будет решен в пользу Рыбака, Индексатор потеряет 2,5% своего стейка. Из этой суммы 50% присуждается Рыбаку в качестве вознаграждения за его бдительность, а оставшиеся 50% изымаются из обращения (сжигаются). 
Этот механизм предназначен для того, чтобы побудить Рыбаков поддерживать надежность сети, гарантируя, что Индексаторы будут нести ответственность за предоставляемые ими данные. -- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. +- **Арбитры**: Арбитры — это участники сети, назначаемые в рамках процесса управления. Роль Арбитра — принимать решения по результатам споров об индексировании и запросах. Их цель — максимизировать полезность и надежность The Graph Network. -- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. +- **Сокращение**: Собственный стейк GRT Индексатора может быть сокращен за предоставление неверного POI или неточных данных. Процент сокращения — это параметр протокола, в настоящее время установленный на уровне 2,5% от собственного стейка Индексатора. 50% сокращенного GRT достается Рыбаку, который оспорил неточные данные или неверный POI. Остальные 50% сжигаются. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. -- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. +- **Награды за делегирование**: Вознаграждения, которые Делегаторы получают за делегирование GRT Индексаторам. Награды за делегирование распределяются в GRT. -- **GRT**: The Graph's work utility token.
GRT provides economic incentives to network participants for contributing to the network. +- **GRT**: Рабочий служебный токен The Graph. GRT предоставляет участникам сети экономические стимулы за вклад в сеть. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. -- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. 
+- **Клиент The Graph**: Библиотека для децентрализованного создания приложений (dapps) на основе GraphQL. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. -- **Graph CLI**: A command line interface tool for building and deploying to The Graph. +- **Graph CLI**: Инструмент интерфейса командной строки для создания и развертывания в The Graph. -- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. +- **Период восстановления**: Время, которое должно пройти, прежде чем Индексатор, изменивший свои параметры делегирования, сможет сделать это снова. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx index c52b3b97cda2..c2a4a750b1bc 100644 --- a/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/ru/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,49 +2,49 @@ title: Руководство по миграции AssemblyScript --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -Это позволит разработчикам субграфов использовать более новые возможности языка AS и стандартной библиотеки. +That will enable Subgraph developers to use newer features of the AS language and standard library. -This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 +Это руководство применимо для всех, кто использует `graph-cli`/`graph-ts` версии ниже 0.22.0. Если у Вас уже есть версия выше (или равная) этой, значит, Вы уже использовали версию 0.19.10 AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. 
## Особенности ### Новый функционал -- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) -- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Added support for x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Added `StaticArray`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) -- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) -- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) -- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) -- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) -- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) -- Added support for template literal strings ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) -- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) -- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) -- Add `toUTCString` for `Date` 
([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) -- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) +- Теперь `TypedArray` можно создавать из `ArrayBuffer`, используя [новый статический метод `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- Новые функции стандартной библиотеки: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`и `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Добавлена поддержка x instanceof GenericClass ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Добавлен `StaticArray`, более эффективный вариант массива ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Добавлен `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Реализован аргумент `radix` в `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Добавлена поддержка разделителей в литералах с плавающей точкой ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Добавлена поддержка функций первого класса ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Добавлены встроенные функции: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Реализован `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Добавлена поддержка строк с шаблонными литералами ([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Добавлены `encodeURI(Component)` и `decodeURI(Component)` 
([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Добавлены `toString`, `toDateString` и `toTimeString` в `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Добавлен `toUTCString` для `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Добавлен встроенный тип `nonnull/NonNullable` ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) ### Оптимизации -- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) -- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) -- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Функции `Math`, такие как `exp`, `exp2`, `log`, `log2` и`pow`, были заменены на более быстрые варианты ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Немного оптимизирована функция `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Кэшировано больше обращений к полям в стандартных Map и Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Проведена оптимизация для степеней двойки в `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) ### Прочее -- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) -- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Тип литерала массива теперь может быть выведен 
из его содержимого ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Обновлена стандартная библиотека до Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) ## Как выполнить обновление? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,11 +52,11 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` -2. Update the `graph-cli` you're using to the `latest` version by running: +2. Обновите используемую Вами версию `graph-cli` до `latest`, выполнив команду: ```bash # если он у Вас установлен глобально @@ -66,14 +66,14 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` -3. Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: +3. Сделайте то же самое для `graph-ts`, но вместо глобальной установки сохраните его в основных зависимостях: ```bash npm install --save @graphprotocol/graph-ts@latest ``` 4. Следуйте остальной части руководства, чтобы исправить языковые изменения. -5. Run `codegen` and `deploy` again. +5. Снова запустите `codegen` и `deploy`. ## Критические изменения @@ -106,11 +106,11 @@ let maybeValue = load()! // прерывается во время выполн maybeValue.aMethod() ``` -Если Вы не уверены, что выбрать, мы рекомендуем всегда использовать безопасную версию. Если значение не существует, Вы можете просто выполнить раннее выражение if с возвратом в обработчике субграфа. +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in your Subgraph handler.
### Затенение переменных -Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: +Раньше можно было использовать [затенение переменных](https://en.wikipedia.org/wiki/Variable_shadowing), и такой код работал: ```typescript let a = 10 @@ -132,7 +132,7 @@ in assembly/index.ts(4,3) ### Нулевые сравнения -Выполняя обновление своего субграфа, иногда Вы можете получить такие ошибки: +When upgrading your Subgraph, you might sometimes get errors like these: ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. @@ -141,12 +141,12 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` -To solve you can simply change the `if` statement to something like this: +Для решения этой проблемы можно просто изменить оператор `if` на что-то вроде этого: ```typescript if (!decimals) { - // or + // или if (decimals === null) { ``` @@ -155,16 +155,16 @@ To solve you can simply change the `if` statement to something like this: ### Кастинг -The common way to do casting before was to just use the `as` keyword, like this: +Раньше преобразование типов обычно выполнялось с использованием ключевого слова `as`, например: ```typescript let byteArray = new ByteArray(10) -let uint8Array = byteArray as Uint8Array // equivalent to: byteArray +let uint8Array = byteArray as Uint8Array // эквивалентно: byteArray ``` Однако это работает только в двух случаях: -- Primitive casting (between types such as `u8`, `i32`, `bool`; eg: `let b: isize = 10; b as usize`); +- Примитивное преобразование (между такими типами, как `u8`, `i32`, `bool`; например: `let b: isize = 10; b as usize`); - Укрупнение по наследованию классов (subclass → superclass) Примеры: @@ -177,55 +177,55 @@ let c: usize = a + (b as usize) ``` ```typescript -// upcasting on class inheritance
+// приведение к базовому типу при наследовании классов class Bytes extends Uint8Array {} -let bytes = new Bytes(2) -// bytes // same as: bytes as Uint8Array +let bytes = new Bytes(2); +// bytes // то же самое, что: bytes as Uint8Array ``` -There are two scenarios where you may want to cast, but using `as`/`var` **isn't safe**: +Есть два сценария, где Вам может понадобиться преобразование типов, но использование `as`/`var` **небезопасно**: - Понижение уровня наследования классов (superclass → subclass) - Между двумя типами, имеющими общий супер класс ```typescript -// downcasting on class inheritance +// понижение уровня наследования классов class Bytes extends Uint8Array {} let uint8Array = new Uint8Array(2) -// uint8Array // breaks in runtime :( +// uint8Array // перерывы в работе :( ``` ```typescript -// between two types that share a superclass +// между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} let bytes = new Bytes(2) -// bytes // breaks in runtime :( +// bytes // перерывы в работе :( ``` -For those cases, you can use the `changetype` function: +В таких случаях Вы можете использовать функцию `changetype`: ```typescript -// downcasting on class inheritance +// понижение уровня наследования классов class Bytes extends Uint8Array {} let uint8Array = new Uint8Array(2) -changetype(uint8Array) // works :) +changetype(uint8Array) // работает :) ``` ```typescript -// between two types that share a superclass +// между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} let bytes = new Bytes(2) -changetype(bytes) // works :) +changetype(bytes) // работает :) ``` -If you just want to remove nullability, you can keep using the `as` operator (or `variable`), but make sure you know that value can't be null, otherwise it will break. 
+Если Вы просто хотите убрать возможность обнуления, Вы можете продолжить использовать оператор `as` (или `variable`), но помните, что это значение не может быть нулевым, иначе оно приведет к ошибке. ```typescript // удалить значение NULL @@ -238,7 +238,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` -For the nullability case we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks), it will make your code cleaner 🙂 +В случае возможности обнуления мы рекомендуем ознакомиться с [функцией проверки обнуления](https://www.assemblyscript.org/basics.html#nullability-checks), которая сделает код чище 🙂 Также мы добавили еще несколько статических методов в некоторые типы, чтобы облегчить кастинг: @@ -249,7 +249,7 @@ For the nullability case we recommend taking a look at the [nullability check fe ### Проверка нулевого значения с доступом к свойству -To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: +Чтобы использовать [функцию проверки на обнуляемость](https://www.assemblyscript.org/basics.html#nullability-checks), Вы можете использовать либо операторы `if`, либо тернарный оператор (`?` и `:`), например: ```typescript let something: string | null = 'data' @@ -267,7 +267,7 @@ if (something) { } ``` -However that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: +Однако это работает только тогда, когда Вы выполняете `if` / тернарную операцию для переменной, а не для доступа к свойству, например: ```typescript class Container { @@ -330,7 +330,7 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // не выдает ошибок времени компиляции, как это должно быть ``` -Мы открыли вопрос по этому поводу для компилятора AssemblyScript, но пока, если Вы выполняете подобные операции в своих 
мэппингах субграфов, Вам следует изменить их так, чтобы перед этим выполнялась проверка на нулевое значение. +We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to perform a null check first. ```typescript let wrapper = new Wrapper(y) @@ -352,7 +352,7 @@ value.x = 10 value.y = 'content' ``` -Он будет скомпилирован, но сломается во время выполнения. Это происходит из-за того, что значение не было инициализировано, поэтому убедитесь, что Ваш субграф инициализировал свои значения, например так: +It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: ```typescript var value = new Type() // initialized @@ -381,7 +381,7 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` -You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. So you either initialize it first: +Вам необходимо убедиться, что значение `total.amount` инициализировано, потому что, если Вы попытаетесь получить доступ к сумме, как в последней строке, произойдет сбой.
Таким образом, Вы либо инициализируете его первым: ```typescript let total = Total.load('latest') @@ -394,7 +394,7 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` -Or you can just change your GraphQL schema to not use a nullable type for this property, then we'll initialize it as zero on the `codegen` step 😉 +Или Вы можете просто изменить свою схему GraphQL, чтобы не использовать для этого свойства тип, допускающий обнуление, тогда мы инициализируем его как ноль на этапе `codegen` 😉 ```graphql type Total @entity { @@ -425,7 +425,7 @@ export class Something { } ``` -The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: +Компилятор выдаст ошибку, потому что Вам нужно либо добавить инициализатор для свойств, являющихся классами, либо добавить оператор `!`: ```typescript export class Something { @@ -451,12 +451,12 @@ export class Something { ### Инициализация массива -The `Array` class still accepts a number to initialize the length of the list, however you should take care because operations like `.push` will actually increase the size instead of adding to the beginning, for example: +Класс `Array` по-прежнему принимает число для инициализации длины списка, однако следует учитывать, что операции, такие как `.push`, будут увеличивать размер массива, а не добавлять элемент в начало. 
Например: ```typescript let arr = new Array(5) // ["", "", "", "", ""] -arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( +arr.push('something') // ["", "", "", "", "", "something"] // размер 6 :( ``` В зависимости от используемых типов, например, допускающих значение NULL, и способа доступа к ним, можно столкнуться с ошибкой времени выполнения, подобной этой: @@ -465,7 +465,7 @@ arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` -To actually push at the beginning you should either, initialize the `Array` with size zero, like this: +Чтобы действительно добавить элемент в начало, следует инициализировать `Array` с нулевым размером, например, так: ```typescript let arr = new Array(0) // [] @@ -483,7 +483,7 @@ arr[0] = 'something' // ["something", "", "", "", ""] ### Схема GraphQL -This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. +Это не прямое изменение AssemblyScript, но Вам, возможно, придется обновить файл `schema.graphql`. Теперь Вы больше не можете определять поля в своих типах, которые являются списками, не допускающими значение NULL. 
Если у Вас такая схема: @@ -498,7 +498,7 @@ type MyEntity @entity { } ``` -You'll have to add an `!` to the member of the List type, like this: +Вам нужно добавить `!` к элементу типа List, например, так: ```graphql type Something @entity { @@ -511,14 +511,14 @@ type MyEntity @entity { } ``` -This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). +Это изменение связано с различиями в обработке возможности обнуления между версиями AssemblyScript и связано с файлом `src/generated/schema.ts` (значение по умолчанию, хотя Вы могли его изменить). ### Прочее -- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) -- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) -- The result of a `**` binary operation is now the common denominator integer if both operands are integers. Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) -- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) -- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 respectively 4 least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. 
Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) -- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- `Map#set` и `Set#add` приведены в соответствие со спецификацией, возвращая `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Массивы больше не наследуются от ArrayBufferView, а теперь являются отдельными ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Классы, инициализируемые из объектных литералов, больше не могут определять конструктор ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Результат бинарной операции `**` теперь является целым числом общего типа операндов, если оба операнда - целые числа. Ранее результат был числом с плавающей точкой, как при вызове `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- При приведении к `bool` значение `NaN` теперь принудительно преобразуется в `false` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- При сдвиге небольшого целочисленного значения типа `i8`/`u8` или `i16`/`u16` на результат влияют только 3 или 4 младших бита значения RHS, аналогично результату `i32.shl`, на который влияют только 5 младших битов значения RHS.
Пример: `someI8 << 8` ранее выдавало значение 0, а теперь выдает `someI8` благодаря маскировке RHS как `8 & 7 = 0` (3 бита) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Исправлена ошибка в сравнении строк разной длины ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx index b7cb792259b3..01911cda4906 100644 --- a/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/ru/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Руководство по переходу на валидацию GraphQL +title: GraphQL Validations Migration Guide --- Вскоре `graph-node` будет поддерживать 100-процентное покрытие [спецификации GraphQL Validation] (https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ title: Руководство по переходу на валидацию Grap Вы можете использовать инструмент миграции CLI, чтобы найти любые проблемы в операциях GraphQL и исправить их. В качестве альтернативы вы можете обновить конечную точку своего клиента GraphQL, чтобы использовать конечную точку `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Проверка запросов на этой конечной точке поможет Вам обнаружить проблемы в Ваших запросах. -> Не все субграфы нужно будет переносить, если Вы используете [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) или [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), они уже гарантируют корректность Ваших запросов. +> Not all Subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. 
## CLI-инструмент миграции @@ -284,8 +284,8 @@ query { ```graphql query { - # В конце концов, у нас есть два определения "x", указывающие - # на разные поля! + # В конце концов, у нас есть два определения "x", указывающие + # на разные поля! ...A ...B } @@ -437,7 +437,7 @@ query { ```graphql query purposes { # Если в схеме "name" определено как "String", - # этот запрос не пройдёт валидацию. + # этот запрос не пройдёт валидацию. purpose(name: 1) { id } @@ -447,8 +447,8 @@ query purposes { query purposes($name: Int!) { # Если "name" определено в схеме как `String`, - # этот запрос не пройдёт валидацию, потому что - # используемая переменная имеет тип `Int` + # этот запрос не пройдёт валидацию, потому что + # используемая переменная имеет тип `Int` purpose(name: $name) { id } diff --git a/website/src/pages/ru/resources/roles/curating.mdx b/website/src/pages/ru/resources/roles/curating.mdx index ef319cda705e..61053f5d542b 100644 --- a/website/src/pages/ru/resources/roles/curating.mdx +++ b/website/src/pages/ru/resources/roles/curating.mdx @@ -1,88 +1,88 @@ --- -title: Кураторство +title: Курирование --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. 
In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. -## What Does Signaling Mean for The Graph Network? +## Что означает сигнализация для The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Сигналы кураторов представлены токенами ERC20, называемыми Graph Curation Shares (GCS). Те, кто хочет зарабатывать больше комиссий за запросы, должны направлять свои GRT на субграфы, которые, по их прогнозам, будут генерировать значительный поток комиссий для сети. Кураторы не подвергаются штрафам за некорректное поведение, но существует налог на депозиты Кураторов, чтобы предотвратить принятие решений, которые могут нанести ущерб целостности сети. Кроме того, Кураторы будут получать меньше комиссий за запросы, если они занимаются кураторством субграфов низкого качества, так как будет меньше запросов для обработки или Индексаторов, готовых их обрабатывать. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -При подаче сигнала Кураторы могут решить подать сигнал на определенную версию субграфа или использовать автомиграцию. 
Если они подают сигнал с помощью автомиграции, доли куратора всегда будут обновляться до последней версии, опубликованной разработчиком. Если же они решат подать сигнал на определенную версию, доли всегда будут оставаться на этой конкретной версии. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Индексаторы могут находить субграфы для индексирования на основе сигналов курирования, которые они видят в Graph Explorer (скриншот ниже). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Как подавать Сигнал -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. 
For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -Куратор может выбрать конкретную версию подграфа для сигнализации, или же он может выбрать автоматическую миграцию своего сигнала на самую новую рабочую сборку этого подграфа. Оба варианта являются допустимыми стратегиями и имеют свои плюсы и минусы. +Куратор может выбрать подачу сигнала на конкретную версию субграфа или настроить автоматическую миграцию сигнала на последнюю производственную версию этого субграфа. Оба подхода являются допустимыми и имеют свои плюсы и минусы. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. -Автоматическая миграция вашего сигнала на самую новую рабочую сборку может быть ценной, чтобы гарантировать непрерывное начисление комиссий за запросы. Каждый раз, когда вы осуществляете курирование, взимается комиссия в размере 1%. Вы также заплатите комиссию в размере 0,5% при каждой миграции. Разработчикам подграфов не рекомендуется часто публиковать новые версии - они должны заплатить комиссию на курирование в размере 0,5% на все автоматически мигрированные доли курации. +Автоматическая миграция Вашего сигнала на новейшую производственную версию может быть полезной, чтобы гарантировать непрерывное начисление комиссий за запросы. Каждый раз, когда Вы осуществляете курирование, взимается комиссия в размере 1%. Также при каждой миграции взимается налог на курирование в размере 0,5%. 
Разработчикам субграфов не рекомендуется часто публиковать новые версии, так как они обязаны оплачивать комиссию в размере 0,5% за все автоматически перенесённые кураторские доли. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. -## Withdrawing your GRT +## Вывод Вашего GRT -Curators have the option to withdraw their signaled GRT at any time. +Кураторы имеют возможность в любой момент отозвать свои заявленные GRT. -Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). +В отличие от процесса делегирования, если Вы решите отозвать заявленный Вами GRT, Вам не придется ждать периода размораживания и Вы получите всю сумму (за вычетом 1% налога на курирование). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Как только куратор отзовет свой сигнал, индексаторы могут продолжить индексирование субграфа, даже если в данный момент нет активного сигнала GRT. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. 
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Риски -1. Рынок запросов в The Graph по своей сути молод, и существует риск того, что ваш %APY может оказаться ниже, чем вы ожидаете, из-за зарождающейся динамики рынка. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. Подграф может выйти из строя из-за ошибки. За неудавшийся подграф не начисляется плата за запрос. В результате вам придется ждать, пока разработчик исправит ошибку и выложит новую версию. - - Если вы подписаны на новейшую версию подграфа, ваши общие ресурсы автоматически перейдут на эту новую версию. При этом будет взиматься кураторская комиссия в размере 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +1. Рынок запросов в The Graph по своей сути молод, и существует риск того, что Ваш %APY может оказаться ниже, чем Вы ожидаете, из-за зарождающейся динамики рынка. +2. Плата за курирование — когда куратор подает сигнал GRT на субграф, он платит налог на курирование в размере 1%. Этот сбор сжигается. +3.
(Только для Ethereum) Когда кураторы сжигают свои доли для вывода GRT, оценочная стоимость оставшихся долей в GRT уменьшается. Учтите, что в некоторых случаях кураторы могут решить сжечь все свои доли **одновременно**. Такая ситуация может возникнуть, если разработчик dApp перестанет обновлять и улучшать свой субграф или если субграф выйдет из строя. В результате оставшиеся кураторы могут вывести лишь часть своего первоначального GRT. Если Вы ищете роль в сети с меньшим уровнем риска, обратите внимание на [Делегаторов](/resources/roles/delegating/delegating/). +4. Субграф может выйти из строя из-за ошибки. Неисправный субграф не генерирует комиссии за запросы. В таком случае Вам придется ждать, пока разработчик исправит ошибку и развернет новую версию. + - Если Вы подписаны на самую новую версию субграфа, Ваши доли будут автоматически мигрировать на эту новую версию. При этом взимается 0,5% налог на кураторство. + - Если Вы подали сигнал на определенную версию субграфа и она вышла из строя, Вам потребуется вручную сжечь свои кураторские доли. Затем Вы сможете подать сигнал на новую версию субграфа, при этом будет взиматься налог на кураторство в размере 1%. ## Часто задаваемые вопросы по кураторству -### 1. Какой % от оплаты за запрос получают кураторы? +### 1. Какой % от оплаты за запрос получают Кураторы? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +Подавая сигнал о субграфе, Вы получаете долю от всех комиссий за запросы, которые генерирует субграф. 10% от всех сборов за запросы переходят Кураторам пропорционально их доле курирования. Эти 10% подлежат регулированию через механизм управления. -### 2. Как определить, какие подграфы являются высококачественными, чтобы подавать на них сигналы? +### 2. Как определить, какие субграфы являются качественными для подачи сигнала? 
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Какова стоимость обновления подграфа? +### 3. Какова стоимость обновления субграфа? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. 
Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +При переносе Вашей кураторской доли в новую версию субграфа взимается курационный налог в размере 1%. Кураторы могут подписаться на самую последнюю версию субграфа. Когда кураторская доля автоматически переносится в новую версию, Кураторы также платят половину кураторского налога, т. е. 0,5%, потому что обновление субграфов — это внутрисетевое действие, требующее затрат газа. -### 4. Как часто я могу обновлять свой подграф? +### 4. How often can I update my Subgraph? -Рекомендуется не обновлять свои подграфы слишком часто. См. выше для более подробной информации. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Могу ли я продать свои кураторские доли? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). +Акции курирования нельзя «купить» или «продать», как другие токены ERC20, с которыми Вы, возможно, знакомы. Их можно только отчеканить (создать) или сжечь (уничтожить). -As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +Будучи куратором Arbitrum, Вы гарантированно вернете первоначально внесенный Вами GRT (за вычетом налога). -### 6. Am I eligible for a curation grant? +### 6. Имею ли я право на получение гранта на кураторство? -Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. +Гранты на кураторство определяются индивидуально в каждом конкретном случае. Если Вам нужна помощь с кураторством, отправьте запрос на support@thegraph.zendesk.com. Вы все еще в замешательстве?
Ознакомьтесь с нашим видеоруководством по кураторству: diff --git a/website/src/pages/ru/resources/roles/delegating/delegating.mdx b/website/src/pages/ru/resources/roles/delegating/delegating.mdx index a0f6b73d1c06..c6e6f6eb8b33 100644 --- a/website/src/pages/ru/resources/roles/delegating/delegating.mdx +++ b/website/src/pages/ru/resources/roles/delegating/delegating.mdx @@ -2,54 +2,54 @@ title: Делегирование --- -To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +Чтобы приступить к делегированию прямо сейчас, ознакомьтесь с разделом [делегирование в The Graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). ## Обзор -Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. +Делегаторы зарабатывают GRT, делегируя GRT Индексаторам, что повышает безопасность и функциональность сети. -## Benefits of Delegating +## Преимущества делегирования -- Strengthen the network’s security and scalability by supporting Indexers. -- Earn a portion of rewards generated by the Indexers. +- Усиление безопасности и масштабируемости сети за счет поддержки Индексаторов. +- Получение части вознаграждений, генерируемых Индексаторами. -## How Does Delegation Work? +## Как работает делегирование? -Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. +Делегаторы получают вознаграждения GRT от Индексатора(ов), которому(ым) они делегируют свои GRT. -An Indexer's ability to process queries and earn rewards depends on three key factors: +Способность Индексатора обрабатывать запросы и получать вознаграждения зависит от трех ключевых факторов: -1. The Indexer's Self-Stake (GRT staked by the Indexer). -2. The total GRT delegated to them by Delegators. -3. The price the Indexer sets for queries. +1. Собственной ставки Индексатора (GRT застейканные Индексатором). +2. Общей суммы GRT, делегированной им Делегаторами. +3. 
Цены, которую Индексатор устанавливает за запросы. -The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. +Чем больше GRT застейкано и делегировано Индексатору, тем больше запросов он сможет обработать, что приведет к более высоким потенциальным вознаграждениям как для Делегатора, так и для Индексатора. -### What is Delegation Capacity? +### Что такое объем делегирования? -Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. +Под объемом делегирования понимается максимальная сумма GRT, которую Индексатор может принять от Делегаторов, исходя из собственной доли Индексатора. -The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. +The Graph Network включает коэффициент делегирования 16, что означает, что Индексатор может принять делегированные GRT, в 16 раз превышающие его собственный стейк. -For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. +Например, если Индексатор имеет собственную долю в размере 1 млн GRT, его объем делегирования составляет 16 млн. -### Why Does Delegation Capacity Matter? +### Почему объем делегирования имеет значение? -If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. +Если Индексатор превышает свой объем делегирования, вознаграждения всех Делегаторов размываются, поскольку избыточный делегированный GRT не может быть эффективно использован в рамках протокола. -This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. +Поэтому Делегаторам крайне важно оценить текущий объем делегирования Индексатора, прежде чем его выбирать. 
-Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. +Индексаторы могут увеличить свой объем делегирования, увеличив свой собственный стейк, тем самым повысив лимит делегированных токенов. -## Delegation on The Graph +## Делегирование на The Graph -> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). +> Пожалуйста, обратите внимание на то, что это руководство не охватывает такие шаги, как настройка MetaMask. Сообщество Ethereum предоставляет [исчерпывающий ресурс по кошелькам](https://ethereum.org/en/wallets/). -There are two sections in this guide: +Данное руководство состоит из двух разделов: - Риски связанные с делегацией в сети The Graph - Как расчитать примерный доход @@ -58,7 +58,7 @@ There are two sections in this guide: Ниже указаны основные риски Делегатора. -### The Delegation Tax +### Комиссия за делегирование Делегаторы не могут быть наказаны за некорректное поведение, но они уплачивают комиссию на делегацию, который должен стимулировать обдуманный выбор Индексатора для делегации. @@ -68,19 +68,19 @@ There are two sections in this guide: - В целях безопасности Вам следует рассчитать потенциальную прибыль при делегировании Индексатору. Например, Вы можете подсчитать, сколько дней пройдет, прежде чем Вы вернете налог в размере 0,5% за своё делегирование. -### The Undelegation Period +### Период отмены делегирования -When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. +Когда Делегатор решает отменить делегирование, на его токены распространяется 28-дневный период отмены делегирования. -This means they cannot transfer their tokens or earn any rewards for 28 days. +Это означает, что они не смогут переводить свои токены или получать какие-либо вознаграждения в течение 28 дней. 
-After the undelegation period, GRT will return to your crypto wallet. +По истечении периода отмены делегирования GRT вернутся на Ваш криптокошелек. ### Почему это важно? -If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. +Если Вы выберете Индексатора, которому нельзя доверять или который плохо выполняет свою работу, Вам захочется отозвать делегирование. Это приведёт к тому, что Вы потеряете возможности получения наград. -As a result, it’s recommended that you choose an Indexer wisely. +Поэтому рекомендуется тщательно выбирать Индексатора. ![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) @@ -96,25 +96,25 @@ As a result, it’s recommended that you choose an Indexer wisely. - **Снижение комиссии за запросы** — это то же самое, что и снижение вознаграждения за индексирование, но она применяется к доходам от комиссий за запросы, которые собирает Индексатор. -- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. +- Настоятельно рекомендуется посетить [Discord The Graph](https://discord.gg/graphprotocol), чтобы узнать, какие Индексаторы имеют лучшую социальную и техническую репутацию. -- Many Indexers are active in Discord and will be happy to answer your questions. +- Многие Индексаторы активно участвуют в Discord и будут рады ответить на Ваши вопросы. ## Расчет ожидаемой доходности Делегаторов -> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). +> Рассчитайте свою ROI (рентабельность инвестиций) от делегирования [здесь](https://thegraph.com/explorer/delegate?chain=arbitrum-one). 
-A Delegator must consider a variety of factors to determine a return: +Делегатор должен учитывать ряд факторов для определения доходности: -An Indexer's ability to use the delegated GRT available to them impacts their rewards. +Способность Индексатора использовать доступные ему делегированные GRT влияет на его вознаграждения. -If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. +Если Индексатор не распределяет все имеющиеся в его распоряжении GRT, он может упустить возможность максимизировать потенциальный доход как для себя, так и для своих Делегаторов. -Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. +Индексаторы могут закрыть распределение и получить вознаграждение в любое время в течение периода от 1 до 28 дней. Однако, если вознаграждения не будут собраны своевременно, общая сумма вознаграждений может оказаться ниже, даже если определенный процент вознаграждений останется незабранным. ### Учёт части комиссии за запросы и части комиссии за индексирование -You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. +Вам следует выбрать Индексатора, который открыто устанавливает размер комиссии за запрос и снижение платы за индексирование. 
Формула следующая: diff --git a/website/src/pages/ru/resources/roles/delegating/undelegating.mdx b/website/src/pages/ru/resources/roles/delegating/undelegating.mdx index d9422b997a77..586a085096d2 100644 --- a/website/src/pages/ru/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/ru/resources/roles/delegating/undelegating.mdx @@ -1,73 +1,69 @@ --- -title: Undelegating +title: Отмена делегирования --- -Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). +Узнайте, как вывести свои делегированные токены через [Graph Explorer](https://thegraph.com/explorer) или [Arbiscan](https://arbiscan.io/). -> To avoid this in the future, it's recommended that you select an Indexer wisely. To learn how to select and Indexer, check out the Delegate section in Graph Explorer. +> Чтобы избежать этого в будущем, рекомендуется тщательно выбирать Индексатора. Чтобы узнать, как выбрать Индексатора, ознакомьтесь с разделом "Делегировать" в Graph Explorer. -## How to Withdraw Using Graph Explorer +## Как вывести с помощью Graph Explorer ### Пошаговое руководство -1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. +1. Посетите [Graph Explorer](https://thegraph.com/explorer). Убедитесь, что вы находитесь в Explorer, а **не** в Subgraph Studio. -2. Click on your profile. You can find it on the top right corner of the page. +2. Нажмите на свой профиль. Он находится в верхнем правом углу страницы. + - Убедитесь, что ваш кошелек подключен. Если он не подключен, вместо этого вы увидите кнопку "подключить". - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. +3. Когда вы окажетесь в своем профиле, нажмите на вкладку "Делегирование". В этой вкладке вы сможете увидеть список Индексаторов, которым вы делегировали свои токены. -3.
Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. +4. Нажмите на Индексатора, из которого вы хотите вывести свои токены. + - Убедитесь, что вы записали конкретного Индексатора, так как вам нужно будет найти его снова для вывода. -4. Click on the Indexer from which you wish to withdraw your tokens. +5. Выберите опцию "Отменить делегирование", кликнув на три точки рядом с Индексатором с правой стороны, как показано на изображении ниже: - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + ![Кнопка отменить делегирование](/img/undelegate-button.png) -5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: +6. После примерно [28 эпох](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 дней) вернитесь в раздел "Делегирование" и найдите конкретного Индексатора, делегацию которого вы отменили. - ![Undelegate button](/img/undelegate-button.png) +7. Как только вы найдете Индексатора, кликните на три точки рядом с ним и продолжите вывод всех ваших токенов. -6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. +## Как вывести средства с использованием Arbiscan -7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. - -## How to Withdraw Using Arbiscan - -> This process is primarily useful if the UI in Graph Explorer experiences issues. +> Этот процесс в основном полезен, если пользовательский интерфейс в Graph Explorer испытывает проблемы. ### Пошаговое руководство -1. Find your delegation transaction on Arbiscan. - - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) - -2. 
Navigate to "Transaction Action" where you can find the staking extension contract: +1. Найдите вашу транзакцию делегирования на Arbiscan. + - Вот [пример транзакции на Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) +2. Перейдите в раздел "Действие по транзакции", где вы можете найти контракт расширения стейкинга: + - [Это контракт расширения стейкинга для приведенного выше примера](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) -3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) +3. Затем нажмите на «Контракт». ![Вкладка контракта на Arbiscan, между трансфер NFT и События](/img/arbiscan-contract.png) -4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. +4. Прокрутите вниз и скопируйте Contract ABI. Рядом с ним должна быть небольшая кнопка, которая позволяет скопировать всё. -5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. +5. Нажмите на кнопку своего профиля в верхнем правом углу страницы. Если вы ещё не создали аккаунт, пожалуйста, сделайте это. -6. Once you're in your profile, click on "Custom ABI”. +6. После того как вы окажетесь в своём профиле, нажмите на "Custom ABI". -7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) +7. Вставьте пользовательский ABI, который вы скопировали из контракта расширения для стейкинга, и добавьте пользовательский ABI для адреса: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**пример адреса**) -8. 
Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. +8. Перейдите обратно к [контракту расширения для стейкинга](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Теперь вызовите функцию `unstake` на вкладке [Write as Proxy](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), которая была добавлена благодаря пользовательскому ABI, с количеством токенов, которые вы делегировали. -9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: +9. Если вы не знаете, сколько токенов вы делегировали, вы можете вызвать `getDelegation` на вкладке Read Custom. Вам нужно будет вставить свой адрес (адрес делегатора) и адрес Индексатора, которому вы делегировали, как показано на следующем скриншоте: - ![Both of the addresses needed](/img/get-delegate.png) + ![Оба адреса, которые нужны](/img/get-delegate.png) - - This will return three numbers. The first number is the amount you can unstake. + - Это вернет три числа. Первое число — это количество токенов, которые вы можете вывести. -10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. +10. После того как вы вызовете `unstake`, вы сможете вывести токены примерно через 28 эпох (28 дней), вызвав функцию `withdraw`. -11. 
You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: +11. Вы можете увидеть, сколько токенов будет доступно для вывода, вызвав функцию `getWithdrawableDelegatedTokens` в разделе Read Custom и передав ей ваш делегированный кортеж. См. скриншот ниже: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Вызовите \`getWithdrawableDelegatedTokens\`, чтобы увидеть количество токенов, которые можно вывести](/img/withdraw-available.png) ## Дополнительные ресурсы -To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. +Чтобы успешно делегировать, ознакомьтесь с [документацией по делегированию](/resources/roles/delegating/delegating/) и проверьте раздел делегирования в Graph Explorer. diff --git a/website/src/pages/ru/resources/subgraph-studio-faq.mdx b/website/src/pages/ru/resources/subgraph-studio-faq.mdx index 4e0eee2dba2d..5c63bc3b3b6d 100644 --- a/website/src/pages/ru/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/ru/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Часто задаваемые вопросы о Subgraph Studio ## 1. Что такое Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Как создать ключ API? @@ -12,20 +12,20 @@ title: Часто задаваемые вопросы о Subgraph Studio ## 3. Могу ли я создать несколько ключей API? -Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). +Да! Вы можете создать несколько ключей API для использования в разных проектах. 
Перейдите по этой [ссылке](https://thegraph.com/studio/apikeys/). ## 4. Как мне настроить ограничения домена для ключа API? После создания ключа API в разделе «Безопасность» Вы можете определить домены, которые могут запрашивать определенный ключ API. -## 5. Могу ли я передать свой субграф другому владельцу? +## 5. Can I transfer my Subgraph to another owner? -Да, субграфы, которые были опубликованы в Arbitrum One, могут быть перенесены в новый кошелек или на кошелек с мультиподписью. Вы можете сделать это, щелкнув три точки рядом с кнопкой «Опубликовать» на странице сведений о субграфе и выбрав «Передать право собственности». +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Обратите внимание, что Вы больше не сможете просматривать или редактировать субграф в Studio после его переноса. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Как мне найти URL-адреса запросов для субграфов, если я не являюсь разработчиком субграфа, который хочу использовать? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
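The placeholder substitution described in the answer above amounts to a simple string replacement. A minimal sketch, in which the gateway URL shape and the `[api-key]` placeholder token are assumptions for illustration; always copy the actual query URL shown in Graph Explorer:

```python
# Hypothetical query URL as copied from Graph Explorer; the exact URL
# shape and the placeholder token are assumptions for this example.
TEMPLATE = "https://gateway.thegraph.com/api/[api-key]/subgraphs/id/QmExampleDeploymentId"

def with_api_key(url_template: str, api_key: str) -> str:
    """Replace the placeholder in a copied query URL with a real API key."""
    return url_template.replace("[api-key]", api_key)

print(with_api_key(TEMPLATE, "0123456789abcdef"))
```

The resulting URL can then be used as the GraphQL endpoint in a dapp or query client.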
-Помните, что Вы можете создать ключ API и запрашивать любой субграф, опубликованный в сети, даже если сами создаете субграф. Эти запросы через новый ключ API являются платными, как и любые другие в сети. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/ru/resources/tokenomics.mdx b/website/src/pages/ru/resources/tokenomics.mdx index e4ab88d45844..4749ec91ae63 100644 --- a/website/src/pages/ru/resources/tokenomics.mdx +++ b/website/src/pages/ru/resources/tokenomics.mdx @@ -1,103 +1,103 @@ --- title: Токеномика сети The Graph sidebarTitle: Tokenomics -description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +description: Сеть The Graph стимулируется мощной токеномикой. Вот как работает GRT, нативный утилитарный токен The Graph. --- ## Обзор -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Специфические особенности -The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. 
GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. +Модель The Graph похожа на модель B2B2C, но она управляется децентрализованной сетью, где участники сотрудничают, чтобы предоставлять данные конечным пользователям в обмен на вознаграждения GRT. GRT – это утилитарный токен The Graph. Он координирует и стимулирует взаимодействие между поставщиками данных и потребителями внутри сети. -The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange. To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). +The Graph играет важную роль в обеспечении большей доступности данных блокчейна и поддерживает рынок для их обмена. Чтобы узнать больше о модели The Graph «плати за то, что тебе нужно», ознакомьтесь с её [бесплатными планами и планами развития](/subgraphs/billing/). -- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) +- Адрес токена GRT в основной сети: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) -- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +- Адрес токена GRT на Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -## The Roles of Network Participants +## Роли участников сети -There are four primary network participants: +Есть четыре основных участника сети: -1. Delegators - Delegate GRT to Indexers & secure the network +1. Делегаторы - Делегируют токены GRT Индексаторам и защищают сеть -2. Кураторы - Ищут лучшие субграфы для Индексаторов +2. Curators - Find the best Subgraphs for Indexers -3. 
Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Индексаторы - Магистральный канал передачи данных блокчейна -Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). +Рыбаки и Арбитры также вносят свой вклад в успех сети, поддерживая работу других основных участников. Для получения дополнительной информации о сетевых ролях [прочитайте эту статью](https://thegraph.com/blog/the-graph-grt-token-economics/). -![Tokenomics diagram](/img/updated-tokenomics-image.png) +![Диаграмма токеномики](/img/updated-tokenomics-image.png) -## Delegators (Passively earn GRT) +## Делегаторы (Пассивный заработок GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. -For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. +Например, если бы Делегатор делегировал 15 тыс. GRT Индексатору, предлагающему 10%, Делегатор получал бы вознаграждение в размере ~1,500 GRT в год. 
-There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. +Существует комиссия на делегирование в размере 0,5%, которая сжигается всякий раз, когда Делегатор делегирует GRT в сети. Если Делегатор решает отозвать свои делегированные GRT, он должен выждать период отмены делегирования длительностью 28 эпох. Каждая эпоха состоит из 6646 блоков, что означает, что 28 эпох в конечном итоге составляют приблизительно 26 дней. -If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. +Если Вы это читаете, значит, Вы можете стать Делегатором прямо сейчас, перейдя на [страницу участников сети](https://thegraph.com/explorer/participants/indexers) и делегировав GRT выбранному Индексатору. -## Curators (Earn GRT) +## Кураторы (Заработок GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. 
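As a sanity check on the delegation figures above (the 15k GRT at 10% example, the 0.5% delegation tax, and the 28-epoch unbonding period of 6,646 blocks each), here is a minimal sketch. The ~12-second average block time used to convert epochs into days is an assumption for illustration, not a protocol constant:

```python
# Sketch of the delegation arithmetic described above.
DELEGATION_TAX = 0.005      # 0.5% burned on every delegation
BLOCKS_PER_EPOCH = 6_646
UNBONDING_EPOCHS = 28
ASSUMED_BLOCK_TIME_S = 12   # assumption: average block time in seconds

def net_delegated(amount_grt: float) -> float:
    """GRT actually delegated after the 0.5% tax is burned."""
    return amount_grt * (1 - DELEGATION_TAX)

def annual_reward(amount_grt: float, effective_rate: float) -> float:
    """Rough annual reward at a given effective rate (e.g. 0.10 for 10%)."""
    return amount_grt * effective_rate

unbonding_days = UNBONDING_EPOCHS * BLOCKS_PER_EPOCH * ASSUMED_BLOCK_TIME_S / 86_400

print(net_delegated(15_000))        # ≈ 14,925 GRT reach the Indexer
print(annual_reward(15_000, 0.10))  # ≈ 1,500 GRT per year
print(round(unbonding_days))        # ≈ 26 days
```

With these inputs the unbonding period comes out at roughly 26 days, matching the figure quoted above.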
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. -## Developers +## Разработчики -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Создание субграфа +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
-### Запрос к существующему субграфу +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. -Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. +Субграфы [запрашиваются с помощью GraphQL](/subgraphs/querying/introduction/), а плата за запрос производится с помощью GRT в [Subgraph Studio](https://thegraph.com/studio/). Плата за запрос распределяется между участниками сети на основе их вклада в протокол. -1% of the query fees paid to the network are burned. +1% от комиссии за запрос, оплаченной в сети, сжигается. -## Indexers (Earn GRT) +## Индексаторы (Заработок GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. -Indexers can earn GRT rewards in two ways: +Индексаторы могут зарабатывать GRT двумя способами: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. 
**Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. -In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. +Для запуска ноды индексирования Индексаторы должны застейкать в сети не менее 100 000 GRT в качестве собственного стейка. Индексаторы заинтересованы в том, чтобы делать собственный стейк GRT пропорционально количеству обслуживаемых ими запросов. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. -The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. +Сумма вознаграждений, которые получает Индексатор, может варьироваться в зависимости от размера его собственного стейка, принятых делегированных средств, качества обслуживания и многих других факторов. -## Token Supply: Burning & Issuance +## Объем токенов: Сжигание и Эмиссия -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. 
These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. -![Total burned GRT](/img/total-burned-grt.jpeg) +![Общее количество сожжённых GRT](/img/total-burned-grt.jpeg) -In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. +В дополнение к этим регулярным процессам сжигания токенов, токен GRT также имеет механизм слэшинга (наказания) за злонамеренное или безответственное поведение Индексаторов. Если Индексатор подвергается слэшингу, 50% его вознаграждения за индексирование за эпоху сжигается (в то время как другая половина достается Рыбаку), а его собственная сумма стейка уменьшается на 2,5%, причем половина этой суммы сгорает. Это создаёт мощный стимул для Индексаторов действовать в интересах сети, обеспечивая её безопасность и стабильность. 
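The percentages in the Indexer and burning sections above (pro-rata issuance per Subgraph and per allocation, the 16x delegation cap, the 0.5%/1%/1% burn taxes, and the slashing split) reduce to simple arithmetic. A minimal sketch; all concrete amounts are illustrative, not real network values:

```python
# Illustrative arithmetic for the Indexer economics described above.
def pro_rata(amount: float, share: float, total: float) -> float:
    """Split `amount` in proportion to `share` of `total`."""
    return amount * share / total if total else 0.0

def usable_delegation(self_stake: float, delegated: float) -> float:
    """Delegated GRT an Indexer can put to work (capped at 16x self-stake)."""
    return min(delegated, 16 * self_stake)

def slash(self_stake: float, epoch_rewards: float):
    """Slashing: 50% of the epoch's indexing rewards are burned (the other
    half goes to the fisherman); 2.5% of self-stake is slashed, half burned."""
    return epoch_rewards * 0.50, self_stake * 0.025, self_stake * 0.025 / 2

# Issuance flows to a Subgraph by curation signal, then to Indexers by stake.
subgraph_rewards = pro_rata(1_000_000, 30_000, 600_000)          # 50,000 GRT
indexer_reward = pro_rata(subgraph_rewards, 400_000, 2_000_000)  # 10,000 GRT

# An over-delegated Indexer: only 16x the self-stake is usable.
usable = usable_delegation(100_000, 2_000_000)                   # 1,600,000 GRT

# Burn taxes: 0.5% on delegation, 1% on curation, 1% of query fees.
burned = 10_000 * 0.005 + 5_000 * 0.01 + 1_000 * 0.01           # ≈ 110 GRT

print(subgraph_rewards, indexer_reward, usable, burned, slash(100_000, 1_000))
```

The same proportional logic applies at any scale; only the protocol percentages are fixed.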
-## Improving the Protocol +## Улучшение протокола -The Graph Network is ever-evolving and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/). +The Graph Network постоянно развивается, и в экономический дизайн протокола регулярно вносятся улучшения, чтобы обеспечить наилучший опыт для всех участников сети. The Graph Council следит за изменениями протокола, и участники сообщества активно привлекаются к этому процессу. Примите участие в улучшении протокола на [форуме The Graph](https://forum.thegraph.com/). diff --git a/website/src/pages/ru/sps/introduction.mdx b/website/src/pages/ru/sps/introduction.mdx index 13b65e8d36fe..d4c5118ad8f6 100644 --- a/website/src/pages/ru/sps/introduction.mdx +++ b/website/src/pages/ru/sps/introduction.mdx @@ -1,30 +1,31 @@ --- -title: Introduction to Substreams-Powered Subgraphs -sidebarTitle: Introduction +title: Введение в субграфы, работающие на основе Субпотоков +sidebarTitle: Введение --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Обзор -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Используя пакет Субпотоков (`.spkg`) в качестве источника данных, Вы даёте своему субграфу доступ к потоку предварительно индексированных данных блокчейна. 
Это позволяет более эффективно и масштабируемо обрабатывать данные, особенно в крупных или сложных блокчейн-сетях. ### Специфические особенности -There are two methods of enabling this technology: +Существует два способа активации этой технологии: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Использование [триггеров](/sps/triggers/) Субпотоков**: Получайте данные из любого модуля Субпотоков, импортируя Protobuf-модель через обработчик субграфа, и переносите всю логику в субграф. Этот метод создает объекты субграфа непосредственно внутри субграфа. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Использование [Изменений Объектов](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: Записывая большую часть логики в Субпотоки, Вы можете напрямую передавать вывод модуля в [graph-node](/indexing/tooling/graph-node/). В graph-node можно использовать данные Субпотоков для создания объектов субграфа. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
### Дополнительные ресурсы -Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: +Перейдите по следующим ссылкам, чтобы ознакомиться с руководствами по использованию инструментов для генерации кода и быстро создать свой первый проект от начала до конца: - [Solana](/substreams/developing/solana/transactions/) - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/ru/sps/sps-faq.mdx b/website/src/pages/ru/sps/sps-faq.mdx index fc2d9862921f..45edab5a3d00 100644 --- a/website/src/pages/ru/sps/sps-faq.mdx +++ b/website/src/pages/ru/sps/sps-faq.mdx @@ -1,6 +1,6 @@ --- -title: Substreams-Powered Subgraphs FAQ -sidebarTitle: FAQ +title: Часто задаваемые вопросы о Субграфах, работающих на основе Субпотоков +sidebarTitle: Часто задаваемые вопросы --- ## Что такое субпотоки? @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Что такое субграфы, работающие на основе Субпотоков? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Субграфы, работающие на основе Субпотоков](/sps/introduction/) объединяют мощь Субпотоков с возможностью запросов субграфов. При публикации субграфа, работающего на основе Субпотоков, данные, полученные в результате преобразований Субпотоков, могут [генерировать изменения объектов](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs), которые совместимы с объектами субграфа. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Чем субграфы, работающие на основе Субпотоков, отличаются от субграфов? +## How are Substreams-powered Subgraphs different from Subgraphs? -Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. +Субграфы состоят из источников данных, которые указывают он-чейн события и то, как эти события должны быть преобразованы с помощью обработчиков, написанных на AssemblyScript. Эти события обрабатываются последовательно, в зависимости от того, в каком порядке они происходят он-чейн. 
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +В отличие от этого, субграфы, работающие на основе Субпотоков, имеют один источник данных, который ссылается на пакет Субпотоков, обрабатываемый Graph Node. Субпотоки имеют доступ к дополнительным детализированным данным из он-чейна, в отличие от традиционных субграфов, а также могут массово использовать параллельную обработку, что значительно ускоряет время обработки. -## Каковы преимущества использования субграфов, работающих на основе Субпотоков? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. 
## В чем преимущества Субпотоков? @@ -35,7 +35,7 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que - Высокопроизводительное индексирование: индексирование на порядки быстрее благодаря крупномасштабным кластерам параллельных операций (как пример, BigQuery). -- Возможность загружать куда угодно: Загружайте Ваши данные в любое удобное для Вас место: PostgreSQL, MongoDB, Kafka, субграфы, плоские файлы, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Программируемость: Используйте код для настройки извлечения, выполнения агрегирования во время преобразования и моделирования выходных данных для нескольких приемников. @@ -63,34 +63,34 @@ Firehose, разработанный [StreamingFast](https://www.streamingfast.i - Использует плоские файлы: Данные блокчейна извлекаются в плоские файлы — самый дешевый и наиболее оптимизированный доступный вычислительный ресурс. -## Где разработчики могут получить доступ к дополнительной информации о субграфах работающих на основе Субпотоков и о Субпотоках? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? -The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. +Из [документации по Субпотокам](/substreams/introduction/) Вы узнаете, как создавать модули Субпотоков. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. [Новейший инструмент Substreams Codegen](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) позволит Вам загрузить проект Substreams без использования какого-либо кода. ## Какова роль модулей Rust в Субпотоках? 
-Модули Rust - это эквивалент мапперов AssemblyScript в субграфах. Они компилируются в WASM аналогичным образом, но модель программирования допускает параллельное выполнение. Они определяют, какие преобразования и агрегации необходимо применить к необработанным данным блокчейна. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. -See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. +Подробную информацию см. в [документации по модулям](https://docs.substreams.dev/reference-material/substreams-components/modules#modules). ## Что делает Субпотоки компонуемыми? При использовании Субпотоков компоновка происходит на уровне преобразования, что позволяет повторно использовать кэшированные модули. -Например, Алиса может создать ценовой модуль DEX, Боб может использовать его для создания агрегатора объемов для некоторых интересующих его токенов, а Лиза может объединить четыре отдельных ценовых модуля DEX, чтобы создать ценовой оракул. Один запрос Субпотоков упакует все эти отдельные модули, свяжет их вместе, чтобы предложить гораздо более уточненный поток данных. Затем этот поток может быть использован для заполнения субграфа и запрашиваться потребителями. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## Как Вы можете создать и развернуть субграф, работающий на основе Субпотоков? 
-After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). +После [определения](/sps/introduction/) субграфа, работающего на основе Субпотоков, Вы можете использовать Graph CLI для его развертывания в [Subgraph Studio](https://thegraph.com/studio/). -## Где я могу найти примеры Субпотоков и субграфов, работающих на основе Субпотоков? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -Вы можете посетить [этот репозиторий на Github](https://github.com/pinax-network/awesome-substreams), чтобы найти примеры Субпотоков и субграфов, работающих на основе Субпотоков. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Что означают Субпотоки и субграфы, работающие на основе Субпотоков, для сети The Graph? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Интеграция обещает множество преимуществ, включая чрезвычайно высокопроизводительную индексацию и большую компонуемость за счет использования модулей сообщества и развития на их основе. diff --git a/website/src/pages/ru/sps/triggers.mdx b/website/src/pages/ru/sps/triggers.mdx index d4f8ef896db2..3e047577c67a 100644 --- a/website/src/pages/ru/sps/triggers.mdx +++ b/website/src/pages/ru/sps/triggers.mdx @@ -1,18 +1,18 @@ --- -title: Substreams Triggers +title: Триггеры Субпотоков --- Use Custom Triggers and enable the full use GraphQL. ## Обзор -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. 
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +Следующий код демонстрирует, как определить функцию `handleTransactions` в обработчике субграфа. Эта функция принимает сырые байты Субпотоков в качестве параметра и декодирует их в объект `Transactions`. Для каждой транзакции создается новый объект субграфа. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -34,13 +34,13 @@ export function handleTransactions(bytes: Uint8Array): void { } ``` -Here's what you're seeing in the `mappings.ts` file: +Вот что Вы видите в файле `mappings.ts`: -1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object -2. Looping over the transactions -3. Create a new subgraph entity for every transaction +1. Байты, содержащие данные Субпотоков, декодируются в сгенерированный объект `Transactions`. Этот объект используется как любой другой объект на AssemblyScript +2. Итерация по транзакциям (процесс поочерёдного прохода по всем транзакциям для их анализа или обработки) +3. Создание нового объекта субграфа для каждой транзакции -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). 
+Чтобы ознакомиться с подробным примером субграфа на основе триггера, [ознакомьтесь с руководством](/sps/tutorial/). ### Дополнительные ресурсы diff --git a/website/src/pages/ru/sps/tutorial.mdx b/website/src/pages/ru/sps/tutorial.mdx index b9e55f8bc89f..bc75c6605e24 100644 --- a/website/src/pages/ru/sps/tutorial.mdx +++ b/website/src/pages/ru/sps/tutorial.mdx @@ -1,32 +1,32 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Руководство: Настройка Субграфа, работающего на основе Субпотоков в сети Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Начнем For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial) -### Prerequisites +### Предварительные требования -Before starting, make sure to: +Прежде чем начать, убедитесь, что: -- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. -- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. +- Завершили изучение [руководства по началу работы](https://github.com/streamingfast/substreams-starter), чтобы настроить свою среду разработки с использованием контейнера для разработки. +- Ознакомлены с The Graph и основными концепциями блокчейна, такими как транзакции и Protobuf. -### Step 1: Initialize Your Project +### Шаг 1: Инициализация Вашего проекта -1. Open your Dev Container and run the following command to initialize your project: +1. Откройте свой контейнер для разработки и выполните следующую команду для инициализации проекта: ```bash substreams init ``` -2. Select the "minimal" project option. +2. Выберите вариант проекта "minimal". -3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: +3. Замените содержимое сгенерированного файла `substreams.yaml` следующей конфигурацией, которая фильтрует транзакции для аккаунта Orca в идентификаторе программы токенов SPL: ```yaml specVersion: v0.1.0 @@ -34,12 +34,12 @@ package: name: my_project_sol version: v0.1.0 -imports: # Pass your spkg of interest +imports: # Укажите нужный Вам spkg solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg modules: - name: map_spl_transfers - use: solana:map_block # Select corresponding modules available within your spkg + use: solana:map_block # Выберите соответствующие модули, доступные в Вашем spkg initialBlock: 260000082 - name: map_transactions_by_programid @@ -47,20 +47,19 @@ modules: network: solana-mainnet-beta -params: # Modify the param fields to meet your needs - # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA - map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +params: # Измените параметры в соответствии со своими требованиями + # Для program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA: map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE ``` -### Step 2: Generate the Subgraph Manifest +### Шаг 2: Создание манифеста субграфа -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +После инициализации проекта создайте манифест субграфа, выполнив следующую команду в Dev Container: ```bash substreams codegen subgraph ``` -You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: +Вы создадите манифест `subgraph.yaml`, который импортирует пакет Субпотоков в качестве источника данных: ```yaml --- @@ -70,20 +69,20 @@ dataSources: network: solana-mainnet-beta source: 
package: - moduleName: map_spl_transfers # Module defined in the substreams.yaml + moduleName: map_spl_transfers # Модуль, определенный в substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers ``` -### Step 3: Define Entities in `schema.graphql` +### Шаг 3: Определите объекты в `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Определите поля, которые хотите сохранить в объектах субграфа, обновив файл `schema.graphql`. -Here is an example: +Пример: ```graphql type MyTransfer @entity { @@ -95,13 +94,13 @@ type MyTransfer @entity { } ``` -This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. +Эта схема определяет объект `MyTransfer` с такими полями, как `id`, `amount`, `source`, `designation` и `signers`. -### Step 4: Handle Substreams Data in `mappings.ts` +### Шаг 4: Обработка данных Субпотоков в `mappings.ts` With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -132,19 +131,19 @@ export function handleTriggers(bytes: Uint8Array): void { } ``` -### Step 5: Generate Protobuf Files +### Шаг 5: Сгенерируйте файлы Protobuf -To generate Protobuf objects in AssemblyScript, run the following command: +Чтобы сгенерировать объекты Protobuf в AssemblyScript, выполните следующую команду: ```bash npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +Эта команда преобразует определения Protobuf в AssemblyScript, позволяя использовать их в обработчике субграфа. ### Заключение -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Поздравляем! Вы успешно настроили субграф на основе триггеров с поддержкой Субпотоков для токена Solana SPL. Следующий шаг Вы можете сделать, настроив схему, мэппинги и модули в соответствии со своим конкретным вариантом использования. ### Video Tutorial @@ -152,4 +151,4 @@ Congratulations! You've successfully set up a trigger-based Substreams-powered s ### Дополнительные ресурсы -For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). +Для более продвинутой настройки и оптимизации ознакомьтесь с официальной [документацией по Субпотокам](https://substreams.streamingfast.io/tutorials/solana). 
diff --git a/website/src/pages/ru/subgraphs/_meta-titles.json b/website/src/pages/ru/subgraphs/_meta-titles.json index 0556abfc236c..935e730c6eb3 100644 --- a/website/src/pages/ru/subgraphs/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { - "querying": "Querying", - "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "querying": "Запрос", + "developing": "Разработка", + "guides": "How-to Guides", + "best-practices": "Лучшие практики" } diff --git a/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx index f44611137483..042a1c001522 100644 --- a/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Лучшая практика субграфа 4 — увеличение скорости индексирования за счет избегания eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## Краткое содержание -`eth_calls` — это вызовы, которые могут выполняться из субграфа к ноде Ethereum. Эти вызовы требуют значительного количества времени для возврата данных, что замедляет индексирование. По возможности, проектируйте смарт-контракты так, чтобы они отправляли все необходимые Вам данные, чтобы избежать использования `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Почему избегание `eth_calls` является наилучшей практикой -Субграфы оптимизированы для индексирования данных событий, которые исходят из смарт-контрактов. 
Субграф также может индексировать данные из `eth_call`, однако это значительно замедляет процесс индексирования, так как `eth_calls` требуют выполнения внешних вызовов к смарт-контрактам. Скорость реагирования этих вызовов зависит не от субграфа, а от подключения и скорости ответа ноды Ethereum, к которой отправлен запрос. Минимизируя или полностью исключая `eth_calls` в наших субграфах, мы можем значительно повысить скорость индексирования. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### Что из себя представляет eth_call? -`eth_calls` часто необходимы, когда данные, требуемые для субграфа, недоступны через сгенерированные события. Например, рассмотрим ситуацию, когда субграфу нужно определить, являются ли токены ERC20 частью определенного пула, но контракт генерирует только базовое событие `Transfer` и не создает событие, содержащее нужные нам данные: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -Это функционально, однако не идеально, так как замедляет индексирование нашего субграфа. 
+This is functional, however is not ideal as it slows down our Subgraph’s indexing. ## Как устранить `eth_calls` @@ -54,7 +54,7 @@ export function handleTransfer(event: Transfer): void { event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -С этим обновлением субграф может напрямую индексировать необходимые данные без внешних вызовов: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,22 +96,22 @@ calls: Обработчик сам получает результат этого `eth_call`, как и в предыдущем разделе, привязываясь к контракту и выполняя вызов. `graph-node` кеширует результаты объявленных `eth_calls` в памяти, а вызов из обработчика будет извлекать результат из этого кеша в памяти, вместо того чтобы выполнять фактический RPC-вызов. -Примечание: Объявленные `eth_calls` могут быть выполнены только в субграфах с версией спецификации >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Заключение -Вы можете значительно улучшить производительность индексирования, минимизируя или исключая `eth_calls` в своих субграфах. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx index 3e918462a606..da809815ce60 100644 --- a/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Лучшая практика для субграфов 2 — улучшение индексирования и отклика на запросы с помощью @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## Краткое содержание -Массивы в Вашей схеме могут значительно замедлить работу субграфа, когда их размер превышает тысячи элементов. Если возможно, следует использовать директиву @derivedFrom при работе с массивами, так как она предотвращает образование больших массивов, упрощает обработчики и уменьшает размер отдельных элементов, что значительно улучшает скорость индексирования и производительность запросов. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. 
If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## Как использовать директиву @derivedFrom @@ -15,7 +15,7 @@ sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' comments: [Comment!]! @derivedFrom(field: "post") ``` -@derivedFrom создает эффективные отношения "один ко многим", позволяя объекту динамически ассоциироваться с несколькими связанными объектами на основе поля в связанном объекте. Этот подход исключает необходимость хранения продублированных данных с обеих сторон отношений, что делает субграф более эффективным. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Пример использования @derivedFrom @@ -60,30 +60,30 @@ type Comment @entity { Именно при добавлении директивы `@derivedFrom`, эта схема будет хранить "Comments" только на стороне отношения "Comments", а не на стороне отношения "Post". Массивы хранятся в отдельных строках, что позволяет им значительно расширяться. Это может привести к очень большим объёмам, поскольку их рост не ограничен. -Это не только сделает наш субграф более эффективным, но и откроет три возможности: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. Мы можем запрашивать `Post` и видеть все его комментарии. 2. Мы можем выполнить обратный поиск и запросить любой `Comment`, чтобы увидеть, от какого поста он пришел. -3. 
Мы можем использовать [Загрузчики производных полей](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities), чтобы получить возможность напрямую обращаться и манипулировать данными из виртуальных отношений в наших мэппингах субграфа. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Заключение -Используйте директиву `@derivedFrom` в субграфах для эффективного управления динамически растущими массивами, улучшая эффективность индексирования и извлечения данных. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. Для более подробного объяснения стратегий, которые помогут избежать использования больших массивов, ознакомьтесь с блогом Кевина Джонса: [Лучшие практики разработки субграфов: как избежать больших массивов](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. 
[Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx index ebb6b49ea9bf..b169115f012c 100644 --- a/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Лучшая практика субграфов 6 — используйте графтинг для быстрого развертывания исправлений -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## Краткое содержание -Графтинг — это мощная функция в разработке субграфов, которая позволяет создавать и разворачивать новые субграфы, повторно используя индексированные данные из существующих. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Обзор -Эта функция позволяет быстро развертывать исправления для критических ошибок, устраняя необходимость повторного индексирования всего субграфа с нуля. Сохраняя исторические данные, графтинг минимизирует время простоя и обеспечивает непрерывность работы сервисов данных. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. 
By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Преимущества графтинга для оперативных исправлений 1. **Быстрое развертывание** - - **Минимизация времени простоя**: когда субграф сталкивается с критической ошибкой и перестает индексировать данные, графтинг позволяет немедленно развернуть исправление без необходимости ждать повторного индексирования. - - **Немедленное восстановление**: новый субграф продолжается с последнего индексированного блока, обеспечивая бесперебойную работу служб передачи данных. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Сохранение данных** - - **Повторное использование исторических данных**: графтинг копирует существующие данные из базового субграфа, что позволяет сохранить важные исторические записи. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Консистентность**: поддерживает непрерывность данных, что имеет решающее значение для приложений, полагающихся на согласованные исторические данные. 3. **Эффективность** @@ -31,38 +31,38 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' 1. **Первоначальное развертывание без графтинга** - - **Начните с чистого листа**: Всегда разворчивайте первоначальный субграф без использования графтинга, чтобы убедиться в его стабильности и корректной работе. - - **Тщательно тестируйте**: проверьте производительность субграфа, чтобы свести к минимуму необходимость в будущих исправлениях. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. 
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Реализация исправления с использованием графтинга** - **Определите проблему**: при возникновении критической ошибки определите номер блока последнего успешно проиндексированного события. - - **Создайте новый субграф**: разработайте новый субграф, включающий оперативное исправление. - - **Настройте графтинг**: используйте графтинг для копирования данных до определенного номера блока из неисправного субграфа. - - **Быстро разверните**: опубликуйте графтинговый (перенесенный) субграф, чтобы как можно скорее восстановить работу сервиса. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Действия после оперативного исправления** - - **Мониторинг производительности**: убедитесь, что графтинговый (перенесенный) субграф индексируется правильно и исправление решает проблему. - - **Публикация без графтинга**: как только субграф стабилизируется, разверните его новую версию без использования графтинга для долгосрочного обслуживания. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Примечание: Не рекомендуется использовать графтинг бесконечно, так как это может усложнить будущие обновления и обслуживание. - - **Обновите ссылки**: перенаправьте все сервисы или приложения на новый субграф без использования графтинга. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Важные замечания** - **Тщательный выбор блока**: тщательно выбирайте номер блока графтинга, чтобы избежать потери данных. 
- **Совет**: используйте номер блока последнего корректно обработанного события. - - **Используйте идентификатор развертывания**: убедитесь, что Вы ссылаетесь на идентификатор развертывания базового субграфа, а не на идентификатор субграфа. - - **Примечание**: идентификатор развертывания — это уникальный идентификатор для конкретного развертывания субграфа. - - **Объявление функции**: не забудьте указать использование графтинга в манифесте субграфа в разделе функций. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Пример: развертывание оперативного исправления с использованием графтинга -Предположим, у вас есть субграф, отслеживающий смарт-контракт, который перестал индексироваться из-за критической ошибки. Вот как Вы можете использовать графтинг для развертывания оперативного исправления. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Манифест неудачного субграфа (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' 2. 
**Манифест нового субграфа с графтингом (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -100,10 +100,10 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' source: address: '0xNewContractAddress' abi: Lock - startBlock: 6000001 # Блок после последнего индексированного блока + startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' features: - grafting graft: - base: QmBaseDeploymentID # ID развертывания неудачного субграфа - block: 6000000 # Последний успешно индексированный блок + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph + block: 6000000 # Last successfully indexed block ``` **Пояснение:** -- **Обновление источника данных**: новый субграф указывает на 0xNewContractAddress, который может быть исправленной версией смарт-контракта. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Начальный блок**: устанавливается на один блок после последнего успешно индексированного блока, чтобы избежать повторной обработки ошибки. - **Конфигурация графтинга**: - - **base**: идентификатор развертывания неудачного субграфа. + - **base**: Deployment ID of the failed Subgraph. - **block**: номер блока, с которого должен начаться графтинг. 3. **Шаги развертывания** @@ -135,10 +135,10 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' - **Отредактируйте манифест**: как показано выше, обновите файл `subgraph.yaml` с конфигурациями для графтинга. - **Разверните субграф**: - Аутентифицируйтесь с помощью Graph CLI. - - Разверните новый субграф используя `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. 
**После развертывания** - - **Проверьте индексирование**: убедитесь, что субграф корректно индексирует данные с точки графтинга. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Следите за данными**: убедитесь, что новые данные индексируются и что исправление работает эффективно. - **Запланируйте повторную публикацию**: запланируйте развертывание версии без графтинга для обеспечения долгосрочной стабильности. @@ -146,9 +146,9 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' Хотя графтинг является мощным инструментом для быстрого развертывания исправлений, существуют конкретные сценарии, когда его следует избегать для поддержания целостности данных и обеспечения оптимальной производительности. -- **Несовместимые изменения схемы**: если ваше исправление требует изменения типа существующих полей или удаления полей из схемы, графтинг не подходит. Графтинг предусматривает, что схема нового субграфа будет совместима со схемой базового субграфа. Несовместимые изменения могут привести к несоответствиям данных и ошибкам, так как существующие данные не будут соответствовать новой схеме. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Значительные изменения логики мэппинга**: когда исправление включает существенные изменения в вашей логике мэппинга, такие как изменение обработки событий или изменение функций обработчиков, графтинг может работать некорректно. Новая логика может быть несовместима с данными, обработанными по старой логике, что приведет к некорректным данным или сбоям в индексировании. 
-- **Развертывания в сеть The Graph**: графтинг не рекомендуется для субграфов, предназначенных для децентрализованной сети The Graph (майннет). Это может усложнить индексирование и не поддерживаться всеми Индексаторами, что может привести к непредсказуемому поведению или увеличению затрат. Для развертываний в майннете безопаснее перезапустить индексирование субграфа с нуля, чтобы обеспечить полную совместимость и надежность. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Управление рисками @@ -157,31 +157,31 @@ sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' ## Заключение -Графтинг — это эффективная стратегия для развертывания оперативных исправлений в разработке субграфов, позволяющая Вам: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Быстро восстанавливаться** после критических ошибок без повторного индексирования. - **Сохранять исторические данные**, поддерживая непрерывности работы для приложений и пользователей. - **Обеспечить доступность сервиса**, минимизируя время простоя при критических исправлениях. -Однако важно использовать графтинг разумно и следовать лучшим практикам для снижения рисков. После стабилизации своего субграфа с помощью оперативных исправлений, спланируйте развертывание версии без графтинга для обеспечения долгосрочного обслуживания. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
## Дополнительные ресурсы - **[Документация графтинга](/subgraphs/cookbook/grafting/)**: замените контракт и сохраните его историю с помощью графтинга - **[Понимание идентификаторов развертывания](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: ознакомьтесь с разницей между идентификатором развертывания и идентификатором субграфа. -Включив графтинг в процесс разработки субграфов, Вы сможете быстрее реагировать на проблемы, обеспечивая стабильность и надежность Ваших сервисов данных. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. 
[Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 194240e032c3..78e81a267bc5 100644 --- a/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Лучшие практики для субграфов №3 – Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## Краткое содержание @@ -24,7 +24,7 @@ type Transfer @entity(immutable: true) { Структуры неизменяемых объектов не будут изменяться в будущем. Идеальным кандидатом для превращения в неизменяемый объект может быть объект, который напрямую фиксирует данные событий в блокчейне, например, событие `Transfer`, записываемое как объект `Transfer`. -### Под капотом +### Как это устроено Изменяемые объекты имеют «диапазон блоков», указывающий их актуальность. Обновление таких объектов требует от graph node корректировки диапазона блоков для предыдущих версий, что увеличивает нагрузку на базу данных. Запросы также должны фильтровать данные, чтобы находить только актуальные объекты. Неизменяемые объекты работают быстрее, поскольку все они актуальны, и, так как они не изменяются, не требуется никаких проверок или обновлений при записи, а также фильтрации во время выполнения запросов. @@ -50,12 +50,12 @@ type Transfer @entity(immutable: true) { ### Причины, по которым не стоит использовать Bytes как идентификаторы 1. 
Если идентификаторы объектов должны быть читаемыми для человека, например, автоинкрементированные числовые идентификаторы или читаемые строки, то не следует использовать тип Bytes для идентификаторов. -2. Если данные субграфа интегрируются с другой моделью данных, которая не использует тип Bytes для идентификаторов, то не следует использовать Bytes для идентификаторов в субграфе. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Если улучшения производительности индексирования и запросов не являются приоритетом. ### Конкатенация (объединение) с использованием Bytes как идентификаторов -Это распространенная практика во многих субграфах — использовать конкатенацию строк для объединения двух свойств события в единый идентификатор, например, используя `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Однако поскольку это возвращает строку, такой подход значительно ухудшает производительность индексирования и запросов в субграфах. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Вместо этого следует использовать метод `concatI32()` для конкатенации свойств события. Эта стратегия приводит к созданию идентификатора типа `Bytes`, который гораздо более производителен. @@ -172,20 +172,20 @@ type Transfer @entity { ## Заключение -Использование как неизменяемых объектов, так и Bytes как идентификаторов значительно улучшает эффективность субграфов. В частности, тесты показали увеличение производительности запросов до 28% и ускорение индексирования до 48%. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. 
Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Читайте больше о применении неизменяемых объектов и Bytes как идентификаторов в этом блоге от Дэвида Луттеркорта, инженера-программиста в Edge & Node: [Два простых способа улучшить производительность субграфов](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/pruning.mdx b/website/src/pages/ru/subgraphs/best-practices/pruning.mdx index f99ae4861ec4..0903a26f9da7 100644 --- a/website/src/pages/ru/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Лучшая практика субграфа 1 — Улучшение скорости запросов с помощью сокращения (Pruning) субграфа -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## Краткое содержание -[Pruning](/developing/creating-a-subgraph/#prune) удаляет архивные элементы из базы данных субграфа до заданного блока, а удаление неиспользуемых элементов из базы данных субграфа улучшает производительность запросов, зачастую значительно. Использование `indexerHints` — это простой способ выполнить сокращение субграфа. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## Как сократить субграф с помощью `indexerHints` @@ -13,14 +13,14 @@ sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' `indexerHints` имеет три опции `prune`: -- `prune: auto`: Сохраняет минимально необходимую историю, установленную Индексатором, оптимизируя производительность запросов. Это рекомендуется как основная настройка и является настройкой по умолчанию для всех субграфов, созданных с помощью `graph-cli` версии >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. 
- `prune: <количество блоков>`: Устанавливает пользовательский предел на количество исторических блоков, которые следует сохранить. - `prune: never`: без сокращения исторических данных; сохраняет всю историю и является значением по умолчанию, если раздел `indexerHints` отсутствует. `prune: never` следует выбрать, если требуются [Запросы на путешествия во времени](/subgraphs/querying/graphql-api/#time-travel-queries). -Мы можем добавить `indexerHints` в наши субграфы, обновив наш файл `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,18 +39,18 @@ dataSources: ## Заключение -Сокращение с использованием `indexerHints` — это наилучшая практика при разработке субграфов, обеспечивающая значительное улучшение производительности запросов. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx index 5520d80a970a..3d5de9e6d731 100644 --- a/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/ru/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Лучшие практики субграфов №5 — Упрощение и оптимизация с помощью временных рядов и агрегаций -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Тайм-серии и агрегации --- ## Краткое содержание -Использование новой функции временных рядов и агрегаций в субграфах может значительно улучшить как скорость индексирования, так и производительность запросов. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Обзор @@ -36,6 +36,10 @@ sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' ## Как внедрить временные ряды и агрегации +### Предварительные требования + +You need `spec version 1.1.0` for this feature. + ### Определение объектов временных рядов Объект временного ряда представляет собой необработанные данные, собранные с течением времени. Он определяется с помощью аннотации `@entity(timeseries: true)`. Ключевые требования: @@ -51,7 +55,7 @@ sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! 
- price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ type Data @entity(timeseries: true) { type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -В этом примере статистика агрегирует поле цены из данных за часовые и дневные интервалы, вычисляя сумму. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Запрос агрегированных данных @@ -172,24 +176,24 @@ type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { ### Заключение -Внедрение временных рядов и агрегаций в субграфы является лучшей практикой для проектов, работающих с данными, зависящими от времени. Этот подход: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Улучшает производительность: ускоряет индексирование и запросы, снижая нагрузку на обработку данных. - Упрощает разработку: устраняет необходимость в ручном написании логики агрегации в мэппингах. - Эффективно масштабируется: обрабатывает большие объемы данных, не ухудшая скорость и отзывчивость. -Применяя этот шаблон, разработчики могут создавать более эффективные и масштабируемые субграфы, обеспечивая более быстрый и надежный доступ к данным для конечных пользователей. Чтобы узнать больше о внедрении временных рядов и агрегаций, обратитесь к [Руководству по временным рядам и агрегациям](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) и рассмотрите возможность использования этой функции в своих субграфах. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Лучшие практики для субграфов 1-6 -1. [Увеличение скорости запросов с помощью обрезки субграфов](/subgraphs/best-practices/pruning/) +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) -2. [Улучшение индексирования и отклика запросов с использованием @derivedFrom](/subgraphs/best-practices/derivedfrom/) +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) -3. [Улучшение индексирования и производительности запросов с использованием неизменяемых объектов и байтов в качестве идентификаторов](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) -4. [Увеличение скорости индексирования путем избегания `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) -5. [Упрощение и оптимизация с помощью временных рядов и агрегаций](/subgraphs/best-practices/timeseries/) +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) -6. [Использование переноса (графтинга) для быстрого развертывания исправлений](/subgraphs/best-practices/grafting-hotfix/) +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/ru/subgraphs/billing.mdx b/website/src/pages/ru/subgraphs/billing.mdx index 0a7daa3442d0..5f345d114a67 100644 --- a/website/src/pages/ru/subgraphs/billing.mdx +++ b/website/src/pages/ru/subgraphs/billing.mdx @@ -2,20 +2,22 @@ title: Выставление счетов --- -## Querying Plans +## Запрос планов -Существует два плана для выполнения запросов к субграфам в The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. +- **Бесплатный план**: Бесплатный план включает 100,000 бесплатных запросов в месяц с полным доступом к тестовой среде Subgraph Studio. Этот план предназначен для любителей, участников хакатонов и разработчиков небольших проектов, которые хотят попробовать The Graph перед масштабированием своего децентрализованного приложения. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **План роста**: План роста включает все возможности бесплатного плана, но все запросы, превышающие 100,000 в месяц, требуют оплаты в GRT или кредитной картой. Этот план достаточно гибок, чтобы поддерживать команды, которые уже запустили децентрализованные приложения для различных сценариев использования. + +Learn more about pricing [here](https://thegraph.com/studio-pricing/). ## Оплата запросов с помощью кредитной карты - Чтобы настроить оплату с помощью кредитных/дебетовых карт, пользователи должны зайти в Subgraph Studio (https://thegraph.com/studio/) - 1. 
Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). + 1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Connect Wallet» в правом верхнем углу страницы. Вы будете перенаправлены на страницу выбора кошелька. Выберите свой кошелек и нажмите «Connect». 3. Выберите «Обновление плана», если Вы переходите с бесплатного плана, или «Управление планом», если Вы уже ранее добавили GRT на свой баланс для оплаты. Далее Вы можете оценить количество запросов, чтобы получить примерную стоимость, но это не обязательный шаг. 4. Чтобы выбрать оплату кредитной картой, выберите «Credit card» как способ оплаты и заполните информацию о своей карте. Те, кто ранее использовал Stripe, могут воспользоваться функцией Link для автоматического заполнения данных. @@ -45,17 +47,17 @@ title: Выставление счетов - В качестве альтернативы, Вы можете приобрести GRT напрямую на Arbitrum через децентрализованную биржу. -> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt). +> Этот раздел написан с учетом того, что у Вас уже есть GRT в кошельке и Вы находитесь в сети Arbitrum. Если у Вас нет GRT, Вы можете узнать, как его получить, [здесь](#getting-grt). После переноса GRT Вы можете добавить его на баланс для оплаты. ### Добавление токенов GRT с помощью кошелька -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Connect Wallet» в правом верхнем углу страницы. Вы будете перенаправлены на страницу выбора кошелька. Выберите свой кошелек и нажмите «Connect». 3. Нажмите кнопку «Управление» в правом верхнем углу. Новые пользователи увидят опцию «Обновить до плана Роста», а те, кто пользовался ранее — «Пополнение с кошелька». 4. 
Используйте ползунок, чтобы оценить количество запросов, которое Вы планируете выполнять ежемесячно. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Рекомендации по количеству запросов, которые Вы можете использовать, можно найти на нашей странице **Часто задаваемые вопросы**. 5. Выберите «Криптовалюта». В настоящее время GRT — единственная криптовалюта, принимаемая в The Graph Network. 6. Выберите количество месяцев, за которые Вы хотели бы внести предоплату. - Предоплата не обязывает Вас к дальнейшему использованию. С Вас будет взиматься плата только за то, что Вы используете, и Вы сможете вывести свой баланс в любое время. @@ -68,7 +70,7 @@ title: Выставление счетов ### Вывод токенов GRT с помощью кошелька -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). 2. Нажмите на кнопку «Подключить кошелек» в правом верхнем углу страницы. Выберите свой кошелек и нажмите «Подключить». 3. Нажмите кнопку «Управление» в правом верхнем углу страницы. Выберите «Вывести GRT». Появится боковая панель. 4. Введите сумму GRT, которую хотите вывести. @@ -77,11 +79,11 @@ title: Выставление счетов ### Добавление токенов GRT с помощью кошелька с мультиподписью -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). -2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +1. Перейдите на [страницу оплаты Subgraph Studio](https://thegraph.com/studio/subgraphs/billing/). +2. Нажмите на кнопку «Подключить кошелек» в правом верхнем углу страницы. Выберите свой кошелек и нажмите «Подключить». 
Если Вы используете [Gnosis-Safe](https://gnosis-safe.io/), Вы сможете подключить как стандартный кошелёк, так и кошелёк с мультиподписью. Затем подпишите соответствующее сообщение. За это Вам не придётся платить комиссию. 3. Нажмите кнопку «Управление» в правом верхнем углу. Новые пользователи увидят опцию «Обновить до плана Роста», а те, кто пользовался ранее — «Пополнение с кошелька». 4. Используйте ползунок, чтобы оценить количество запросов, которое Вы планируете выполнять ежемесячно. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. + - Рекомендации по количеству запросов, которые Вы можете использовать, можно найти на нашей странице **Часто задаваемые вопросы**. 5. Выберите «Криптовалюта». В настоящее время GRT — единственная криптовалюта, принимаемая в The Graph Network. 6. Выберите количество месяцев, за которые Вы хотели бы внести предоплату. - Предоплата не обязывает Вас к дальнейшему использованию. С Вас будет взиматься плата только за то, что Вы используете, и Вы сможете вывести свой баланс в любое время. @@ -99,7 +101,7 @@ title: Выставление счетов Далее будет представлено пошаговое руководство по приобретению токена GRT на Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Перейдите на [Coinbase](https://www.coinbase.com/) и создайте учетную запись. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH нажав на кнопку "Купить/Продать" в правом верхнем углу страницы. 4. Выберите валюту, которую хотите купить. Выберите GRT. @@ -107,19 +109,19 @@ title: Выставление счетов 6. Выберите количество токенов GRT, которое хотите приобрести. 7. Проверьте все данные о приобретении. 
Нажмите «Купить GRT». 8. Подтвердите покупку. Вы успешно приобрели токены GRT. -9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). +9. Вы можете перевести GRT со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы перевести токены GRT на свой кошелек, нажмите кнопку «Учетные записи» в правом верхнем углу страницы. - Нажмите на кнопку «Отправить» рядом с учетной записью GRT. - Введите сумму GRT, которую хотите отправить, и адрес кошелька, на который хотите её отправить. - Нажмите «Продолжить» и подтвердите транзакцию. -Обратите внимание, что при больших суммах покупки Coinbase может потребовать от Вас подождать 7-10 дней, прежде чем переведет полную сумму на кошелек. -You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Вы можете узнать больше о том, как получить GRT на Coinbase [здесь](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance Далее будет представлено пошаговое руководство по приобретению токена GRT на Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Перейдите на [Binance](https://www.binance.com/en) и создайте аккаунт. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены GRT. Вы можете сделать это, нажав на кнопку «Купить сейчас» на баннере главной страницы. 4. Вы попадете на страницу, где сможете выбрать валюту, которую хотите приобрести. Выберите GRT.
@@ -127,27 +129,27 @@ You can learn more about getting GRT on Coinbase [here](https://help.coinbase.co 6. Выберите количество токенов GRT, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить GRT». 8. Подтвердите покупку, и Вы сможете увидеть GRT в своем кошельке Binance Spot. -9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. +9. Вы можете вывести GRT со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). + - [Чтобы вывести](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) GRT на свой кошелек, добавьте адрес своего кошелька в список адресов для вывода. - Нажмите на кнопку «кошелек», нажмите «вывести» и выберите GRT. - Введите сумму GRT, которую хотите отправить, и адрес кошелька из белого списка, на который Вы хотите её отправить. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Вы можете узнать больше о том, как получить GRT на Binance [здесь](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ### Uniswap Так Вы можете приобрести GRT на Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. +1. Перейдите на [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) и подключите свой кошелек. 2. Выберите токен, который хотите обменять. Выберите ETH. 3. Выберите токен, на который хотите произвести обмен. Выберите GRT. - - Make sure you're swapping for the correct token. 
The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + - Убедитесь, что Вы обмениваете на правильный токен. Адрес смарт-контракта GRT в сети Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) 4. Введите количество ETH, которое хотите обменять. 5. Нажмите «Обменять». 6. Подтвердите транзакцию в своем кошельке и дождитесь ее обработки. -You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). +Вы можете узнать больше о том, как получить GRT на Uniswap [здесь](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). ## Получение Ether @@ -157,7 +159,7 @@ You can learn more about getting GRT on Uniswap [here](https://support.uniswap.o Далее будет представлено пошаговое руководство по приобретению токена ETH на Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. +1. Перейдите на [Coinbase](https://www.coinbase.com/) и создайте учетную запись. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. 3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH нажав на кнопку "Купить/Продать" в правом верхнем углу страницы. 4. Выберите валюту, которую хотите купить. Выберите ETH. @@ -165,35 +167,35 @@ You can learn more about getting GRT on Uniswap [here](https://support.uniswap.o 6. Введите количество ETH, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить ETH». 8. Подтвердите покупку. Вы успешно приобрели токены ETH. -9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/).
+9. Вы можете перевести ETH со своего аккаунта Coinbase на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы перевести ETH на свой кошелек, нажмите кнопку «Учетные записи» в правом верхнем углу страницы. - Нажмите на кнопку «Отправить» рядом с учетной записью ETH. - Введите сумму ETH, которую хотите отправить, и адрес кошелька, на который хотите её отправить. - Убедитесь, что делаете перевод на адрес своего Ethereum-кошелька в сети Arbitrum One. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). +Вы можете узнать больше о том, как получить ETH на Coinbase [здесь](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). ### Binance -This will be a step by step guide for purchasing ETH on Binance. +Далее будет представлено пошаговое руководство по приобретению токена ETH на Binance. -1. Go to [Binance](https://www.binance.com/en) and create an account. +1. Перейдите на [Binance](https://www.binance.com/en) и создайте аккаунт. 2. После того как Вы создали учетную запись, Вам нужно будет подтвердить свою личность с помощью процесса, известного как KYC (или Know Your Customer). Это стандартная процедура для всех централизованных или кастодиальных криптобирж. -3. Once you have verified your identity, purchase ETH by clicking on the "Buy Now" button on the homepage banner. +3. После того как Вы подтвердили свою личность, Вы можете приобрести токены ETH. Вы можете сделать это, нажав на кнопку «Купить сейчас» на баннере главной страницы. 4. Выберите валюту, которую хотите купить. Выберите ETH. 5. Выберите предпочитаемый способ оплаты. 6. Введите количество ETH, которое хотите приобрести. 7. Проверьте все данные о приобретении и нажмите «Купить ETH». -8.
Confirm your purchase and you will see your ETH in your Binance Spot Wallet. -9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). +8. Подтвердите покупку, и Ваш ETH появится в Вашем спотовом кошельке Binance. +9. Вы можете вывести ETH со своего аккаунта на кошелек, например, [MetaMask](https://metamask.io/). - Чтобы вывести ETH на свой кошелек, добавьте адрес кошелька в белый список вывода. - Нажмите на кнопку «кошелек», нажмите «вывести» и выберите ETH. - Введите сумму ETH, которую хотите отправить, и адрес кошелька из белого списка, на который Вы хотите её отправить. - Убедитесь, что делаете перевод на адрес своего Ethereum-кошелька в сети Arbitrum One. - Нажмите «Продолжить» и подтвердите транзакцию. -You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). +Вы можете узнать больше о том, как получить ETH на Binance [здесь](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). ## Часто задаваемые вопросы по выставлению счетов @@ -203,11 +205,11 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e Мы рекомендуем переоценить количество запросов, чтобы Вам не приходилось часто пополнять баланс. Хорошей оценкой для небольших и средних приложений будет начать с 1–2 млн запросов в месяц и внимательно следить за использованием в первые недели. Для более крупных приложений хорошей оценкой будет использовать количество ежедневных посещений Вашего сайта, умноженное на количество запросов, которые делает Ваша самая активная страница при открытии. -Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage.
+Конечно, как новые, так и существующие пользователи могут обратиться к команде бизнес-развития Edge & Node для консультации и получения информации о планируемом использовании. ### Могу ли я вывести GRT со своего платежного баланса? -Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). +Да, Вы всегда можете вывести GRT, которые еще не были использованы для запросов, со своего платежного баланса. Контракт для выставления счетов предназначен только для переноса GRT с основной сети Ethereum в сеть Arbitrum. Если Вы хотите перевести свои GRT с Arbitrum обратно на основную сеть Ethereum, Вам нужно будет использовать [Мост Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161). ### Что произойдет, когда мой платежный баланс закончится? Получу ли я предупреждение? diff --git a/website/src/pages/ru/subgraphs/cookbook/arweave.mdx b/website/src/pages/ru/subgraphs/cookbook/arweave.mdx index a7f24e1bf79e..b4b73dd0401b 100644 --- a/website/src/pages/ru/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Создание Субграфов на Arweave --- -> Поддержка Arweave в Graph Node и Subgraph Studio находится на стадии бета-тестирования. Если у Вас есть вопросы о создании субграфов Arweave, свяжитесь с нами в [Discord](https://discord.gg/graphprotocol)! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! Из этого руководства Вы узнаете, как создавать и развертывать субграфы для индексации блокчейна Arweave. 
@@ -25,12 +25,12 @@ The Graph позволяет создавать собственные откр Чтобы иметь возможность создавать и развертывать Субграфы на Arweave, Вам понадобятся два пакета: -1. `@graphprotocol/graph-cli` версии выше 0.30.2 — это инструмент командной строки для создания и развертывания субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-cli), чтобы скачать с помощью `npm`. -2. `@graphprotocol/graph-ts` версии выше 0.27.0 — это библиотека типов, специфичных для субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-ts), чтобы скачать с помощью `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Составляющие Субграфов -Существует 3 компонента субграфа: +There are three components of a Subgraph: ### 1. Манифест - `subgraph.yaml` @@ -40,49 +40,49 @@ The Graph позволяет создавать собственные откр Здесь Вы определяете, какие данные хотите иметь возможность запрашивать после индексации своего субграфа с помощью GraphQL. На самом деле это похоже на модель для API, где модель определяет структуру тела запроса. -Требования для субграфов Arweave описаны в [имеющейся документации](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. Мэппинги на AssemblyScript - `mapping.ts` Это логика, которая определяет, как данные должны извлекаться и храниться, когда кто-то взаимодействует с источниками данных, которые Вы отслеживаете. Данные переводятся и сохраняются в соответствии с указанной Вами схемой.
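The extract-and-store flow a mapping performs can be sketched conceptually. Note this is plain TypeScript with invented shapes, purely illustrative — a real mapping is AssemblyScript using `@graphprotocol/graph-ts` entities and `entity.save()`, not the code below:

```typescript
// Conceptual sketch of a mapping handler: extract fields from a chain
// object and persist an entity keyed by its ID. The interfaces and the
// Map-based "store" are stand-ins, not the real graph-ts API.

interface ArweaveBlock {
  indepHash: string
  height: number
  timestamp: number
}

interface BlockEntity {
  id: string
  height: number
  timestamp: number
}

// Stand-in for the Graph Node entity store.
const store = new Map<string, BlockEntity>()

function handleBlock(block: ArweaveBlock): void {
  // The entity ID is derived from the block hash, matching the schema.
  const entity: BlockEntity = {
    id: block.indepHash,
    height: block.height,
    timestamp: block.timestamp,
  }
  store.set(entity.id, entity) // the equivalent of entity.save()
}
```

The shape of the data written here is dictated by the schema, which is why the schema file and the mappings are developed together.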
-Во время разработки субграфа есть две ключевые команды: +During Subgraph development there are two key commands: ``` -$ graph codegen # генерирует типы из файла схемы, указанного в манифесте -$ graph build # генерирует Web Assembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Определение манифеста субграфа -Манифест субграфа `subgraph.yaml` определяет источники данных для субграфа, триггеры, представляющие интерес, и функции, которые должны выполняться в ответ на эти триггеры. Ниже приведён пример манифеста субграфа для Arweave: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: - file: ./schema.graphql # ссылка на файл схемы + file: ./schema.graphql # link to the schema file dataSources: - kind: arweave name: arweave-blocks - network: arweave-mainnet # The Graph поддерживает только Arweave Mainnet + network: arweave-mainnet # The Graph only supports Arweave Mainnet source: - owner: 'ID-OF-AN-OWNER' # Открытый ключ кошелька Arweave - startBlock: 0 # установите это значение на 0, чтобы начать индексацию с генезиса чейна + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript - file: ./src/blocks.ts # ссылка на файл с мэппингами Assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: - Block - Transaction blockHandlers: - - handler: 
handleBlock # имя функции в файле мэппинга + - handler: handleBlock # the function name in the mapping file transactionHandlers: - - handler: handleTx # имя функции в файле мэппинга + - handler: handleTx # the function name in the mapping file ``` -- Субграфы Arweave вводят новый тип источника данных (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - Сеть должна соответствовать сети на размещенной Graph Node. В Subgraph Studio мейннет Arweave обозначается как `arweave-mainnet` - Источники данных Arweave содержат необязательное поле source.owner, которое является открытым ключом кошелька Arweave @@ -99,7 +99,7 @@ dataSources: ## Определение схемы -Определение схемы описывает структуру базы данных итогового субграфа и взаимосвязи между объектами. Это не зависит от исходного источника данных. Более подробную информацию об определении схемы субграфа можно найти [здесь](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## Мэппинги AssemblyScript @@ -152,7 +152,7 @@ class Transaction { ## Развертывание субграфа Arweave в Subgraph Studio -Как только Ваш субграф будет создан на панели управления Subgraph Studio, Вы можете развернуть его с помощью команды CLI `graph deploy`. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Запрос субграфа Arweave -Конечная точка GraphQL для субграфов Arweave определяется схемой и существующим интерфейсом API. Для получения дополнительной информации ознакомьтесь с [документацией по API GraphQL](/subgraphs/querying/graphql-api/). 
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Примеры субграфов -Ниже приведен пример субграфа для справки: +Here is an example Subgraph for reference: -- [Пример субграфа для Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) -## FAQ +## Часто задаваемые вопросы -### Может ли субграф индексировать Arweave и другие чейны? +### Can a Subgraph index Arweave and other chains? -Нет, субграф может поддерживать источники данных только из одного чейна/сети. +No, a Subgraph can only support data sources from one chain/network. ### Могу ли я проиндексировать сохраненные файлы в Arweave? В настоящее время The Graph индексирует Arweave только как блокчейн (его блоки и транзакции). -### Могу ли я идентифицировать связки Bundle в своем субграфе? +### Can I identify Bundlr bundles in my Subgraph? В настоящее время это не поддерживается. @@ -188,7 +188,7 @@ Source.owner может быть открытым ключом пользова ### Каков текущий формат шифрования? -Данные обычно передаются в мэппингах в виде байтов (Bytes), которые, если хранятся напрямую, возвращаются в субграф в формате `hex` (например, хэши блоков и транзакций). Вы можете захотеть преобразовать их в формат `base64` или `base64 URL`-безопасный в Ваших мэппингах, чтобы они соответствовали тому, что отображается в блок-обозревателях, таких как [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). Следующая вспомогательная функция `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` может быть использована и будет добавлена в `graph-ts`: diff --git a/website/src/pages/ru/subgraphs/cookbook/enums.mdx b/website/src/pages/ru/subgraphs/cookbook/enums.mdx index 204b35851fc3..65f7091b08ae 100644 --- a/website/src/pages/ru/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ title: Категоризация маркетплейсов NFT с исполь ### Пример использования Enums (перечислений) в Вашей схеме -Если вы создаете субграф для отслеживания истории владения токенами на рынке, каждый токен может переходить через разные стадии владения, такие как `OriginalOwner` (Первоначальный Владелец), `SecondOwner` (Второй Владелец) и `ThirdOwner` (Третий Владелец). Используя перечисления (enums), Вы можете определить эти конкретные стадии владения, обеспечивая присвоение только заранее определенных значений. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. Вы можете определить перечисления (enums) в своей схеме, и после их определения Вы можете использовать строковое представление значений перечислений для установки значения поля перечисления в объекты. @@ -65,14 +65,14 @@ type Token @entity { > Примечание: Следующее руководство использует смарт-контракт NFT CryptoCoven. 
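Returning briefly to the Arweave encoding note above: the `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper it references can be sketched in plain TypeScript as follows (the `graph-ts` version is AssemblyScript; this mirrors the signature for illustration):

```typescript
// Encode raw bytes as base64, optionally using the URL-safe alphabet.
// Standard base64 output is padded with '='; the URL-safe variant here
// omits padding, as is common for URL-safe encodings.
function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  const alphabet = urlSafe
    ? 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_'
    : 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
  let result = ''
  for (let i = 0; i < bytes.length; i += 3) {
    // Pack up to three bytes into four 6-bit groups.
    const b0 = bytes[i]
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0
    result += alphabet[b0 >> 2]
    result += alphabet[((b0 & 0x03) << 4) | (b1 >> 4)]
    if (i + 1 < bytes.length) result += alphabet[((b1 & 0x0f) << 2) | (b2 >> 6)]
    if (i + 2 < bytes.length) result += alphabet[b2 & 0x3f]
  }
  if (!urlSafe) {
    while (result.length % 4 !== 0) result += '='
  }
  return result
}
```

Applied to a block or transaction hash received as `Bytes`, this yields the same representation shown by explorers such as Arweave Explorer.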
-Чтобы определить перечисления для различных маркетплейсов, на которых торгуются NFT, используйте следующее в своей схеме субграфа: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Перечисление для маркетплейсов, с которыми взаимодействовал смарт-контракт CryptoCoven (вероятно, торговля или минт) enum Marketplace { OpenSeaV1 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV1 OpenSeaV2 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV2 - SeaPort # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе SeaPort + SeaPort # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе SeaPort LooksRare # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе LooksRare # ...и другие рынки } @@ -80,7 +80,7 @@ enum Marketplace { ## Использование перечислений (Enums) для Маркетплейсов NFT -После определения перечисления (enums) могут использоваться в Вашем субграфе для категоризации транзакций или событий. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. Например, при регистрации продаж NFT можно указать маркетплейс, на котором произошла сделка, используя перечисление. diff --git a/website/src/pages/ru/subgraphs/cookbook/grafting.mdx b/website/src/pages/ru/subgraphs/cookbook/grafting.mdx index 8605468ff4e7..e621211840ec 100644 --- a/website/src/pages/ru/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Замените контракт и сохраните его историю с помощью Grafting --- -Из этого руководства Вы узнаете, как создавать и развертывать новые субграфы путем графтинга (переноса) существующих субграфов. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## Что такое Grafting? 
-При графтинге (переносе) повторно используются данные из существующего субрафа и начинается их индексация в более позднем блоке. Это может быть полезно в период разработки, чтобы быстро устранить простые ошибки в мэппинге или временно восстановить работу существующего субграфа после его сбоя. Кроме того, его можно использовать при добавлении в субграф функции, индексация которой с нуля занимает много времени. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -Перенесённый субграф может использовать схему GraphQL, которая не идентична схеме базового субграфа, а просто совместима с ней. Это должна быть автономно действующая схема субграфа, но она может отличаться от схемы базового субграфа следующим образом: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Она добавляет или удаляет типы объектов - Она удаляет атрибуты из типов объектов @@ -22,38 +22,38 @@ title: Замените контракт и сохраните его истор - [Графтинг](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -В этом руководстве мы рассмотрим базовый случай использования. Мы заменим существующий контракт идентичным (с новым адресом, но с тем же кодом). Затем подключим существующий субграф к "базовому" субграфу, который отслеживает новый контракт. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. 
## Важное примечание о Grafting при обновлении до сети -> **Предупреждение**: Рекомендуется не использовать графтинг для субграфов, опубликованных в сети The Graph +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Почему это важно? -Grafting — это мощная функция, которая позволяет «переносить» один субграф в другой, фактически перенося исторические данные из существующего субграфа в новую версию. Однако перенос субграфа из The Graph Network обратно в Subgraph Studio невозможен. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Лучшие практики -**Первоначальная миграция**: при первом развертывании субграфа в децентрализованной сети рекомендуется не использовать графтинг. Убедитесь, что субграф стабилен и работает должным образом. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Последующие обновления**: когда Ваш субграф будет развернут и стабилен в децентрализованной сети, Вы можете использовать графтинг для будущих версий, чтобы облегчить переход и сохранить исторические данные. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. Соблюдая эти рекомендации, Вы минимизируете риски и обеспечите более плавный процесс миграции. ## Создание существующего субграфа -Создание субграфов — это важная часть работы с The Graph, более подробно описанная [здесь](/subgraphs/quick-start/). 
Для того чтобы иметь возможность создать и развернуть существующий субграф, используемый в этом руководстве, предоставлен следующий репозиторий: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Пример репозитория субграфа](https://github.com/Shiyasmohd/grafting-tutorial) -> Примечание: Контракт, используемый в субграфе, был взят из следующего [стартового набора Hackathon](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Определение манифеста субграфа -Манифест субграфа `subgraph.yaml` определяет источники данных для субграфа, триггеры, которые представляют интерес, и функции, которые должны быть выполнены в ответ на эти триггеры. Ниже приведен пример манифеста субграфа, который Вы будете использовать: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Определение Манифеста Grafting -Grafting требует добавления двух новых элементов в исходный манифест субграфа: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - - grafting # наименование функции + - grafting # feature name graft: - base: Qm... # идентификатор субграфа базового субграфа - block: 5956000 # номер блока + base: Qm... 
# Subgraph ID of base Subgraph + block: 5956000 # block number ``` - `features:` — это список всех используемых [имен функций](/developing/creating-a-subgraph/#experimental-features). -- `graft:` — это отображение базового субграфа и блока, к которому применяется графтинг (перенос). `block` — это номер блока, с которого нужно начать индексирование. The Graph скопирует данные из базового субграфа до указанного блока включительно, а затем продолжит индексирование нового субграфа с этого блока. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -Значения `base` и `block` можно найти, развернув два субграфа: один для базового индексирования и один с графтингом +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Развертывание базового субграфа -1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестнете Sepolia с названием `graft-example` -2. Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице своего субграфа в папке `graft-example` репозитория -3. После завершения убедитесь, что субграф правильно индексируется. Если Вы запустите следующую команду в The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ graft: } ``` -Убедившись, что субграф индексируется правильно, Вы можете быстро обновить его с помощью графтинга. 
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.

## Развертывание grafting субграфа

Замененный subgraph.yaml будет иметь новый адрес контракта. Это может произойти, когда Вы обновите свое децентрализованное приложение, повторно развернете контракт и т. д.

-1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестнете Sepolia с названием `graft-replacement`
-2. Создайте новый манифест. `subgraph.yaml` для `graph-replacement` содержит другой адрес контракта и новую информацию о том, как он должен быть присоединен. Это `block` [последнего сгенерированного события], которое Вас интересует, вызванного старым контрактом, и `base` старого субграфа. Идентификатор субграфа `base` — это `Deployment ID` Вашего исходного субграфа `graph-example`. Вы можете найти его в Subgraph Studio.
-3. Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице своего субграфа в папке `graft-replacement` репозитория
-4. После завершения убедитесь, что субграф правильно индексируется. Если Вы запустите следующую команду в The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground

```graphql
{
@@ -185,9 +185,9 @@ graft:
}
```

-Вы можете увидеть, что субграф `graft-replacement` индексирует данные как из старого субграфа `graph-example`, так и из новых данных из нового адреса контракта. Исходный контракт сгенерировал два события `Withdrawal`, [Событие 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) и [Событие 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Новый контракт сгенерировал одно событие `Withdrawal` после этого, [Событие 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Две ранее индексируемые транзакции (События 1 и 2) и новая транзакция (Событие 3) были объединены в субграфе `graft-replacement`.
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Events 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.

-Поздравляем! Вы успешно перенесли один субграф в другой.
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
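Beyond the query above, Graph Node also exposes a standard `_meta` field on every Subgraph's GraphQL endpoint. A quick way to confirm the grafted deployment has progressed past the graft block and has no indexing errors is a sketch like the following (field names follow the stock Graph Node meta schema):

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```

If `block.number` is greater than the graft `block` (`5956000` in this example) and `hasIndexingErrors` is `false`, the graft took effect and indexing has continued past it.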
## Дополнительные ресурсы diff --git a/website/src/pages/ru/subgraphs/cookbook/near.mdx b/website/src/pages/ru/subgraphs/cookbook/near.mdx index ac22a9f8c015..5e521d9dd04d 100644 --- a/website/src/pages/ru/subgraphs/cookbook/near.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Создание субграфов на NEAR --- -Это руководство является введением в создание субграфов для индексирования смарт-контрактов на [блокчейне NEAR](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## Что такое NEAR? [NEAR](https://near.org/) — это платформа для смарт-контрактов, предназначенная для создания децентрализованных приложений. Для получения дополнительной информации ознакомьтесь с [официальной документацией](https://docs.near.org/concepts/basics/protocol). -## Что такое NEAR субграфы? +## What are NEAR Subgraphs? -The Graph предоставляет разработчикам инструменты для обработки событий блокчейна и упрощает доступ к полученным данным через API GraphQL, известный также как субграф. [Graph Node](https://github.com/graphprotocol/graph-node) теперь способен обрабатывать события NEAR, что позволяет разработчикам NEAR создавать субграфы для индексирования своих смарт-контрактов. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Субграфы основаны на событиях, что означает, что они отслеживают и обрабатывают события в блокчейне. В настоящее время для субграфов NEAR поддерживаются два типа обработчиков: +Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR Subgraphs: - Обработчики блоков: они запускаются для каждого нового блока - Обработчики поступлений: запускаются каждый раз, когда сообщение выполняется в указанной учетной записи @@ -23,35 +23,35 @@ The Graph предоставляет разработчикам инструме ## Создание NEAR субграфа -`@graphprotocol/graph-cli` — это инструмент командной строки для создания и развертывания субграфов. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` — это библиотека типов, специфичных для субграфов. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -Для разработки субграфов на платформе NEAR требуется `graph-cli` версии выше `0.23.0` и `graph-ts` версии выше `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Построение NEAR субграфа очень похоже на построение субграфа, индексирующего Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -Существует три аспекта определения субграфа: +There are three aspects of Subgraph definition: -**subgraph.yaml:** манифест субграфа, определяющий источники данных и способы их обработки. NEAR является новым `kind` (типом) источника данных. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** файл схемы, который определяет, какие данные хранятся в Вашем субграфе и как к ним можно обращаться через GraphQL. Требования для субграфов NEAR описаны в [существующей документации](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). 
**Мэппинги на AssemblyScript:** [код на AssemblyScript](/subgraphs/developing/creating/graph-ts/api/), который преобразует данные событий в элементы, определенные в Вашей схеме. Поддержка NEAR вводит специфичные для NEAR типы данных и новую функциональность для парсинга JSON. -Во время разработки субграфа есть две ключевые команды: +During Subgraph development there are two key commands: ```bash -$ graph codegen # генерирует типы из файла схемы, указанного в манифесте -$ graph build # генерирует Web Assembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build +$ graph codegen # generates types from the schema file identified in the manifest +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Определение манифеста субграфа -Манифест субграфа (`subgraph.yaml`) определяет источники данных для субграфа, интересующие триггеры и функции, которые должны быть выполнены в ответ на эти триггеры. Пример манифеста субграфа для NEAR представлен ниже: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph:

```yaml
-specVersion: 0.0.2
+specVersion: 1.3.0
schema:
  file: ./src/schema.graphql # link to the schema file
dataSources:
@@ -61,7 +61,7 @@ dataSources:
      account: app.good-morning.near # This data source will monitor this account
      startBlock: 10662188 # Required for NEAR
    mapping:
-      apiVersion: 0.0.5
+      apiVersion: 0.0.9
      language: wasm/assemblyscript
      blockHandlers:
        - handler: handleNewBlock # the function name in the mapping file
@@ -70,7 +70,7 @@ dataSources:
      file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
```

-- Субграфы NEAR вводят новый тип источника данных (`near`)
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
- `network` должен соответствовать сети на хостинговой Graph Node. В Subgraph Studio майннет NEAR называется `near-mainnet`, а тестнет NEAR — `near-testnet`
- Источники данных NEAR содержат необязательное поле `source.account`, которое представляет собой удобочитаемый идентификатор, соответствующий [учетной записи NEAR](https://docs.near.org/concepts/protocol/account-model). Это может быть как основной аккаунт, так и суб-аккаунт.
- Источники данных NEAR вводят альтернативное необязательное поле `source.accounts`, которое содержит необязательные префиксы и суффиксы. Необходимо указать хотя бы один префикс или суффикс, они будут соответствовать любому аккаунту, начинающемуся или заканчивающемуся на значения из списка соответственно. Приведенный ниже пример будет совпадать с: `[app|good].*[morning.near|morning.testnet]`. Если необходим только список префиксов или суффиксов, другое поле можно опустить.
@@ -92,7 +92,7 @@ accounts:

### Определение схемы

-Определение схемы описывает структуру итоговой базы данных субграфа и отношения между объектами. Это не зависит от исходного источника данных. Более подробную информацию об определении схемы субграфа можно найти [здесь](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### Мэппинги AssemblyScript @@ -165,31 +165,31 @@ class ReceiptWithOutcome { - Обработчики блоков получат `Block` - Обработчики поступлений получат `ReceiptWithOutcome` -В остальном, весь [API для AssemblyScript](/subgraphs/developing/creating/graph-ts/api/) доступен разработчикам субграфов для NEAR во время выполнения мэппинга. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. Это включает в себя новую функцию для парсинга JSON — логи в NEAR часто выводятся как строковые JSON. Новая функция `json.fromString(...)` доступна в рамках [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api), что позволяет разработчикам легко обрабатывать эти логи. ## Развертывание NEAR субграфа -Как только Ваш субграф будет создан, наступает время развернуть его на Graph Node для индексирования. Субграфы NEAR можно развернуть на любом Graph Node версии `>=v0.26.x` (эта версия еще не отмечена и не выпущена). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio и Индексатор обновлений в The Graph Network в настоящее время поддерживают индексирование основной и тестовой сети NEAR в бета-версии со следующими именами сетей: - `near-mainnet` - `near-testnet` -Дополнительную информацию о создании и развертывании субграфов в Subgraph Studio можно найти [здесь](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
-В качестве краткого примера — первый шаг заключается в "создании" Вашего субграфа — это нужно сделать только один раз. В Subgraph Studio это можно сделать на Вашей [панели управления](https://thegraph.com/studio/), выбрав опцию "Создать субграф". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -После того как субграф создан, его можно развернуть с помощью команды `graph deploy` в CLI: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # создает субграф на локальной Graph Node (в Subgraph Studio это делается через UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # загружает файлы сборки на указанную конечную точку IPFS, а затем разворачивает субграф на указанной Graph Node на основе хеша манифеста IPFS +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -Конфигурация ноды будет зависеть от того, где развертывается субграф. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Как только Ваш субграф будет развернут, он будет проиндексирован Graph Node. Вы можете проверить его прогресс, сделав запрос к самому субграфу: +Once your Subgraph has been deployed, it will be indexed by Graph Node. 
You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,27 +228,27 @@ graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Обзор + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. + +## Предварительные требования + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Начнем + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Специфические особенности + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. 
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. 
+- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Дополнительные ресурсы + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/ru/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ru/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..32aa593f6384 --- /dev/null +++ b/website/src/pages/ru/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. 
+
+## Введение
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+### Source Subgraph
+
+The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`.
+
+> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml).
+ +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Начнем + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. 
Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. `data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. 
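To make the dispatch on operation types explicit, here is a minimal sketch of a handler skeleton covering all three cases. The `Swap` entity and handler name are hypothetical, and `EntityTrigger`/`EntityOp` are the same composition-feature types used in the `handleInitialize` example above:

```typescript
export function handleSwapEntity(trigger: EntityTrigger): void {
  // Branch on the operation type carried by the trigger
  if (trigger.operation === EntityOp.Create) {
    // A new Swap entity was created in the source Subgraph: index it
  } else if (trigger.operation === EntityOp.Modify) {
    // An existing Swap entity changed: update any derived entities
  } else if (trigger.operation === EntityOp.Remove) {
    // The Swap entity was removed: clean up anything derived from it
  }
  // trigger.type and trigger.data are available in every branch
}
```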
+ +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Дополнительные ресурсы + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/ru/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ru/subgraphs/cookbook/subgraph-debug-forking.mdx index 8f2e67289d77..14fd0d145d5c 100644 --- a/website/src/pages/ru/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Быстрая и простая отладка субграфа с использованием форков --- -Как и многие системы, обрабатывающие большие объемы данных, Индексаторы The Graph (Graph Nodes) могут занять достаточно много времени для синхронизации Вашего субграфа с целевым блокчейном. Несоответствие между быстрыми изменениями, направленными на отладку, и долгим временем ожидания, необходимым для индексирования, является крайне непродуктивным, и мы прекрасно осознаем эту проблему. Поэтому мы представляем **форкинг субграфа**, разработанный [LimeChain](https://limechain.tech/), и в этой статье я покажу, как эту функцию можно использовать для значительного ускорения отладки субграфов! +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. 
This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## И так, что это? -**Форкинг субграфа** — это процесс ленивой загрузки объектов из _другого_ хранилища субграфа (обычно удалённого). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -В контексте отладки **форкинг субграфа** позволяет Вам отлаживать Ваш неудавшийся субграф на блоке _X_, не дожидаясь синхронизации с блоком _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## Что? Как? -Когда Вы развертываете субграф на удалённой Graph Node для индексирования, и он терпит неудачу на блоке _X_, хорошая новость заключается в том, что Graph Node всё равно будет обслуживать запросы GraphQL, используя своё хранилище, которое синхронизировано с блоком _X_. Это здорово! Таким образом, мы можем воспользоваться этим "актуальным" хранилищем, чтобы исправить ошибки, возникающие при индексировании блока _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. -Короче говоря, мы собираемся _форкать неработающий субграф_ с удалённой Graph Node, которая гарантированно имеет индексированный субграф до блока _X_, чтобы предоставить локально развернутому субграфу, который отлаживается на блоке _X_, актуальное состояние индексирования. 
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Пожалуйста, покажите мне какой-нибудь код! -Чтобы сосредоточиться на отладке субграфа, давайте упростим задачу и продолжим с [примером субграфа](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar), который индексирует смарт-контракт Ethereum Gravity. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Вот обработчики, определённые для индексирования `Gravatar`, без каких-либо ошибок: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Ой, как неприятно! Когда я развертываю свой идеально выглядящий субграф в [Subgraph Studio](https://thegraph.com/studio/), он выдаёт ошибку _"Gravatar not found!"_. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. Обычный способ попытаться исправить это: 1. Внести изменения в источник мэппингов, которые, по Вашему мнению, решат проблему (в то время как я знаю, что это не так). -2. Перезапустить развертывание своего субграфа в [Subgraph Studio](https://thegraph.com/studio/) (или на другую удалённую Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Подождать, пока он синхронизируется. 4. Если он снова сломается, вернуться к пункту 1, в противном случае: Ура! Действительно, это похоже на обычный процесс отладки, но есть один шаг, который ужасно замедляет процесс: _3. 
Ждите, пока завершится синхронизация._

-Используя **форк субграфа**, мы можем фактически устранить этот шаг. Вот как это выглядит:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:

0. Запустите локальную Graph Node с помощью **_соответствующего набора fork-base_**.
1. Внесите изменения в источник мэппингов, которые, по Вашему мнению, решат проблему.
-2. Произведите развертывание на локальной Graph Node, **_форкнув неудачно развернутый субграф_** и **_начав с проблемного блока_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. Если он снова сломается, вернитесь к пункту 1, в противном случае: Ура!

Сейчас у Вас может появиться 2 вопроса:

@@ -69,18 +69,18 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {

И я вам отвечаю:

-1. `fork-base` - это «базовый» URL, при добавлении которого к _subgraph id_ результирующий URL (`/`) является действительной конечной точкой GraphQL для хранилища субграфа.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
2. Форкнуть легко, не нужно напрягаться:

```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
```

-Также не забудьте установить поле `dataSources.source.startBlock` в манифесте субграфа на номер проблемного блока, чтобы пропустить индексирование ненужных блоков и воспользоваться преимуществами форка!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!

Итак, вот что я делаю:

-1.
Я запускаю локальную Graph Node ([вот как это сделать](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) с опцией `fork-base`, установленной в: `https://api.thegraph.com/subgraphs/id/`, поскольку я буду форкать субграф, тот самый, который я ранее развертывал, с [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. После тщательной проверки я замечаю, что существует несоответствие в представлениях `id`, используемых при индексировании `Gravatar` в двух моих обработчиках. В то время как `handleNewGravatar` конвертирует его в hex (`event.params.id.toHex()`), `handleUpdatedGravatar` использует int32 (`event.params.id.toI32()`), что приводит к тому, что `handleUpdatedGravatar` завершается ошибкой и появляется сообщение "Gravatar not found!". Я заставляю оба обработчика конвертировать `id` в hex. -3. После внесения изменений я развертываю свой субграф на локальной Graph Node, **выполняя форк неудавшегося субграфа** и устанавливаю значение `dataSources.source.startBlock` равным `6190343` в файле `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. Я проверяю логи, созданные локальной Graph Node, и, ура!, кажется, все работает. -5. 
Я развертываю свой теперь свободный от ошибок субграф на удаленной Graph Node и живу долго и счастливо! (но без картошки) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/ru/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ru/subgraphs/cookbook/subgraph-uncrashable.mdx index f81fe52608e8..67040c394cbd 100644 --- a/website/src/pages/ru/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Генератор кода безопасного субграфа --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) — это инструмент для генерации кода, который создает набор вспомогательных функций из схемы GraphQL проекта. Он гарантирует, что все взаимодействия с объектами в Вашем субграфе будут полностью безопасными и последовательными. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Зачем интегрироваться с Subgraph Uncrashable? -- **Непрерывная работа**. Ошибки в обработке объектов могут привести к сбоям субграфов, что нарушит работу проектов, зависящих от The Graph. Настройте вспомогательные функции, чтобы Ваши субграфы оставались «непотопляемыми» и обеспечивали стабильную работу. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Полная безопасность**. Обычные проблемы при разработке субграфов — это ошибки загрузки неопределенных элементов, неинициализированные или неустановленные значения элементов, а также гонки при загрузке и сохранении элементов. 
Убедитесь, что все взаимодействия с объектами полностью атомарны.
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.

-- **Настройка пользователем**. Установите значения по умолчанию и настройте уровень проверок безопасности, который соответствует потребностям Вашего индивидуального проекта. Записываются предупреждающие логи, указывающие на то, где произходит нарушение логики субграфа, что помогает исправить проблему и обеспечить точность данных.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy.

**Ключевые особенности**

-- Инструмент генерации кода поддерживает **все** типы субграфов и настраивается таким образом, чтобы пользователи могли задать разумные значения по умолчанию. Генерация кода будет использовать эту настройку для создания вспомогательных функций в соответствии с требованиями пользователей.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that match the user's specification.

- Фреймворк также включает в себя способ создания пользовательских, но безопасных функций установки для групп переменных объектов (через config-файл). Таким образом, пользователь не сможет загрузить/использовать устаревшую graph entity, и также не сможет забыть о сохранении или установке переменной, которая требуется функцией.

-- Предупреждающие логи записываются как логи, указывающие на нарушение логики субграфа, чтобы помочь устранить проблему и обеспечить точность данных. 
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable можно запустить как необязательный флаг с помощью команды Graph CLI codegen. @@ -26,4 +26,4 @@ Subgraph Uncrashable можно запустить как необязатель graph codegen -u [options] [] ``` -Ознакомьтесь с [документацией по subgraph uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/docs/) или посмотрите этот [видеоруководство](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial), чтобы узнать больше и начать разрабатывать более безопасные субграфы. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/ru/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ru/subgraphs/cookbook/transfer-to-the-graph.mdx index 570aab81debc..939561b026f3 100644 --- a/website/src/pages/ru/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/ru/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -2,13 +2,13 @@ title: Перенос в The Graph --- -Быстро обновите свои субграфы с любой платформы до [децентрализованной сети The Graph](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Преимущества перехода на The Graph -- Используйте тот же субграф, который уже используют ваши приложения, с миграцией без времени простоя. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Повышайте надежность благодаря глобальной сети, поддерживаемой более чем 100 индексаторами. -- Получайте молниеносную поддержку для субграфов 24/7 от дежурной команды инженеров. 
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. ## Обновите свой субграф до The Graph за 3 простых шага @@ -23,7 +23,7 @@ title: Перенос в The Graph - Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и подключите свой кошелек. - Нажмите "Создать субграф". Рекомендуется называть субграф с использованием Заглавного регистра: "Subgraph Name Chain Name". -> Примечание: после публикации имя субграфа будет доступно для редактирования, но для этого каждый раз потребуется действие на он-чейне, поэтому выберите подходящее имя сразу. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Установите Graph CLI @@ -37,7 +37,7 @@ title: Перенос в The Graph npm install -g @graphprotocol/graph-cli@latest ``` -Используйте следующую команду для создания субграфа в Studio с помощью CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Разверните свой субграф в Studio -Если у Вас есть исходный код, Вы можете с легкостью развернуть его в Studio. Если его нет, вот быстрый способ развернуть Ваш субграф. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. В Graph CLI выполните следующую команду: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Примечание:** Каждый субграф имеет хэш IPFS (идентификатор развертывания), который выглядит так: "Qmasdfad...". Для развертывания просто используйте этот **IPFS хэш**. Вам будет предложено ввести версию (например, v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Опубликуйте свой субграф в The Graph Network

@@ -70,17 +70,17 @@ graph deploy --ipfs-hash

### Запросите Ваш Субграф

-> Для того чтобы привлечь около 3 индексаторов для запроса Вашего субграфа, рекомендуется зафиксировать как минимум 3000 GRT. Чтобы узнать больше о кураторстве, ознакомьтесь с разделом [Кураторство](/resources/roles/curating/) на платформе The Graph.
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.

-Вы можете начать [запрашивать](/subgraphs/querying/introduction/) любой субграф, отправив запрос GraphQL на конечную точку URL-адреса его запроса, которая расположена в верхней части страницы его эксплорера в Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.

#### Пример

-[Субграф CryptoPunks на Ethereum](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) от Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

![URL запроса](/img/cryptopunks-screenshot-transfer.png)

-URL запроса для этого субграфа:
+The query URL for this Subgraph is:

```sh
https://gateway-arbitrum.network.thegraph.com/api/`**Ваш-api-ключ**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
```

@@ -96,9 +96,9 @@ https://gateway-arbitrum.network.thegraph.com/api/`**Ваш-api-ключ**`/subg

### Мониторинг статуса субграфа

-После обновления Вы сможете получить доступ к своим субграфам и управлять ими в [Subgraph Studio](https://thegraph.com/studio/) и исследовать все субграфы в [The Graph Explorer](https://thegraph.com/networks/). 
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).

### Дополнительные ресурсы

-- Чтобы быстро создать и опубликовать новый субграф, ознакомьтесь с [Руководством по быстрому старту](/subgraphs/quick-start/).
-- Чтобы исследовать все способы оптимизации и настройки своего субграфа для улучшения производительности, читайте больше о [создании субграфа здесь](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ru/subgraphs/developing/_meta-titles.json b/website/src/pages/ru/subgraphs/developing/_meta-titles.json
index 01a91b09ed77..7c82e83ac8dd 100644
--- a/website/src/pages/ru/subgraphs/developing/_meta-titles.json
+++ b/website/src/pages/ru/subgraphs/developing/_meta-titles.json
@@ -1,6 +1,6 @@
 {
-  "creating": "Creating",
-  "deploying": "Deploying",
-  "publishing": "Publishing",
-  "managing": "Managing"
+  "creating": "Создание",
+  "deploying": "Развертывание",
+  "publishing": "Публикация",
+  "managing": "Управление"
 }
diff --git a/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx
index a264671c393e..662c71ed059f 100644
--- a/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/ru/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## Обзор

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build. 
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Тайм-серии и агрегации @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Пример схемы @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Нефатальные ошибки -Ошибки индексирования в уже синхронизированных субграфах по умолчанию приведут к сбою субграфа и прекращению синхронизации. В качестве альтернативы субграфы можно настроить на продолжение синхронизации при наличии ошибок, игнорируя изменения, внесенные обработчиком, который спровоцировал ошибку. Это дает авторам субграфов время на исправление своих субграфов, в то время как запросы к последнему блоку продолжают обрабатываться, хотя результаты могут быть противоречивыми из-за бага, вызвавшего ошибку. Обратите внимание на то, что некоторые ошибки всё равно всегда будут фатальны. Чтобы быть нефатальной, ошибка должна быть детерминированной. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Для включения нефатальных ошибок необходимо установить в манифесте субграфа следующий флаг функции: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## Источники файловых данных IPFS/Arweave -Источники файловых данных — это новая функциональность субграфа для надежного и расширенного доступа к данным вне чейна во время индексации. Источники данных файлов поддерживают получение файлов из IPFS и Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Это также закладывает основу для детерминированного индексирования данных вне сети, а также потенциального введения произвольных данных из HTTP-источников. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Этот пример кода предназначен для сборщика субграфа Crypto. Приведенный выше хеш ipfs представляет собой каталог с метаданными токена для всех NFT криптоковена. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Это создает путь к метаданным для одного сборщика NFT Crypto. Он объединяет каталог с "/" + filename + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ export function handleTransfer(event: TransferEvent): void { This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
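To make this concrete, here is a rough sketch of what the `handleMetadata` mapping handler named in the template above can look like. This is illustrative rather than the canonical implementation: it assumes a `TokenMetadata` entity with `name` and `image` fields in the schema. The file contents arrive as `Bytes`, and the CID the template was spawned with is available via `dataSource.stringParam()`:

```typescript
import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata } from '../generated/schema'

// Sketch of a file data source handler: invoked once per file,
// with the raw file contents passed in as `content`.
export function handleMetadata(content: Bytes): void {
  // Use the CID the template was created with as the entity ID,
  // matching the CID-based lookup described above.
  let tokenMetadata = new TokenMetadata(dataSource.stringParam())
  const value = json.fromBytes(content).toObject()
  if (value) {
    const name = value.get('name')
    const image = value.get('image')
    if (name && image) {
      tokenMetadata.name = name.toString()
      tokenMetadata.image = image.toString()
      tokenMetadata.save()
    }
  }
}
```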
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Поздравляем, Вы используете файловые источники данных! -#### Развертывание субграфов +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Ограничения -Обработчики и объекты файловых источников данных изолированы от других объектов субграфа, что гарантирует их детерминированность при выполнении и исключает загрязнение источников данных на чейн-основе. В частности: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Объекты, созданные с помощью файловых источников данных, неизменяемы и не могут быть обновлены - Обработчики файловых источников данных не могут получить доступ к объектам из других файловых источников данных - Объекты, связанные с источниками данных файлов, не могут быть доступны обработчикам на чейн-основе -> Хотя это ограничение не должно вызывать проблем в большинстве случаев, для некоторых оно может вызвать сложности. Если у Вас возникли проблемы с моделированием Ваших файловых данных в субграфе, свяжитесь с нами через Discord! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Кроме того, невозможно создать источники данных из файлового источника данных, будь то источник данных onchain или другой файловый источник данных. Это ограничение может быть снято в будущем. 
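Since entities created by file data sources are immutable, the corresponding entity type is typically declared immutable in the schema. A sketch, with illustrative field names matching the metadata example above:

```graphql
# Entity populated only by the file data source handler;
# immutable, per the isolation rules listed above.
type TokenMetadata @entity(immutable: true) {
  id: ID!
  name: String!
  image: String!
}
```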
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Фильтры по темам, также известные как фильтры по индексированным аргументам, — это мощная функция в субграфах, которая позволяет пользователям точно фильтровать события блокчейна на основе значений их индексированных аргументов. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- Эти фильтры помогают изолировать конкретные интересующие события из огромного потока событий в блокчейне, позволяя субграфам работать более эффективно, сосредотачиваясь только на релевантных данных. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- Это полезно для создания персональных субграфов, отслеживающих конкретные адреса и их взаимодействие с различными смарт-контрактами в блокчейне. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### Как работают фильтры тем -Когда смарт-контракт генерирует событие, любые аргументы, помеченные как индексированные, могут использоваться в манифесте субграфа в качестве фильтров. Это позволяет субграфу выборочно прослушивать события, соответствующие этим индексированным аргументам. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. 
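As an illustrative Solidity sketch of how indexed arguments map to topics (this `Transfer` event is an assumption for demonstration, not taken from a specific manifest):

```solidity
contract Token {
    // `from` and `to` are declared `indexed`, so they are emitted as
    // `topic1` and `topic2` and can be matched by topic filters;
    // `value` is not indexed and lives in the event's data section.
    event Transfer(address indexed from, address indexed to, uint256 value);
}
```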
@@ -401,7 +401,7 @@ contract Token { #### Конфигурация в субграфах -Фильтры тем определяются непосредственно в конфигурации обработчика событий в манифесте субграфа. Вот как они настроены: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ eventHandlers: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Пример 2. Отслеживание транзакций в любом направлении между двумя и более адресами @@ -452,17 +452,17 @@ eventHandlers: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- Субграф будет индексировать транзакции, происходящие в любом направлении между несколькими адресами, что позволит осуществлять комплексный мониторинг взаимодействий с участием всех адресов. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Декларированный eth_call > Примечание: Это экспериментальная функция, которая пока недоступна в стабильной версии Graph Node. Вы можете использовать её только в Subgraph Studio или на своей локальной ноде. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. 
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. Эта функция выполняет следующие действия: -- Значительно повышает производительность получения данных из блокчейна Ethereum за счет сокращения общего времени выполнения нескольких вызовов и оптимизации общей эффективности субграфа. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Обеспечивает ускоренное получение данных, что приводит к более быстрому реагированию на запросы и улучшению пользовательского опыта. - Сокращает время ожидания для приложений, которым необходимо агрегировать данные из нескольких вызовов Ethereum, что делает процесс получения данных более эффективным. @@ -474,7 +474,7 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` #### Scenario without Declarative `eth_calls` -Представьте, что у вас есть субграф, которому необходимо выполнить три вызова в Ethereum, чтобы получить данные о транзакциях пользователя, балансе и владении токенами. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Традиционно эти вызовы могут выполняться последовательно: @@ -498,15 +498,15 @@ Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` #### Как это работает -1. Декларативное определение: В манифесте субграфа Вы декларируете вызовы Ethereum таким образом, чтобы указать, что они могут выполняться параллельно. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Механизм параллельного выполнения: Механизм выполнения The Graph Node распознает эти объявления и выполняет вызовы одновременно. -3. 
Агрегация результатов: После завершения всех вызовов результаты агрегируются и используются субграфом для дальнейшей обработки.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.

#### Пример конфигурации в манифесте субграфа

Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.

-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:

```yaml
eventHandlers:
@@ -524,7 +524,7 @@ calls:

- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.

-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`:

```yaml
calls:
@@ -535,20 +535,20 @@ calls:

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. 
This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Поскольку графтинг копирует, а не индексирует базовые данные, гораздо быстрее перенести субграф в нужный блок, чем индексировать с нуля, хотя для очень больших субграфов копирование исходных данных может занять несколько часов. 
Пока графтовый субграф инициализируется, узел The Graph будет регистрировать информацию о типах объектов, которые уже были скопированы. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. Перенесённый субграф может использовать схему GraphQL, которая не идентична схеме базового субграфа, а просто совместима с ней. Это должна быть автономно действующая схема субграфа, но она может отличаться от схемы базового субграфа следующим образом: @@ -560,4 +560,4 @@ When a subgraph whose manifest contains a `graft` block is deployed, Graph Node - Она добавляет или удаляет интерфейсы - Она изменяется в зависимости от того, под какой тип объектов реализован интерфейс -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx index e4c398204f2e..6a74db44bfb3 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too ## Генерация кода -Для упрощения и обеспечения безопасности типов при работе со смарт-контрактами, событиями и объектами Graph CLI может генерировать типы AssemblyScript на основе схемы GraphQL субграфа и ABI контрактов, включенных в источники данных. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. Это делается с помощью @@ -80,7 +80,7 @@ There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-too graph codegen [--output-dir <OUTPUT_DIR>] [<MANIFEST>] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. 
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md index b6771a8305e5..d36adad723ef 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/README.md @@ -6,7 +6,7 @@ TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to [The Graph](https://github.com/graphprotocol/graph-node). -## Usage +## Применение For a detailed guide on how to create a subgraph, please see the [Graph CLI docs](https://github.com/graphprotocol/graph-cli). 
diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json index e850186d44c0..29a6950b50c7 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -1,5 +1,5 @@ { - "README": "Introduction", + "README": "Введение", "api": "Референс API", "common-issues": "Common Issues" } diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx index 88bfcafe7af0..40ba29383852 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Узнайте, какие встроенные API можно использовать при написании мэппингов субграфов. По умолчанию доступны два типа API: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - [Библиотека The Graph TypeScript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Код, сгенерированный из файлов субграфов с помощью `graph codegen` +- Code generated from Subgraph files by `graph codegen` Вы также можете добавлять другие библиотеки в качестве зависимостей, если они совместимы с [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ title: AssemblyScript API ### Версии -`apiVersion` в манифесте субграфа указывает версию мэппинга API, которая запускается посредством Graph Node для данного субграфа. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Версия | Примечания к релизу | -| :-: | --- | -| 0.0.9 | Добавлены новые функции хоста [`eth_get_balance`](#balance-of-an-address) и [`hasCode`](#check-if-an-address-a-contract-or-eoa) | -| 0.0.8 | Добавлена проверка наличия полей в схеме при сохранении объекта. | -| 0.0.7 | К типам Ethereum добавлены классы `TransactionReceipt` и `Log`
К объекту Ethereum Event добавлено поле `receipt` | -| 0.0.6 | В объект Ethereum Transaction добавлено поле `nonce`
В объект Ethereum Block добавлено поле `baseFeePerGas` | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | В объект Ethereum SmartContractCall добавлено поле `functionSignature` | -| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | В объект Ethereum Transaction добавлено поле `input` | +| Версия | Примечания к релизу | +| :----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Добавлены новые функции хоста [`eth_get_balance`](#balance-of-an-address) и [`hasCode`](#check-if-an-address-a-contract-or-eoa) | +| 0.0.8 | Добавлена проверка наличия полей в схеме при сохранении объекта. | +| 0.0.7 | К типам Ethereum добавлены классы `TransactionReceipt` и `Log`
К объекту Ethereum Event добавлено поле `receipt` | +| 0.0.6 | В объект Ethereum Transaction добавлено поле `nonce`
В объект Ethereum Block добавлено поле `baseFeePerGas` | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | В объект Ethereum SmartContractCall добавлено поле `functionSignature` | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | В объект Ethereum Transaction добавлено поле `input` | ### Встроенные типы @@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts' API `store` позволяет загружать, сохранять и удалять объекты из хранилища the Graph Node и в него. -Объекты, записанные в хранилище карты, сопоставляются один к одному с типами `@entity`, определенными в схеме субграфов GraphQL. Чтобы сделать работу с этими объектами удобной, команда `graph codegen`, предоставляемая [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) генерирует классы объектов, которые являются подклассами встроенного типа `Entity`, с геттерами и сеттерами свойств для полей в схеме, а также методами загрузки и сохранения этих объектов. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Создание объектов @@ -233,7 +233,7 @@ API `store` позволяет загружать, сохранять и уда // Импорт класса событий Transfer, сгенерированного из ERC20 ABI import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' -// Импорт типа объекта Transfer, сгенерированного из схемы GraphQL +// Импорт типа объекта Transfer, сгенерированного из схемы GraphQL import { Transfer } from '../generated/schema' событие // Обработчик события передачи @@ -269,6 +269,7 @@ if (transfer == null) { transfer = new Transfer(id) } + // Используйте объект Transfer, как и раньше ``` @@ -282,8 +283,8 @@ if (transfer == null) { The store API facilitates the retrieval of entities that were created or updated in the current block. 
A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- В случае, если транзакция не существует, субграф должен будет обратиться к базе данных просто для того, чтобы узнать, что объект не существует. Если автор субграфа уже знает, что объект должен быть создан в том же блоке, использование `loadInBlock` позволяет избежать этого обращения к базе данных. -- Для некоторых субграфов эти пропущенные поиски могут существенно увеличить время индексации. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // или некоторым образом создается идентификатор @@ -292,6 +293,7 @@ if (transfer == null) { transfer = new Transfer(id) } + // Используйте объект Transfer, как и раньше ``` @@ -380,11 +382,11 @@ Ethereum API предоставляет доступ к смарт-контра #### Поддержка типов Ethereum -Как и в случае с объектами, `graph codegen` генерирует классы для всех смарт-контрактов и событий, используемых в субграфе. Для этого ABI контракта должны быть частью источника данных в манифесте субграфа. Как правило, файлы ABI хранятся в папке `abis/`. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -С помощью сгенерированных классов преобразования между типами Ethereum и [встроенными типами] (#built-in-types) происходят за кулисами, так что авторам субграфов не нужно беспокоиться о них. 
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Следующий пример иллюстрирует это. С учётом схемы субграфа, такой как +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -483,7 +485,7 @@ class Log { #### Доступ к состоянию смарт-контракта -Код, сгенерированный с помощью `graph codegen`, также включает классы для смарт-контрактов, используемых в субграфе. Они могут быть использованы для доступа к общедоступным переменным состояния и вызова функций контракта в текущем блоке. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. Распространенным шаблоном является доступ к контракту, из которого исходит событие. Это достигается с помощью следующего кода: @@ -506,7 +508,7 @@ export function handleTransfer(event: TransferEvent) { Пока `ERC20Contract` в Ethereum имеет общедоступную функцию только для чтения, называемую `symbol`, ее можно вызвать с помощью `.symbol()`. Для общедоступных переменных состояния автоматически создается метод с таким же именем. -Любой другой контракт, который является частью субграфа, может быть импортирован из сгенерированного кода и привязан к действительному адресу. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Обработка возвращенных вызовов @@ -582,7 +584,7 @@ let isContract = ethereum.hasCode(eoa).inner // возвращает ложно import { log } from '@graphprotocol/graph-ts' ``` -API `log` позволяет субграфам записывать информацию в стандартный вывод Graph Node, а также в Graph Explorer. Сообщения могут быть зарегистрированы с использованием различных уровней ведения лога. 
Для составления сообщений лога из аргумента предусмотрен синтаксис строки базового формата. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. API `log` включает в себя следующие функции: @@ -590,7 +592,7 @@ API `log` включает в себя следующие функции: - `log.info(fmt: string, args: Array<string>): void` - регистрирует информационное сообщение. - `log.warning(fmt: string, args: Array<string>): void` - регистрирует предупреждение. - `log.error(fmt: string, args: Array<string>): void` - регистрирует сообщение об ошибке. -- `log.critical(fmt: string, args: Array<string>): void` – регистрирует критическое сообщение и завершает работу субграфа. +- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph. API `log` принимает строку формата и массив строковых значений. Затем он заменяет заполнители строковыми значениями из массива. Первый `{}` заполнитель заменяется первым значением в массиве, второй `{}` заполнитель заменяется вторым значением и так далее. 
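The placeholder substitution just described can be sketched in plain TypeScript. This is an illustration of the semantics only — the real substitution is performed by Graph Node, and `formatLog` is a hypothetical helper, not part of `graph-ts`:

```typescript
// Sketch of the `{}` placeholder substitution performed by the log API:
// the first `{}` takes the first array element, the second `{}` the next, etc.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  // Placeholders beyond the end of the args array are left as-is
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

console.log(formatLog('Transfer from {} to {}', ['0xabc', '0xdef']))
// Transfer from 0xabc to 0xdef
```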
@@ -695,8 +697,8 @@ let data = ipfs.cat(path) import { JSONValue, Value } from '@graphprotocol/graph-ts' export function processItem(value: JSONValue, userData: Value): void { - // Смотрите документацию по JsonValue для получения подробной информации о работе - // со значениями JSON +// Смотрите документацию по JsonValue для получения подробной информации о работе +// со значениями JSON let obj = value.toObject() let id = obj.get('id') let title = obj.get('title') @@ -705,7 +707,7 @@ export function processItem(value: JSONValue, userData: Value): void { return } - // Обратные вызовы также могут создавать объекты +// Обратные вызовы также могут создавать объекты let newItem = new Item(id) newItem.title = title.toString() newitem.parent = userData.toString() // Установите для родителя значение "parentId" @@ -721,7 +723,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) Единственным поддерживаемым в настоящее время флагом является `json`, который должен быть передан в `ipfs.map`. С флагом `json` файл IPFS должен состоять из серии значений JSON, по одному значению в строке. Вызов `ipfs.map` прочитает каждую строку в файле, десериализует ее в `JSONValue` и совершит обратный вызов для каждой из них. Затем обратный вызов может использовать операции с объектами для хранения данных из `JSONValue`. Изменения объекта сохраняются только после успешного завершения обработчика, вызвавшего `ipfs.map`; в то же время они хранятся в памяти, и поэтому размер файла, который может обработать `ipfs.map`, ограничен. -При успешном завершении `ipfs.map` возвращает `void`. Если какое-либо совершение обратного вызова приводит к ошибке, обработчик, вызвавший `ipfs.map`, прерывается, а субграф помечается как давший сбой. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. 
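The line-per-value behavior of the `json` flag can be sketched in plain TypeScript. This is an illustration of the semantics only — `mapJsonLines` is a hypothetical stand-in; the actual file reading and callback dispatch are done by Graph Node:

```typescript
// Illustrative sketch: with the `json` flag, the file is treated as one JSON
// value per line; each line is deserialized and passed to the callback in order.
function mapJsonLines(file: string, callback: (value: unknown) => void): void {
  for (const line of file.split('\n')) {
    if (line.trim().length === 0) continue // skip blank lines
    // A parse or callback error here corresponds to the invoking handler
    // being aborted and the Subgraph being marked as failed
    callback(JSON.parse(line))
  }
}

const ids: string[] = []
mapJsonLines('{"id":"1"}\n{"id":"2"}', (v) => ids.push((v as { id: string }).id))
console.log(ids) // [ '1', '2' ]
```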
### Crypto API @@ -770,44 +772,44 @@ if (value.kind == JSONValueKind.BOOL) { ### Справка по преобразованию типов -| Источник(и) | Место назначения | Функция преобразования | -| -------------------- | -------------------- | ----------------------------- | -| Address | Bytes | отсутствует | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() или s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | отсутствует | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() или s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | отсутствует | -| int32 | i32 | отсутствует | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | отсутствует | -| int64 - int256 | BigInt | отсутствует | -| uint32 - uint256 | BigInt | отсутствует | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toU64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Источник(и) | Место назначения | Функция преобразования | +| ---------------------- | ------------------------- | ----------------------------------- | +| Address | Bytes | отсутствует | +| Address | String | s.toHexString() | +| BigDecimal | String | 
s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() или s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | отсутствует | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() или s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | отсутствует | +| int32 | i32 | отсутствует | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | отсутствует | +| int64 - int256 | BigInt | отсутствует | +| uint32 - uint256 | BigInt | отсутствует | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toU64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Метаданные источника данных @@ -836,7 +838,7 @@ if (value.kind == JSONValueKind.BOOL) { ### DataSourceContext в манифесте -Раздел `context` в `dataSources` позволяет Вам определять пары ключ-значение, которые доступны в Ваших мэппингах субграфа. Доступные типы: `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` и `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
Ниже приведен пример YAML, иллюстрирующий использование различных типов в разделе `context`: @@ -887,4 +889,4 @@ dataSources: - `List`: Определяет список элементов. Для каждого элемента необходимо указать его тип и данные. - `BigInt`: Определяет большое целочисленное значение. Необходимо заключить в кавычки из-за большого размера. -Затем этот контекст становится доступным в Ваших мэппинговых файлах субграфов, что позволяет сделать субграфы более динамичными и настраиваемыми. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx index 74f717af91a4..0903710db4bf 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Распространенные проблемы с AssemblyScript --- -Существуют определенные проблемы c [AssemblyScript] (https://github.com/AssemblyScript/assemblyscript), с которыми часто приходится сталкиваться при разработке субграфа. Они различаются по сложности отладки, однако знание о них может помочь. Ниже приведен неполный перечень этих проблем: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Область видимости не наследуется [функциями замыкания](https://www.assemblyscript.org/status.html#on-closures), т.е. 
переменные, объявленные вне функций замыкания, не могут быть использованы. Пояснения см. в [Рекомендациях для разработчиков #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx index b48104c2ff0d..0208397aeb4d 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Установка Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Обзор -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. 
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Начало работы @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Создайте субграф ### Из существующего контракта -Следующая команда создает субграф, индексирующий все события существующего контракта: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - Если какой-либо из необязательных аргументов отсутствует, Вам будет предложено воспользоваться интерактивной формой. -- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. 
### Из примера подграфа -Следующая команда инициализирует новый проект на примере субграфа: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is Файл(ы) ABI должен(ы) соответствовать Вашему контракту (контрактам). Существует несколько способов получения файлов ABI: - Если Вы создаете свой собственный проект, у Вас, скорее всего, будет доступ к наиболее актуальным ABIS. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## Релизы SpecVersion - -| Версия | Примечания к релизу | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Добавлена поддержка обработчиков событий, имеющих доступ к чекам транзакций. | -| 0.0.4 | Добавлена ​​поддержка управления функциями субграфа. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx index fb468f6110f5..75c1d3e9bf15 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/ql-schema.mdx @@ -1,28 +1,28 @@ --- -title: The Graph QL Schema +title: Схема GraphQL --- ## Обзор -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +Схема для вашего субграфа находится в файле `schema.graphql`. Схемы GraphQL определяются с использованием языка определения интерфейса GraphQL. -> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. +> Примечание: Если вы никогда не писали схему GraphQL, рекомендуется ознакомиться с этим введением в систему типов GraphQL. Справочную документацию по схемам GraphQL можно найти в разделе [GraphQL API](/subgraphs/querying/graphql-api/). ### Определение Объектов Прежде чем определять объекты, важно сделать шаг назад и задуматься над тем, как структурированы и связаны Ваши данные. -- All queries will be made against the data model defined in the subgraph schema. 
As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- Все запросы будут выполняться против модели данных, определенной в схеме субграфа. Поэтому проектирование схемы субграфа должно основываться на запросах, которые ваше приложение будет выполнять. - Может быть полезно представить объекты как «объекты, содержащие данные», а не как события или функции. -- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. -- Each type that should be an entity is required to be annotated with an `@entity` directive. +- Вы определяете типы объектов в файле `schema.graphql`, и Graph Node будет генерировать поля верхнего уровня для запроса отдельных экземпляров и коллекций этих типов объектов. +- Каждый тип, который должен быть объектом, должен быть аннотирован директивой `@entity`. - По умолчанию объекты изменяемы, то есть мэппинги могут загружать существующие объекты, изменять и сохранять их новую версию. - - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - Изменяемость имеет свою цену, поэтому для типов объектов, которые никогда не будут изменяться, например, содержащих данные, извлеченные дословно из чейна, рекомендуется пометить их как неизменяемые с помощью `@entity(immutable: true)`. - Если изменения происходят в том же блоке, в котором был создан объект, то мэппинги могут вносить изменения в неизменяемые объекты. Неизменяемые объекты гораздо быстрее записываются и запрашиваются, поэтому их следует использовать, когда это возможно. #### Удачный пример -The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined. 
+Следующий объект `Gravatar` структурирован вокруг объекта Gravatar и является хорошим примером того, как можно определить объект. ```graphql type Gravatar @entity(immutable: true) { @@ -36,7 +36,7 @@ type Gravatar @entity(immutable: true) { #### Неудачный пример -The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1. +Следующий пример объектов `GravatarAccepted` и `GravatarDeclined` основан на событиях. Не рекомендуется сопоставлять события или вызовы функций 1:1 к объектам. ```graphql type GravatarAccepted @entity { @@ -56,15 +56,15 @@ type GravatarDeclined @entity { #### Дополнительные и обязательные поля -Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error: +Поля объектов могут быть определены как обязательные или необязательные. Обязательные поля указываются с помощью `!` в схеме. Если поле является скалярным, вы получите ошибку при попытке сохранить объект. Если поле ссылается на другой объект, то вы получите следующую ошибку: ``` Null value resolved for non-null field 'name' ``` -Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` id's will be faster to write and query as those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`. +Каждый объект должен иметь поле `id`, которое должно быть типа `Bytes!` или `String!`. 
Обычно рекомендуется использовать `Bytes!`, если только `id` не содержит текст, читаемый человеком, поскольку объекты с `id` типа `Bytes!` будут быстрее записываться и запрашиваться, чем те, у которых `id` типа `String!`. Поле `id` служит основным ключом и должно быть уникальным среди всех объектов одного типа. По историческим причинам также принимается тип `ID!`, который является синонимом `String!`. -For some entity types the `id` for `Bytes!` is constructed from the id's of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id) ` to form the id from the id's of `left` and `right`. Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique id's as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`. +Для некоторых типов объектов `id` для `Bytes!` формируется из `id` двух других объектов. Это возможно с использованием функции `concat`, например, `let id = left.id.concat(right.id)`, чтобы сформировать `id` из `id` объектов `left` и `right`. Аналогично, чтобы сформировать `id` из `id` существующего объекта и счетчика `count`, можно использовать `let id = left.id.concatI32(count)`. Конкатенация гарантирует, что `id` будет уникальным, если длина `left.id` одинаковая для всех таких объектов, например, если `left.id` — это `Address`. ### Встроенные скалярные типы @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two В API GraphQL поддерживаются следующие скаляры: -| Тип | Описание | -| --- | --- | -| `Bytes` | Массив байтов, представленный в виде шестнадцатеричной строки. Обычно используется для хэшей и адресов Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. 
| -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Тип | Описание | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Массив байтов, представленный в виде шестнадцатеричной строки. Обычно используется для хэшей и адресов Ethereum. | +| `String` | Скаляр для значений типа `string`. Нулевые символы не поддерживаются и автоматически удаляются. | +| `Boolean` | Скаляр для значений `boolean`. | +| `Int` | Спецификация GraphQL определяет тип `Int` как знаковое 32-битное целое число. | +| `Int8` | 8-байтовое целое число со знаком, также известное как 64-битное целое число со знаком, может хранить значения в диапазоне от -9,223,372,036,854,775,808 до 9,223,372,036,854,775,807. Рекомендуется использовать его для представления типа `i64` из ethereum. | +| `BigInt` | Большие целые числа. Используются для типов `uint32`, `int64`, `uint64`, ..., `uint256` из Ethereum. Примечание: все типы, меньше чем `uint32`, такие как `int32`, `uint24` или `int8`, представлены как `i32`. 
| +| `BigDecimal` | `BigDecimal` Высокоточные десятичные числа, представленные как мантисса и экспонента. Диапазон экспоненты от −6143 до +6144. Округляется до 34 значащих цифр. | +| `Timestamp` | Это значение типа `i64` в микросекундах. Обычно используется для полей `timestamp` в временных рядах и агрегациях. | ### Перечисления @@ -95,9 +95,9 @@ enum TokenStatus { } ``` -Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`. The example below demonstrates what the Token entity would look like with an enum field: +Как только перечисление определено в схеме, вы можете использовать строковое представление значения перечисления для установки поля перечисления в объекте. Например, вы можете установить `tokenStatus` в значение `SecondOwner`, сначала определив ваш объект, а затем установив поле с помощью `entity.tokenStatus = "SecondOwner"`. Пример ниже демонстрирует, как будет выглядеть объект Token с полем перечисления: -More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/). +Более подробную информацию о написании перечислений можно найти в [документации по GraphQL](https://graphql.org/learn/schema/). 
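The "example below" that the enum passage references is unchanged context and is elided from this hunk. A minimal sketch of what such a `Token` entity could look like — enum values other than `SecondOwner`, and the non-enum fields, are assumed for illustration:

```graphql
enum TokenStatus {
  OriginalOwner
  SecondOwner
  ThirdOwner
}

type Token @entity {
  id: ID!
  # hypothetical owner field, for illustration only
  currentOwner: Bytes!
  # enum-typed field, set from mappings via entity.tokenStatus = "SecondOwner"
  tokenStatus: TokenStatus!
}
```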
### Связи объектов @@ -107,7 +107,7 @@ More detail on writing enums can be found in the [GraphQL documentation](https:/ #### Связи "Один к одному" -Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type: +Определите тип объекта `Transaction` с необязательной связью "один к одному" с типом объекта `TransactionReceipt`: ```graphql type Transaction @entity(immutable: true) { @@ -123,7 +123,7 @@ type TransactionReceipt @entity(immutable: true) { #### Связи "Один ко многим" -Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type: +Определите тип объекта `TokenBalance` с обязательной связью "один ко многим" с типом объекта `Token`: ```graphql type Token @entity(immutable: true) { @@ -139,13 +139,13 @@ type TokenBalance @entity { ### Обратные запросы -Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. +Обратные поисковые запросы можно определить в объекте с помощью поля `@derivedFrom`. Это создает виртуальное поле в объекте, которое может быть запрашиваемо, но не может быть установлено вручную через API отображений. Вместо этого оно вычисляется на основе связи, определенной в другом объекте. Для таких отношений редко имеет смысл хранить обе стороны связи, и как производительность индексирования, так и производительность запросов будут лучше, если хранится только одна сторона связи, а другая извлекается. -Для связей "один ко многим" связь всегда должна храниться на стороне "один", а сторона "многие" всегда должна быть производной. 
Такое сохранение связи, вместо хранения массива объектов на стороне "многие", приведет к значительному повышению производительности как при индексации, так и при запросах к субграфам. В общем, следует избегать хранения массивов объектов настолько, насколько это возможно. +Для отношений «один ко многим» отношение всегда должно храниться на стороне «один», а сторона «многие» должна быть выведена. Хранение отношений таким образом, а не хранение массива объектов на стороне «многие», приведет к значительному улучшению производительности как при индексировании, так и при запросах к субграфу. В общем, хранение массивов объектов следует избегать, насколько это возможно на практике. #### Пример -We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: +Мы можем сделать балансы для токена доступными из токена, создав поле `tokenBalances`: ```graphql type Token @entity(immutable: true) { @@ -160,15 +160,15 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Вот пример того, как написать мэппинг для субграфа с обратными поисковыми запросами: ```typescript -let token = new Token(event.address) // Create Token -token.save() // tokenBalances is derived automatically +let token = new Token(event.address) // Создание токена +token.save() // tokenBalances определяется автоматически let tokenBalance = new TokenBalance(event.address) tokenBalance.amount = BigInt.fromI32(0) -tokenBalance.token = token.id // Reference stored here +tokenBalance.token = token.id // Ссылка на токен сохраняется здесь tokenBalance.save() ``` @@ -178,7 +178,7 @@ tokenBalance.save() #### Пример -Define a reverse lookup from a `User` entity type to an `Organization` entity type. In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. 
In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID. +Определите обратный поиск от объекта `User` к объекту `Organization`. В примере ниже это достигается через поиск атрибута `members` внутри объекта `Organization`. В запросах поле `organizations` на объекте `User` будет разрешаться путем поиска всех объектов `Organization`, которые включают идентификатор пользователя. ```graphql type Organization @entity { @@ -194,7 +194,7 @@ type User @entity { } ``` -A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like +Более эффективный способ хранения этих отношений — это использование таблицы отображений, которая содержит одну запись для каждой пары `User` / `Organization` с такой схемой ```graphql type Organization @entity { @@ -231,11 +231,11 @@ query usersWithOrganizations { } ``` -Такой более сложный способ хранения связей "многие ко многим" приведет к уменьшению объема хранимых данных для субграфа и, следовательно, к тому, что субграф будет значительно быстрее индексироваться и запрашиваться. +Этот более сложный способ хранения отношений многие ко многим приведет к меньшему объему данных, хранимых для субграфа, что, в свою очередь, сделает субграф значительно быстрее как при индексировании, так и при запросах. ### Добавление комментариев к схеме -As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: +Согласно спецификации GraphQL, комментарии могут быть добавлены над атрибутами объектов схемы с использованием символа решетки `#`. 
Это показано в следующем примере: ```graphql type MyFirstEntity @entity { @@ -251,7 +251,7 @@ type MyFirstEntity @entity { Определение полнотекстового запроса включает в себя название запроса, словарь языка, используемый для обработки текстовых полей, алгоритм ранжирования, используемый для упорядочивания результатов, и поля, включенные в поиск. Каждый полнотекстовый запрос может охватывать несколько полей, но все включенные поля должны относиться к одному типу объекта. -To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. +Чтобы добавить полнотекстовый запрос, включите тип `_Schema_` с директивой `fulltext` в схему GraphQL. ```graphql type _Schema_ @@ -274,7 +274,7 @@ type Band @entity { } ``` -The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. +Пример поля `bandSearch` может быть использован в запросах для фильтрации объектов `Band` на основе текстовых документов в полях `name`, `description` и `bio`. Перейдите к [GraphQL API - Запросы](/subgraphs/querying/graphql-api/#queries) для описания API полнотекстового поиска и других примеров использования. ```graphql query { @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Управление функциями](#экспериментальные-функции):** Начиная с `specVersion` `0.0.4` и далее, `fullTextSearch` должен быть объявлен в разделе `features` манифеста субграфа. 
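For reference, the `features` declaration that the note above requires lives at the top level of `subgraph.yaml`. A minimal sketch — only the keys relevant to the feature declaration are shown; a real manifest also declares `schema`, `dataSources`, and so on:

```yaml
# Sketch only: a complete manifest needs schema, dataSources, etc.
specVersion: 1.3.0
features:
  - fullTextSearch
```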
## Поддерживаемые языки

@@ -295,30 +295,30 @@ query {

Поддерживаемые языковые словари:

-| Code | Словарь |
-| ------- | ------------- |
-| простой | General |
-| da | Danish |
-| nl | Dutch |
-| en | English |
-| fi | Finnish |
-| fr | French |
-| de | German |
-| hu | Hungarian |
-| it | Italian |
-| no | Norwegian |
-| pt | Португальский |
-| ro | Romanian |
-| ru | Russian |
-| es | Spanish |
-| sv | Swedish |
-| tr | Turkish |
+| Код | Словарь |
+| ------ | ------------- |
+| simple | Общий |
+| da | Датский |
+| nl | Голландский |
+| en | Английский |
+| fi | Финский |
+| fr | Французский |
+| de | Немецкий |
+| hu | Венгерский |
+| it | Итальянский |
+| no | Норвежский |
+| pt | Португальский |
+| ro | Румынский |
+| ru | Русский |
+| es | Испанский |
+| sv | Шведский |
+| tr | Турецкий |

### Алгоритмы ранжирования

Поддерживаемые алгоритмы для упорядочивания результатов:

-| Algorithm | Description |
+| Алгоритм | Описание |
| ------------- | ---------------------------------------------------------------------------------------------- |
| rank | Используйте качество соответствия (0-1) полнотекстового запроса, чтобы упорядочить результаты. |
-| proximityRank | Similar to rank but also includes the proximity of the matches. |
+| proximityRank | Похоже на рейтинг, но также учитывает близость совпадений. |
diff --git a/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx
index 8136fb559cff..60fcbd1a8dd9 100644
--- a/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ru/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -4,20 +4,32 @@ title: Starting Your Subgraph

## Обзор

-The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). 
+
+| Версия | Примечания к релизу |
+| :----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx
index a8f1a728f47a..3bbbf09cdf51 100644
--- a/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/ru/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest

## Обзор

-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -Один субграф может: +A single Subgraph can: - Индексировать данные из нескольких смарт-контрактов (но не из нескольких сетей). @@ -24,12 +24,12 @@ The **subgraph definition** consists of the following files: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
-For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). Важными элементами манифеста, которые необходимо обновить, являются: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. 
- `features`: a list of all used [feature](#experimental-features) names.

-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.

-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.

- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.

- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.

-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.

- `dataSources.mapping.entities`: the entities that the data source writes to the store.
The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. 
Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Обработчики событий -Обработчики событий в субграфе реагируют на конкретные события, генерируемые смарт-контрактами в блокчейне, и запускают обработчики, определенные в манифесте подграфа. Это позволяет субграфам обрабатывать и хранить данные о событиях в соответствии с определенной логикой. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Определение обработчика событий -Обработчик событий объявлен внутри источника данных в конфигурации YAML субграфа. Он определяет, какие события следует прослушивать, и соответствующую функцию, которую необходимо выполнить при обнаружении этих событий. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Обработчики вызовов -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. 
To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Обработчики вызовов срабатывают только в одном из двух случаев: когда указанная функция вызывается учетной записью, отличной от самого контракта, или когда она помечена как внешняя в Solidity и вызывается как часть другой функции в том же контракте. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers.
These are far more performant than call handlers, and are supported on every EVM network. ### Определение обработчика вызова @@ -169,7 +169,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Функция мэппинга -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Обработчики блоков -В дополнение к подписке на события контракта или вызовы функций, субграф может захотеть обновить свои данные по мере добавления в цепочку новых блоков. Чтобы добиться этого, субграф может запускать функцию после каждого блока или после блоков, соответствующих заранее определенному фильтру. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
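Putting the handler kinds together: all of them are declared under a data source's `mapping` in the manifest. The sketch below is illustrative only — it reuses the Gravity/Gravatar names from the examples on this page, and the event and function signatures are assumptions to adapt to your own contract:

```yaml
mapping:
  kind: ethereum/events
  apiVersion: 0.0.9
  language: wasm/assemblyscript
  entities:
    - Gravatar
  abis:
    - name: Gravity
      file: ./abis/Gravity.json
  eventHandlers:
    # Runs whenever the contract emits a matching event
    - event: NewGravatar(uint256,address,string,string)
      handler: handleNewGravatar
  callHandlers:
    # Runs whenever the named function is called (requires Parity tracing support)
    - function: createGravatar(string,string)
      handler: handleCreateGravatar
  blockHandlers:
    # Runs per block; optional filters are covered in the section on supported filters
    - handler: handleBlock
  file: ./src/mapping.ts
```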
### Поддерживаемые фильтры @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing. Отсутствие фильтра для обработчика блоков гарантирует, что обработчик вызывается для каждого блока. Источник данных может содержать только один обработчик блоков для каждого типа фильтра. @@ -232,7 +232,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Однократный фильтр @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -Определенный обработчик с однократным фильтром будет вызываться только один раз перед запуском всех остальных обработчиков. Эта конфигурация позволяет субграфу использовать обработчик в качестве обработчика инициализации, выполняя определенные задачи в начале индексирования.
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Функция мэппинга -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. 
```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Стартовые блоки -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Подсказки индексатору -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. 
It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Сокращение -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> Термин «история» в контексте субграфов означает хранение данных, отражающих старые состояния изменяемых объектов. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. История данного блока необходима для: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Отката субграфа обратно к этому блоку +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block Если исторические данные на момент создания блока были удалены, вышеупомянутые возможности будут недоступны. 
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: Чтобы сохранить определенный объем исторических данных: @@ -532,3 +532,18 @@ For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/# indexerHints: prune: never ``` + +## SpecVersion Releases + +| Версия | Примечания к релизу | +| :----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing Subgraph features. | diff --git a/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx index a747fd939efb..336ce2398d0d 100644 --- a/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/ru/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Фреймворк модульного тестирования --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Начало работы @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### Параметры CLI @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. 
-h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,17 +145,17 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Демонстрационный субграф +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Видеоуроки -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Структура тестов -_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_ +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ ### describe() @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im Вот и все - мы создали наш первый тест! 👏 -Теперь, чтобы запустить наши тесты, Вам просто нужно запустить в корневой папке своего субграфа следующее: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file. 
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow: +NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below: `.test.ts` file: @@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index' import { ipfs } from '@graphprotocol/graph-ts' -// Export ipfs.map() callback in order for matchstck to detect it +// Export ipfs.map() callback in order for matchstick to detect it export { processGravatar } from './utils' test('ipfs.cat', () => { @@ -1172,7 +1172,7 @@ templates: network: mainnet mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/token-lock-wallet.ts handler: handleMetadata @@ -1289,11 +1289,11 @@ test('file/ipfs dataSource creation example', () => { ## Тестовое покрытие -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked.
-### Prerequisites +### Предварительные требования To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: @@ -1311,7 +1311,7 @@ In order for that function to be visible (for it to be included in the `wat` fil export { handleNewGravatar } ``` -### Usage +### Применение После того как всё это будет настроено, чтобы запустить инструмент тестового покрытия, просто запустите: @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Дополнительные ресурсы -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Обратная связь diff --git a/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx index 8ae55fbd8bcc..4f15c642b820 100644 --- a/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/ru/subgraphs/developing/deploying/multiple-networks.mdx @@ -1,12 +1,13 @@ --- title: Развертывание субграфа в нескольких сетях +sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). 
-## Развертывание подграфа в нескольких сетях +## Deploying the Subgraph to multiple networks -В некоторых случаях вы захотите развернуть один и тот же подграф в нескольких сетях, не дублируя весь его код. Основная проблема, возникающая при этом, заключается в том, что адреса контрактов в этих сетях разные. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Использование `graph-cli` @@ -19,7 +20,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su --network-file Путь к файлу конфигурации сетей (по умолчанию: "./networks.json") ``` -Вы можете использовать опцию `--network` для указания конфигурации сети из стандартного файла `json` (по умолчанию используется `networks.json`), чтобы легко обновлять свой субграф во время разработки. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Примечание: Команда `init` теперь автоматически сгенерирует `networks.json` на основе предоставленной информации. Затем Вы сможете обновить существующие или добавить дополнительные сети. @@ -53,7 +54,7 @@ This page explains how to deploy a subgraph to multiple networks. To deploy a su > Примечание: Вам не нужно указывать ни один из `templates` (если они у Вас есть) в файле конфигурации, только `dataSources`. Если есть какие-либо `templates`, объявленные в файле `subgraph.yaml`, их сеть будет автоматически обновлена до указанной с помощью опции `--network`. -Теперь давайте предположим, что Вы хотите иметь возможность развернуть свой субграф в сетях `mainnet` и `sepolia`, и это Ваш `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -95,7 +96,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -Команда `build` обновит Ваш `subgraph.yaml` конфигурацией `sepolia`, а затем повторно скомпилирует субграф. Ваш файл `subgraph.yaml` теперь должен выглядеть следующим образом: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -126,7 +127,7 @@ yarn deploy --network sepolia --network-file path/to/config Одним из способов параметризации таких аспектов, как адреса контрактов, с использованием старых версий `graph-cli` является генерация его частей с помощью системы шаблонов, такой как [Mustache](https://mustache.github.io/) или [Handlebars](https://handlebarsjs.com/). -Чтобы проиллюстрировать этот подход, давайте предположим, что субграф должен быть развернут в майннете и в сети Sepolia с использованием разных адресов контракта. Затем Вы можете определить два файла конфигурации, содержащие адреса для каждой сети: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -178,7 +179,7 @@ dataSources: } ``` -Чтобы развернуть этот субграф для основной сети или сети Sepolia, Вам нужно просто запустить одну из двух следующих команд: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -192,25 +193,25 @@ yarn prepare:sepolia && yarn deploy **Примечание:** Этот подход также можно применять в более сложных ситуациях, когда необходимо заменить не только адреса контрактов и сетевые имена, но и сгенерировать мэппинги или ABI из шаблонов. -Это предоставит Вам `chainHeadBlock`, который Вы сможете сравнить с `latestBlock` своего субграфа, чтобы проверить, не отстает ли он. 
`synced` сообщает, попал ли субграф в чейн. `health` в настоящее время может принимать значения `healthy`, если ошибки отсутствуют, или `failed`, если произошла ошибка, остановившая работу субграфа. В этом случае Вы можете проверить поле `fatalError` для получения подробной информации об этой ошибке. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. -## Политика архивирования подграфов в Subgraph Studio +## Subgraph Studio Subgraph archive policy -Версия субграфа в Studio архивируется, если и только если выполняются следующие критерии: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - Версия не опубликована в сети (или ожидает публикации) - Версия была создана 45 или более дней назад -Субграф не запрашивался в течение 30 дней +The Subgraph hasn't been queried in 30 days -Кроме того, когда развертывается новая версия, если субграф не был опубликован, то версия N-2 субграфа архивируется. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -У каждого подграфа, затронутого этой политикой, есть возможность вернуть соответствующую версию обратно. +Every Subgraph affected by this policy has an option to bring the version in question back. -## Проверка работоспособности подграфа +## Checking Subgraph health -Если подграф успешно синхронизируется, это хороший признак того, что он будет работать надёжно.
Однако новые триггеры в сети могут привести к тому, что ваш подграф попадет в состояние непроверенной ошибки, или он может начать отставать из-за проблем с производительностью или проблем с операторами нод. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node предоставляет конечную точку GraphQL, которую Вы можете запросить для проверки статуса своего субграфа. В хостинговом сервисе он доступен по адресу `https://api.thegraph.com/index-node/graphql`. На локальной ноде он по умолчанию доступен через порт `8030/graphql`. Полную схему для этой конечной точки можно найти [здесь](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Вот пример запроса, проверяющего состояние текущей версии субграфа: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -237,4 +238,4 @@ Graph Node предоставляет конечную точку GraphQL, ко } ``` -Это предоставит Вам `chainHeadBlock`, который Вы сможете сравнить с `latestBlock` своего субграфа, чтобы проверить, не отстает ли он. `synced` сообщает, попал ли субграф в чейн. `health` в настоящее время может принимать значения `healthy`, если ошибки отсутствуют, или `failed`, если произошла ошибка, остановившая работу субграфа. 
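The interesting fields in the status response are `health`, `synced`, and the block numbers under `chains`. As a sketch, a small helper with hypothetical names — assuming only the response shape of the query above — can flag a Subgraph that has failed or fallen too far behind the chain head:

```typescript
// Minimal shape of one entry from indexingStatusForCurrentVersion,
// following the fields selected in the query above.
interface ChainStatus {
  chainHeadBlock: { number: string }
  latestBlock: { number: string }
}

interface IndexingStatus {
  synced: boolean
  health: 'healthy' | 'unhealthy' | 'failed'
  chains: ChainStatus[]
}

// How many blocks the Subgraph lags behind the chain head.
function blocksBehind(status: IndexingStatus): number {
  const chain = status.chains[0]
  return Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number)
}

// Flag a Subgraph that has failed or lags more than maxLag blocks.
// The maxLag threshold is illustrative, not a protocol constant.
function needsAttention(status: IndexingStatus, maxLag = 50): boolean {
  return status.health === 'failed' || blocksBehind(status) > maxLag
}

// Example: a healthy Subgraph three blocks behind the chain head.
const status: IndexingStatus = {
  synced: true,
  health: 'healthy',
  chains: [{ chainHeadBlock: { number: '6627920' }, latestBlock: { number: '6627917' } }],
}
console.log(blocksBehind(status), needsAttention(status)) // → 3 false
```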
В этом случае Вы можете проверить поле `fatalError` для получения подробной информации об этой ошибке. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx index e1aadd279a0b..3ff9c8594763 100644 --- a/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/ru/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Узнайте, как развернуть свой субграф в Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
## Обзор Subgraph Studio В [Subgraph Studio](https://thegraph.com/studio/) Вы можете выполнять следующие действия: -- Просматривать список созданных Вами субграфов -- Управлять, просматривать детали и визуализировать статус конкретного субграфа -- Создание и управление ключами API для определенных подграфов +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Ограничивать использование своих API-ключей определенными доменами и разрешать только определенным индексаторам выполнять запросы с их помощью -- Создавать свой субграф -- Развертывать свой субграф, используя The Graph CLI -- Тестировать свой субграф в тестовой среде Playground -- Интегрировать свой субграф на стадии разработки, используя URL запроса разработки -- Публиковать свой субграф в The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Управлять своими платежами ## Установка The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Откройте [Subgraph Studio](https://thegraph.com/studio/). 2. Подключите свой кошелек для входа. - Вы можете это сделать через MetaMask, Coinbase Wallet, WalletConnect или Safe. -3. После входа в систему Ваш уникальный ключ развертывания будет отображаться на странице сведений о Вашем субграфе. - - Ключ развертывания позволяет публиковать субграфы, а также управлять вашими API-ключами и оплатой. Он уникален, но может быть восстановлен, если Вы подозреваете, что он был взломан. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Важно: для выполнения запросов к субграфам необходим API-ключ +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Совместимость подграфов с сетью Graph -In order to be supported by Indexers on The Graph Network, subgraphs must: - -- Index a [supported network](/supported-networks/) -- Не должны использовать ни одну из следующих функций: - - ipfs.cat & ipfs.map - - Нефатальные ошибки - - Grafting +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Инициализация Вашего Субграфа -После создания субграфа в Subgraph Studio Вы можете инициализировать его код через CLI с помощью следующей команды: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init <SUBGRAPH_SLUG> ``` -Значение `<SUBGRAPH_SLUG>` можно найти на странице сведений о субграфе в Subgraph Studio, см. изображение ниже: +You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio; see the image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -После запуска `graph init` Вам будет предложено ввести адрес контракта, сеть и ABI, которые Вы хотите запросить. Это приведет к созданию новой папки на Вашем локальном компьютере с базовым кодом для начала работы над субграфом. Затем Вы можете завершить работу над своим субграфом, чтобы убедиться, что он функционирует должным образом. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Аутентификация в Graph -Прежде чем Вы сможете развернуть свой субграф в Subgraph Studio, Вам будет необходимо войти в свою учетную запись в CLI. Для этого Вам понадобится ключ развертывания, который Вы сможете найти на странице сведений о субграфе. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find on your Subgraph details page. После этого используйте следующую команду для аутентификации через CLI: @@ -91,11 +85,11 @@ graph auth ## Развертывание субграфа -Когда будете готовы, Вы сможете развернуть свой субграф в Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Развертывание субграфа с помощью CLI отправляет его в Studio, где Вы сможете протестировать его и обновить метаданные. Это действие не приводит к публикации субграфа в децентрализованной сети. +> Deploying a Subgraph with the CLI pushes it to Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Используйте следующую команду CLI для развертывания своего субграфа: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy <SUBGRAPH_SLUG> ```
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. ## Публикация Вашего субграфа -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Управление версиями Вашего субграфа с помощью CLI -Если Вы хотите обновить свой субграф, Вы можете сделать следующее: +If you want to update your Subgraph, you can do the following: - Вы можете развернуть новую версию в Studio, используя CLI (на этом этапе она будет только приватной). - Если результат Вас устроит, Вы можете опубликовать новое развертывание в [Graph Explorer](https://thegraph.com/explorer). -- Это действие создаст новую версию вашего субграфа, о которой Кураторы смогут начать сигнализировать, а Индексаторы — индексировать. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. 
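The development query URL mentioned above accepts ordinary GraphQL-over-HTTP POST requests, so a deployed version can be exercised from any HTTP client during staging. A minimal sketch in Python using only the standard library; the URL in the usage note is a placeholder, not a real endpoint:

```python
import json
import urllib.request

def build_payload(query, variables=None):
    """Encode a GraphQL request body as the JSON bytes the endpoint expects."""
    return json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")

def query_subgraph(url, query, variables=None):
    """POST a GraphQL query to a Subgraph query URL and return the decoded JSON."""
    request = urllib.request.Request(
        url,
        data=build_payload(query, variables),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Usage (placeholder URL: substitute the development query URL shown in Studio):
# result = query_subgraph(
#     "https://api.studio.thegraph.com/query/<id>/<name>/<version>",
#     "{ _meta { block { number } } }",
# )
```

Separating payload construction from the network call makes the request shape easy to verify without hitting a live endpoint.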
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Автоматическое архивирование версий подграфа -Каждый раз, когда Вы развертываете новую версию субграфа в Subgraph Studio, предыдущая версия архивируется. Архивированные версии не будут проиндексированы/синхронизированы и, следовательно, их нельзя будет запросить. Вы можете разархивировать архивированную версию своего субграфа в Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Примечание: предыдущие версии непубликованных субграфов, развернутых в Studio, будут автоматически архивированы. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. 
![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/ru/subgraphs/developing/developer-faq.mdx b/website/src/pages/ru/subgraphs/developing/developer-faq.mdx index 4c5aa00bf9cf..a86d764816c8 100644 --- a/website/src/pages/ru/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/ru/subgraphs/developing/developer-faq.mdx @@ -1,43 +1,43 @@ --- title: Developer FAQ -sidebarTitle: FAQ +sidebarTitle: Часто задаваемые вопросы --- На этой странице собраны некоторые из наиболее частых вопросов для разработчиков, использующих The Graph. ## Вопросы, связанные с субграфом -### 1. Что такое субграф? +### 1. What is a Subgraph? -Субграф - это пользовательский API, построенный на данных блокчейна. Субграфы запрашиваются с использованием языка запросов GraphQL и развертываются на Graph Node с помощью Graph CLI. После развертывания и публикации в децентрализованной сети The Graph индексаторы обрабатывают субграфы и делают их доступными для запросов потребителей субграфов. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. Каков первый шаг в создании субграфа? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Могу ли я создать субграф, если в моих смарт-контрактах нет событий? +### 3. 
Can I still create a Subgraph if my smart contracts don't have events? -Настоятельно рекомендуется структурировать смарт-контракты так, чтобы они содержали события, связанные с данными, которые вы хотите запросить. Обработчики событий в субграфе срабатывают на события контракта и являются самым быстрым способом получения нужных данных. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -Если контракты, с которыми Вы работаете, не содержат событий, Ваш субграф может использовать обработчики вызовов и блоков для запуска индексации. Хотя это не рекомендуется, так как производительность будет существенно ниже. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Могу ли я изменить учетную запись GitHub, связанную с моим субграфом? +### 4. Can I change the GitHub account associated with my Subgraph? -Нет. После создания субграфа связанная с ним учетная запись GitHub не может быть изменена. Пожалуйста, учтите это перед созданием субграфа. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. Как обновить субграф в майннете? +### 5. How do I update a Subgraph on mainnet? -Вы можете развернуть новую версию своего субграфа в Subgraph Studio с помощью интерфейса командной строки (CLI). Это действие сохраняет конфиденциальность вашего субграфа, но, если результат Вас удовлетворит, Вы сможете опубликовать его в Graph Explorer. При этом будет создана новая версия Вашего субграфа, на которую Кураторы смогут начать подавать сигналы. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. 
This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Можно ли дублировать субграф на другую учетную запись или конечную точку без повторного развертывания? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Вы должны повторно развернуть субграф, но если идентификатор субграфа (хэш IPFS) не изменится, его не нужно будет синхронизировать с самого начала. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. Как вызвать контрактную функцию или получить доступ к публичной переменной состояния из моих мэппингов субграфа? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? В настоящее время нет, так как мэппинги написаны на языке AssemblyScript. @@ -45,15 +45,15 @@ Take a look at `Access to smart contract` state inside the section [AssemblyScri ### 9. При прослушивании нескольких контрактов, возможно ли выбрать порядок прослушивания событий контрактов? -Внутри субграфа события всегда обрабатываются в том порядке, в котором они появляются в блоках, независимо от того, относится ли это к нескольким контрактам или нет. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. Чем шаблоны отличаются от источников данных? -Шаблоны позволяют Вам быстро создавать источники данных, пока Ваш субграф индексируется.
Ваш контракт может создавать новые контракты по мере того, как люди будут с ним взаимодействовать. Поскольку форма этих контрактов (ABI, события и т. д.) известна заранее, Вы сможете определить, как Вы хотите индексировать их в шаблоне. Когда они будут сгенерированы, Ваш субграф создаст динамический источник данных, предоставив адрес контракта. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Могу ли я удалить свой субграф? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. 
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Вопросы, связанный с сетью @@ -110,11 +110,11 @@ dataSource.address() Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. Есть ли какие-либо советы по увеличению производительности индексирования? Синхронизация моего субграфа занимает очень много времени +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Есть ли способ напрямую запросить субграф, чтобы определить номер последнего проиндексированного блока? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Да! Попробуйте выполнить следующую команду, заменив "organization/subgraphName" на название организации, под которой она опубликована, и имя Вашего субграфа: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. 
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Прочее diff --git a/website/src/pages/ru/subgraphs/developing/introduction.mdx b/website/src/pages/ru/subgraphs/developing/introduction.mdx index d5b1df06feae..8afe64411063 100644 --- a/website/src/pages/ru/subgraphs/developing/introduction.mdx +++ b/website/src/pages/ru/subgraphs/developing/introduction.mdx @@ -1,6 +1,6 @@ --- title: Introduction to Subgraph Development -sidebarTitle: Introduction +sidebarTitle: Введение --- To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). @@ -11,21 +11,21 @@ To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start На The Graph Вы можете: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Использовать GraphQL для запроса существующих субграфов. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### Что такое GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Действия разработчика -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
-- Создавайте собственные субграфы для удовлетворения конкретных потребностей в данных, обеспечивая улучшенную масштабируемость и гибкость для других разработчиков. -- Развертывайте, публикуйте и сигнализируйте о своих субграфах в The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### Что такое субграфы? +### What are Subgraphs? -Субграф — это пользовательский API, созданный на основе данных блокчейна. Он извлекает данные из блокчейна, обрабатывает их и сохраняет так, чтобы их можно было легко запросить через GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx index 5787620c079a..84674685403f 100644 --- a/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/ru/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -1,31 +1,31 @@ --- -title: Deleting a Subgraph +title: Удаление субграфа --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Удалите свой субграф, используя [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. 
+> Удаление вашего субграфа удалит все опубликованные версии из сети The Graph, но он останется видимым в Graph Explorer и Subgraph Studio для пользователей, которые на него сигнализировали. ## Пошаговое руководство -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Перейдите на страницу субграфа в [Subgraph Studio](https://thegraph.com/studio/). -2. Click on the three-dots to the right of the "publish" button. +2. Нажмите на три точки справа от кнопки "опубликовать". -3. Click on the option to "delete this subgraph": +3. Нажмите на опцию "удалить этот субграф": - ![Delete-subgraph](/img/Delete-subgraph.png) + ![Удалить субграф](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. В зависимости от состояния субграфа, вам будут предложены различные варианты. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - Если субграф не опубликован, просто нажмите «удалить» и подтвердите действие. + - Если субграф опубликован, вам нужно будет подтвердить действие в вашем кошельке перед его удалением из Studio. Если субграф опубликован в нескольких сетях, таких как тестовая сеть и основная сеть, могут потребоваться дополнительные шаги. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> Если владелец субграфа подал сигнал на него, сигнализированный GRT будет возвращен владельцу. -### Important Reminders +### Важные напоминания -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Кураторы больше не смогут сигналить на сабграф. 
-- Кураторы, уже подавшие сигнал на субграф, могут отозвать свой сигнал по средней цене доли. -- Deleted subgraphs will show an error message. +- Как только вы удалите субграф, он **не** будет отображаться на главной странице Graph Explorer. Однако пользователи, которые сделали сигнал на него, все еще смогут просматривать его на своих профилях и удалить свой сигнал. +- Кураторы больше не смогут сигнализировать о субграфе. +- Кураторы, которые уже сигнализировали о субграфе, могут отозвать свой сигнал по средней цене доли. +- Удалённые субграфы будут показывать сообщение об ошибке. diff --git a/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx index bc76890218f7..f99757ea07e9 100644 --- a/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/ru/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Субграфы, опубликованные в децентрализованной сети, имеют NFT, сминченный по адресу, опубликовавшему субграф. NFT основан на стандарте ERC721, который облегчает переводы между аккаунтами в The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Напоминания -- Тот, кто владеет NFT, управляет субграфом. -- Если владелец решит продать или передать NFT, он больше не сможет редактировать или обновлять этот субграф в сети. -- Вы можете легко перенести управление субграфом на мультиподпись. -- Участник сообщества может создать субграф от имени DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. 
+- A community member can create a Subgraph on behalf of a DAO. ## Просмотр Вашего субграфа как NFT -Чтобы просмотреть свой субграф как NFT, Вы можете посетить маркетплейс NFT, например, **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Пошаговое руководство -Чтобы передать право собственности на субграф, выполните следующие действия: +To transfer ownership of a Subgraph, do the following: 1. Используйте встроенный в Subgraph Studio пользовательский интерфейс: - ![Передача права собственности на субграф](/img/subgraph-ownership-transfer-1.png) + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Выберите адрес, на который хотели бы передать субграф: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx index bf789c87b2b0..8838c90b6889 100644 --- a/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/ru/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Публикация подграфа в децентрализованной сети +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. 
-Публикуя субграф в децентрализованной сети, Вы делаете его доступным для: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,23 +18,23 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -Все опубликованные версии существующего субграфа могут: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Обновление метаданных опубликованного субграфа +### Updating metadata for a published Subgraph -- После публикации своего субграфа в децентрализованной сети Вы можете в любое время обновить метаданные в Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - После сохранения изменений и публикации обновлений они появятся в Graph Explorer. - Важно отметить, что этот процесс не приведет к созданию новой версии, поскольку Ваше развертывание не изменилось. ## Публикация с помощью CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Откройте `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. @@ -43,7 +44,7 @@ As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`]( ### Настройка Вашего развертывания -Вы можете загрузить сборку своего субграфа на конкретную ноду IPFS и дополнительно настроить развертывание с помощью следующих флагов: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -63,31 +64,31 @@ FLAGS ## Добавление сигнала к Вашему субграфу -Разработчики могут добавлять сигнал GRT в свои субграфы, чтобы стимулировать Индексаторов запрашивать субграф. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- Если субграф имеет право на вознаграждение за индексирование, Индексаторы, предоставившие «доказательство индексирования», получат вознаграждение GRT в зависимости от заявленной суммы GRT. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Добавление сигнала в субграф, который не имеет права на получение вознаграждения, не привлечет дополнительных Индексаторов. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. 
> -> Если Ваш субграф имеет право на получение вознаграждения, рекомендуется курировать собственный субграф, добавив как минимум 3,000 GRT, чтобы привлечь дополнительных Индексаторов для индексирования Вашего субграфа. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -При подаче сигнала Кураторы могут решить подать сигнал на определенную версию субграфа или использовать автомиграцию. Если они подают сигнал с помощью автомиграции, доли куратора всегда будут обновляться до последней версии, опубликованной разработчиком. Если же они решат подать сигнал на определенную версию, доли всегда будут оставаться на этой конкретной версии. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Индексаторы могут находить субграфы для индексирования на основе сигналов курирования, которые они видят в Graph Explorer. 
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio позволяет Вам добавлять сигнал в Ваш субграф, добавляя GRT в пул курирования Вашего субграфа в той же транзакции, в которой он публикуется. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Кроме того, Вы можете добавить сигнал GRT к опубликованному субграфу из Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/ru/subgraphs/developing/subgraphs.mdx b/website/src/pages/ru/subgraphs/developing/subgraphs.mdx index 62134b0551ae..8945ce707d0e 100644 --- a/website/src/pages/ru/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/ru/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Субграфы ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. 
To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). ## Жизненный цикл подграфа -Ниже представлен общий обзор жизненного цикла субграфа: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. 
[Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). 
In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. 
To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/ru/subgraphs/explorer.mdx b/website/src/pages/ru/subgraphs/explorer.mdx index 34a535683fca..b963e985dd99 100644 --- a/website/src/pages/ru/subgraphs/explorer.mdx +++ b/website/src/pages/ru/subgraphs/explorer.mdx @@ -2,70 +2,70 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Обзор -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. 
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. -## Inside Explorer +## Внутреннее устройство Explorer -The following is a breakdown of all the key features of Graph Explorer. For additional support, you can watch the [Graph Explorer video guide](/subgraphs/explorer/#video-guide). +Ниже приведён обзор всех ключевых функций Graph Explorer. Для получения дополнительной помощи Вы можете посмотреть [видеоруководство по Graph Explorer](/subgraphs/explorer/#video-guide). -### Subgraphs Page +### Страница субграфов -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Ваши готовые субграфы +- Your own finished Subgraphs - Субграфы, опубликованные другими -- Конкретный скбграф, который Вам нужен (в зависимости от даты создания, количества сигналов или имени). +- The exact Subgraph you want (based on the date created, signal amount, or name). 
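Every Subgraph listed here ultimately serves a GraphQL endpoint. As a rough illustration, a query against such an endpoint might look like the sketch below — the `tokens` entity and its fields are hypothetical and depend entirely on the Subgraph's schema; only the `first`, `orderBy`, and `orderDirection` arguments are standard query parameters:

```graphql
{
  tokens(first: 5, orderBy: createdAt, orderDirection: desc) {
    id
    name
    owner
  }
}
```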
-![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) +![Изображение Explorer 1](/img/Subgraphs-Explorer-Landing.png) -Нажав на субграф, Вы сможете сделать следующее: +When you click into a Subgraph, you will be able to do the following: - Протестировать запросы на тестовой площадке и использовать данные сети для принятия обоснованных решений. -- Подать сигнал GRT на свой собственный субграф или субграфы других, чтобы обратить внимание индексаторов на их значимость и качество. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. -![Explorer Image 2](/img/Subgraph-Details.png) +![Изображение Explorer 2](/img/Subgraph-Details.png) -На специальной странице каждого субграфа Вы можете выполнить следующие действия: +On each Subgraph’s dedicated page, you can do the following: -- Сигнал/снятие сигнала на субграфах +- Signal/Un-signal on Subgraphs - Просмотр дополнительных сведений, таких как диаграммы, текущий идентификатор развертывания и другие метаданные -- Переключение версии с целью изучения прошлых итераций субграфа -- Запрос субграфов через GraphQL -- Тестовые субграфы на тренировочной площадке -- Просмотр индексаторов, индексирующих определенный субграф +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Статистика субграфов (распределения, кураторы и т. д.) 
-- Просмотр объекта, опубликовавшего субграф +- View the entity who published the Subgraph -![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) +![Изображение Explorer 3](/img/Explorer-Signal-Unsignal.png) -### Delegate Page +### Страница Делегатора -On the [Delegate page](https://thegraph.com/explorer/delegate?chain=arbitrum-one), you can find information about delegating, acquiring GRT, and choosing an Indexer. +На [странице Делегатора](https://thegraph.com/explorer/delegate?chain=arbitrum-one) Вы можете найти информацию о делегировании, приобретении GRT и выборе Индексатора. -On this page, you can see the following: +На этой странице Вы можете увидеть следующее: -- Indexers who collected the most query fees -- Indexers with the highest estimated APR +- Индексаторы, собравшие наибольшее количество комиссий за запросы +- Индексаторы с самой высокой расчетной годовой процентной ставкой -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. -### Participants Page +### Страница участников -This page provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. +На этой странице представлен общий обзор всех «участников», включая всех участников сети, таких как Индексаторы, Делегаторы и Кураторы. #### 1. Индексаторы -![Explorer Image 4](/img/Indexer-Pane.png) +![Изображение Explorer 4](/img/Indexer-Pane.png) -Индексаторы являются основой протокола. Они стейкают на субграфы, индексируют их и обслуживают запросы всех, кто использует субграфы. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. 
-В таблице Индексаторов Вы можете увидеть параметры делегирования Индексаторов, их стейк, сумму стейка, которую они поставили на каждый субграф, а также размер дохода, который они получили от комиссий за запросы и вознаграждений за индексирование. +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Особенности** @@ -74,7 +74,7 @@ This page provides a bird's-eye view of all "participants," which includes every - Оставшееся время восстановления — время, оставшееся до того, как Индексатор сможет изменить вышеуказанные параметры делегирования. Периоды восстановления устанавливаются Индексаторами при обновлении параметров делегирования. - Собственность — это депозитная доля Индексатора, которая может быть урезана за злонамеренное или некорректное поведение. - Делегированный стейк — доля Делегаторов, которая может быть распределена Индексатором, но не может быть сокращена. -- Распределенный стейк — доля, которую Индексаторы активно распределяют между индексируемыми субграфами. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Доступная емкость делегирования — объем делегированной доли, которую Индексаторы всё ещё могут получить, прежде чем они будут перераспределены. - Максимальная емкость делегирования — максимальная сумма делегированной доли, которую Индексатор может продуктивно принять. Избыточная делегированная ставка не может быть использована для распределения или расчета вознаграждений. - Плата за запросы — это общая сумма комиссий, которую конечные пользователи заплатили за запросы Индексатора за все время. @@ -84,16 +84,16 @@ This page provides a bird's-eye view of all "participants," which includes every - Параметры индексирования можно задать, щелкнув мышью в правой части таблицы или перейдя в профиль Индексатора и нажав кнопку «Делегировать». 
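The relationship between the capacity figures in this table can be sketched in a few lines of TypeScript. This is a simplified illustration only: it assumes the network's 16x delegation ratio, which is a governance parameter and may change.

```typescript
// Sketch: how an Indexer's delegation-capacity figures relate to each other.
// DELEGATION_RATIO is an assumption (a protocol governance parameter).
const DELEGATION_RATIO = 16;

// Maximum delegated stake an Indexer can productively accept.
function maxDelegationCapacity(ownStake: number, ratio: number = DELEGATION_RATIO): number {
  return ownStake * ratio;
}

// Delegated stake the Indexer can still accept before additional
// delegation can no longer be used for allocations or rewards.
function availableDelegationCapacity(
  ownStake: number,
  delegated: number,
  ratio: number = DELEGATION_RATIO
): number {
  return Math.max(0, maxDelegationCapacity(ownStake, ratio) - delegated);
}

// An Indexer self-staking 100,000 GRT with 500,000 GRT already delegated:
console.log(maxDelegationCapacity(100_000)); // 1600000
console.log(availableDelegationCapacity(100_000, 500_000)); // 1100000
```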
-To learn more about how to become an Indexer, you can take a look at the [official documentation](/indexing/overview/) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) +Чтобы узнать больше о том, как стать Индексатором, Вы можете ознакомиться с [официальной документацией](/indexing/overview/) или [руководствами для Индексаторов Академии The Graph.](https://thegraph.academy/delegators/choosing-indexers/)

-![Indexing details pane](/img/Indexing-Details-Pane.png) +![Панель сведений об индексировании](/img/Indexing-Details-Pane.png)

#### 2. Кураторы

-Кураторы анализируют субграфы, чтобы определить, какие из них имеют наивысшее качество. Найдя потенциально привлекательный субграф, Куратор может курировать его, отправляя сигнал на его кривую связывания. Таким образом, Кураторы сообщают Индексаторам, какие субграфы имеют высокое качество и должны быть проиндексированы. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Кураторами могут быть члены сообщества, потребители данных или даже разработчики субграфов, которые сигнализируют о своих собственных субграфах, внося токены GRT в кривую связывания. - - Внося GRT, Кураторы чеканят кураторские акции субграфа. В результате они могут заработать часть комиссий за запросы, сгенерированных субграфом, на который они подали сигнал. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. 
- Кривая связывания стимулирует Кураторов отбирать источники данных самого высокого качества. В приведенной ниже таблице «Куратор» вы можете увидеть:

@@ -102,9 +102,9 @@ To learn more about how to become an Indexer, you can take a look at the [offici - Количество GRT, которое было внесено - Количество акций, которыми владеет Куратор

-![Explorer Image 6](/img/Curation-Overview.png) +![Изображение Explorer 6](/img/Curation-Overview.png)

-If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/resources/roles/curating/) or [The Graph Academy](https://thegraph.academy/curators/). +Если Вы хотите узнать больше о роли Куратора, Вы можете сделать это, ознакомившись с [официальной документацией](/resources/roles/curating/) или посетив [Академию The Graph](https://thegraph.academy/curators/).

#### 3. Делегаторы

@@ -112,14 +112,14 @@ If you want to learn more about the Curator role, you can do so by visiting [off - Без Делегаторов Индексаторы с меньшей долей вероятности получат значительные вознаграждения и сборы. Таким образом, Индексаторы привлекают Делегаторов, предлагая им часть вознаграждения за индексацию и комиссию за запросы. - Делегаторы выбирают Индексаторов на основе ряда различных переменных, таких как прошлые результаты, ставки вознаграждения за индексирование и снижение платы за запросы. -- Reputation within the community can also play a factor in the selection process. It's recommended to connect with the selected Indexers via [The Graph's Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/). +- Репутация в сообществе может также повлиять на выбор. Вы можете связаться с Индексаторами через [Дискорд The Graph](https://discord.gg/graphprotocol) или [Форум The Graph](https://forum.thegraph.com/). 
-![Explorer Image 7](/img/Delegation-Overview.png) +![Изображение Explorer 7](/img/Delegation-Overview.png) В таблице «Делегаторы» Вы можете увидеть активных в сообществе Делегаторов и важные показатели: - Количество Индексаторов, к которым делегирует Делегатор -- A Delegator's original delegation +- Первоначальная делегация Делегатора - Накопленные ими вознаграждения, которые они не вывели из протокола - Реализованные вознаграждения, которые они сняли с протокола - Общее количество GRT, которое у них имеется в настоящее время в протоколе @@ -127,9 +127,9 @@ If you want to learn more about the Curator role, you can do so by visiting [off If you want to learn more about how to become a Delegator, check out the [official documentation](/resources/roles/delegating/delegating/) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). -### Network Page +### Страница сети -On this page, you can see global KPIs and have the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +На этой странице Вы можете увидеть глобальные ключевые показатели эффективности и получить возможность переключения на поэпохальную основу и более детально проанализировать сетевые метрики. Эти данные дадут Вам представление о том, как работает сеть на протяжении определённого времени. #### Обзор @@ -144,10 +144,10 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep Несколько важных деталей, на которые следует обратить внимание: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. 
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). -![Explorer Image 8](/img/Network-Stats.png) +![Изображение Explorer 8](/img/Network-Stats.png) #### Эпохи @@ -161,7 +161,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Эпохи распределения - это эпохи, в которых состояния каналов для эпох регулируются, и Индексаторы могут требовать скидки на комиссию за запросы. - Завершенные эпохи — это эпохи, в которых Индексаторы больше не могут заявить возврат комиссии за запросы. -![Explorer Image 9](/img/Epoch-Stats.png) +![Изображение Explorer 9](/img/Epoch-Stats.png) ## Ваш профиль пользователя @@ -174,19 +174,19 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Любое из текущих действий, которые Вы совершили. - Данные своего профиля, описание и веб-сайт (если Вы его добавили). 
-![Explorer Image 10](/img/Profile-Overview.png) +![Изображение Explorer 10](/img/Profile-Overview.png) ### Вкладка "Субграфы" -На вкладке «Субграфы» Вы увидите опубликованные вами субграфы. +In the Subgraphs tab, you’ll see your published Subgraphs. -> Сюда не будут включены субграфы, развернутые с помощью CLI в целях тестирования. Субграфы будут отображаться только после публикации в децентрализованной сети. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. -![Explorer Image 11](/img/Subgraphs-Overview.png) +![Изображение Explorer 11](/img/Subgraphs-Overview.png) ### Вкладка "Индексирование" -На вкладке «Индексирование» Вы найдете таблицу со всеми активными и прежними распределениями по субграфам. Вы также найдете диаграммы, на которых сможете увидеть и проанализировать свои прошлые результаты в качестве Индексатора. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Этот раздел также будет содержать подробную информацию о Ваших чистых вознаграждениях Индексатора и чистой комиссии за запросы. 
Вы увидите следующие показатели: @@ -197,7 +197,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Сокращение вознаграждений — процент вознаграждений Индексатора, который Вы сохраните при разделении с Делегаторами - Собственность — Ваша внесенная ставка, которая может быть уменьшена за злонамеренное или неправильное поведение -![Explorer Image 12](/img/Indexer-Stats.png) +![Изображение Explorer 12](/img/Indexer-Stats.png) ### Вкладка "Делегирование" @@ -219,20 +219,20 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep Имейте в виду, что эта диаграмма прокручивается по горизонтали, поэтому, если Вы прокрутите ее до конца вправо, Вы также сможете увидеть статус своего делегирования (делегирование, отмена делегирования, возможность отзыва). -![Explorer Image 13](/img/Delegation-Stats.png) +![Изображение Explorer 13](/img/Delegation-Stats.png) ### Вкладка "Курирование" -На вкладке «Курирование» Вы найдете все субграфы, на которые Вы подаете сигналы (что позволит Вам получать комиссию за запросы). Сигнализация позволяет Кураторам указывать Индексаторам, какие субграфы являются ценными и заслуживающими доверия, тем самым сигнализируя о том, что их необходимо проиндексировать. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. 
На данной вкладке Вы найдете обзор: -- Всех субграфов, которые Вы курируете, с подробной информацией о сигнале -- Общего количества акций на субграф -- Вознаграждений за запрос за субграф +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Даты обновления данных -![Explorer Image 14](/img/Curation-Stats.png) +![Изображение Explorer 14](/img/Curation-Stats.png) ### Параметры Вашего профиля @@ -241,7 +241,7 @@ On this page, you can see global KPIs and have the ability to switch to a per-ep - Операторы выполняют ограниченные действия в протоколе от имени Индексатора, такие как открытие и закрытие распределения. Операторами обычно являются другие адреса Ethereum, отдельные от их кошелька для ставок, с ограниченным доступом к сети, который Индексаторы могут настроить лично - Параметры делегирования позволяют Вам контролировать распределение GRT между Вами и Вашими Делегаторами. -![Explorer Image 15](/img/Profile-Settings.png) +![Изображение Explorer 15](/img/Profile-Settings.png) Являясь Вашим официальным порталом в мир децентрализованных данных, Graph Explorer позволяет Вам выполнять самые разные действия, независимо от Вашей роли в сети. Вы можете перейти к настройкам своего профиля, открыв выпадающее меню рядом со своим адресом и нажав кнопку «Настройки». diff --git a/website/src/pages/ru/subgraphs/guides/arweave.mdx b/website/src/pages/ru/subgraphs/guides/arweave.mdx new file mode 100644 index 000000000000..800f22842ffe --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/arweave.mdx @@ -0,0 +1,239 @@ +--- +title: Создание Субграфов на Arweave +--- + +> Поддержка Arweave в Graph Node и Subgraph Studio находится на стадии бета-тестирования: пожалуйста, обращайтесь к нам в [Discord](https://discord.gg/graphprotocol) с любыми вопросами о создании субграфов Arweave! + +Из этого руководства Вы узнаете, как создавать и развертывать субграфы для индексации блокчейна Arweave. + +## Что такое Arweave? 
+ +Протокол Arweave позволяет разработчикам хранить данные на постоянной основе, и в этом основное различие между Arweave и IPFS, поскольку в IPFS отсутствует функция постоянства, а файлы, хранящиеся в Arweave, не могут быть изменены или удалены. + +Arweave уже создала множество библиотек для интеграции протокола на нескольких различных языках программирования. С дополнительной информацией Вы можете ознакомиться: + +- [Arwiki](https://arwiki.wiki/#/en/main) +- [Ресурсы Arweave](https://www.arweave.org/build) + +## Что такое субграфы Arweave? + +The Graph позволяет создавать собственные открытые API, называемые "Субграфами". Субграфы используются для указания индексаторам (операторам серверов), какие данные индексировать на блокчейне и сохранять на их серверах, чтобы Вы могли запрашивать эти данные в любое время используя [GraphQL](https://graphql.org/). + +[Graph Node](https://github.com/graphprotocol/graph-node) теперь может индексировать данные на протоколе Arweave. Текущая интеграция индексирует только Arweave как блокчейн (блоки и транзакции), она еще не индексирует сохраненные файлы. + +## Построение Субграфа на Arweave + +Чтобы иметь возможность создавать и развертывать Субграфы на Arweave, Вам понадобятся два пакета: + +1. `@graphprotocol/graph-cli` версии выше 0.30.2 — это инструмент командной строки для создания и развертывания субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-cli), чтобы скачать с помощью `npm`. +2. `@graphprotocol/graph-ts` версии выше 0.27.0 — это библиотека типов, специфичных для субграфов. [Нажмите здесь](https://www.npmjs.com/package/@graphprotocol/graph-ts), чтобы скачать с помощью `npm`. + +## Составляющие Субграфов + +Существует три компонента субграфа: + +### 1. Манифест - `subgraph.yaml` + +Определяет источники данных, представляющие интерес, и то, как они должны обрабатываться. Arweave - это новый вид источника данных. + +### 2. 
Схема - `schema.graphql` + +Здесь Вы определяете, какие данные хотите иметь возможность запрашивать после индексации своего субграфа с помощью GraphQL. На самом деле это похоже на модель для API, где модель определяет структуру тела запроса. + +Требования для субграфов Arweave описаны в [существующей документации](/developing/creating-a-subgraph/#the-graphql-schema). + +### 3. Мэппинги на AssemblyScript - `mapping.ts` + +Это логика, которая определяет, как данные должны извлекаться и храниться, когда кто-то взаимодействует с источниками данных, которые Вы отслеживаете. Данные переводятся и сохраняются в соответствии с указанной Вами схемой. + +Во время разработки субграфа есть две ключевые команды: + +``` +$ graph codegen # генерирует типы из файла схемы, указанного в манифесте +$ graph build # генерирует Web Assembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build +``` + +## Определение манифеста субграфа + +Манифест субграфа `subgraph.yaml` идентифицирует источники данных для субграфа, триггеры, представляющие интерес, и функции, которые должны быть выполнены в ответ на эти триггеры. 
Ниже приведен пример манифеста субграфа для Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # ссылка на файл схемы +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph поддерживает только Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # Открытый ключ кошелька Arweave + startBlock: 0 # установите это значение на 0, чтобы начать индексацию с генезиса чейна + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # ссылка на файл с мэппингами Assemblyscript + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # имя функции в файле мэппинга + transactionHandlers: + - handler: handleTx # имя функции в файле мэппинга +``` + +- Arweave Subgraphs вводят новый тип источника данных (`arweave`) +- Сеть должна соответствовать сети на размещенной Graph Node. В Subgraph Studio мейннет Arweave обозначается как `arweave-mainnet` +- Источники данных Arweave содержат необязательное поле source.owner, которое является открытым ключом кошелька Arweave + +Источники данных Arweave поддерживают два типа обработчиков: + +- `blockHandlers` — выполняется при каждом новом блоке Arweave. source.owner не требуется. +- `transactionHandlers` — выполняется при каждой транзакции, где `source.owner` является владельцем источника данных. На данный момент для `transactionHandlers` требуется указать владельца. Если пользователи хотят обрабатывать все транзакции, они должны указать `""` в качестве `source.owner` + +> Source.owner может быть адресом владельца или его Публичным ключом. +> +> Транзакции являются строительными блоками Arweave permaweb, и они представляют собой объекты, созданные конечными пользователями. +> +> Примечание: транзакции [Irys (ранее Bundlr)](https://irys.xyz/) пока не поддерживаются. 
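Логику выбора транзакций по `source.owner`, описанную выше, можно представить упрощенно. Это гипотетический набросок на обычном TypeScript, а не реализация Graph Node: имена `DataSourceConfig`, `ArweaveTx` и `shouldHandleTx` условные.

```typescript
// Гипотетическая модель: когда transactionHandler сработает для транзакции.
// Поле owner соответствует source.owner из манифеста; пустая строка означает
// «обрабатывать все транзакции».
interface DataSourceConfig {
  owner: string
}

interface ArweaveTx {
  owner: string // владелец (автор) транзакции
}

function shouldHandleTx(source: DataSourceConfig, tx: ArweaveTx): boolean {
  return source.owner === '' || source.owner === tx.owner
}

console.log(shouldHandleTx({ owner: '' }, { owner: 'some-wallet' })) // true: обрабатываются все
console.log(shouldHandleTx({ owner: 'ID-OF-AN-OWNER' }, { owner: 'some-wallet' })) // false
```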
+
+## Определение схемы
+
+Определение схемы описывает структуру итоговой базы данных субграфа и отношения между объектами. Это не зависит от исходного источника данных. Более подробную информацию об определении схемы субграфа можно найти [здесь](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## Мэппинги AssemblyScript
+
+Обработчики событий написаны на [AssemblyScript](https://www.assemblyscript.org/).
+
+Индексирование Arweave вводит специфичные для Arweave типы данных в [API AssemblyScript](https://thegraph.com/docs/using-graph-ts).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Обработчики блоков получают `Block`, в то время как обработчики транзакций получают `Transaction`.
+
+Написание мэппингов для субграфа Arweave очень похоже на написание мэппингов для субграфа Ethereum. Для получения дополнительной информации нажмите [сюда](/developing/creating-a-subgraph/#writing-mappings).
+
+## Развертывание субграфа Arweave в Subgraph Studio
+
+После того как Ваш субграф создан на панели управления Subgraph Studio, Вы можете развернуть его с помощью команды CLI `graph deploy`.
+
+```bash
+graph deploy --access-token
+```
+
+## Запрос субграфа Arweave
+
+Конечная точка GraphQL для субграфов Arweave определяется определением схемы и использует существующий интерфейс API.
Пожалуйста, посетите [документацию по GraphQL API](/subgraphs/querying/graphql-api/) для получения дополнительной информации.
+
+## Примеры субграфов
+
+Вот пример субграфа для справки:
+
+- [Пример субграфа для Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## Часто задаваемые вопросы
+
+### Может ли субграф индексировать данные с Arweave и других чейнов?
+
+Нет, субграф может поддерживать источники данных только из одного чейна/сети.
+
+### Могу ли я проиндексировать сохраненные файлы в Arweave?
+
+В настоящее время The Graph индексирует Arweave только как блокчейн (его блоки и транзакции).
+
+### Могу ли я идентифицировать Bundlr-бандлы в своём субграфе?
+
+В настоящее время это не поддерживается.
+
+### Как я могу отфильтровать транзакции по определенному аккаунту?
+
+Source.owner может быть открытым ключом пользователя или адресом учетной записи.
+
+### Каков текущий формат шифрования?
+
+Данные обычно передаются в мэппингах в виде байтов, которые, если сохраняются напрямую, возвращаются в субграфе в формате `hex` (например, хэши блоков и транзакций). Возможно, Вам захочется преобразовать их в формат `base64` или URL-безопасный `base64` в Ваших мэппингах, чтобы привести их в соответствие с тем, как они отображаются в эксплорерах блоков, таких как [Arweave Explorer](https://viewblock.io/arweave/).
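Разницу между этими форматами можно показать на обычном TypeScript (Node.js). Это иллюстрация вне graph-ts, использующая стандартный `Buffer`:

```typescript
// Одни и те же байты в формате hex и в URL-безопасном base64
const bytes = Uint8Array.from([0xde, 0xad, 0xbe, 0xef])

const hex = Buffer.from(bytes).toString('hex')          // так значение выглядит в субграфе
const b64url = Buffer.from(bytes).toString('base64url') // так его показывают эксплореры блоков

console.log(hex)    // 'deadbeef'
console.log(b64url) // '3q2-7w'
```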
+
+Следующая вспомогательная функция `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` может быть использована и будет добавлена в `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write; символы нужны и для URL-безопасного варианта, опускается только '='
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..ba2416901e38 --- /dev/null +++ 
b/website/src/pages/ru/subgraphs/guides/contract-analyzer.mdx @@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Обзор
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Предварительные требования
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have already been added, run:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2. 
Basic Contract Analysis + +Run the following command to analyze a contract: + +```bash +cana analyze 0xContractAddress +``` + +или + +```bash +cana -a 0xContractAddress +``` + +This command fetches and displays essential contract information in the terminal using a clear, organized format. + +#### 3. Understanding the Output + +Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved: + +``` +contracts-analyzed/ +└── ContractName_chainName_YYYY-MM-DD/ + ├── contract/ # Folder for individual contract files + ├── abi.json # Contract ABI + └── event-information.json # Event signatures and examples +``` + +This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development. + +#### 4. Chain Management + +Add and manage chains: + +```bash +cana setup # Add a new chain +cana chains # List configured chains +cana chains -s # Switch chains +``` + +### Troubleshooting + +Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions. + +### Заключение + +With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease. diff --git a/website/src/pages/ru/subgraphs/guides/enums.mdx b/website/src/pages/ru/subgraphs/guides/enums.mdx new file mode 100644 index 000000000000..6109eaaae73f --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Категоризация маркетплейсов NFT с использованием Enums (перечислений) +--- + +Используйте Enums (перечисления), чтобы сделать Ваш код чище и уменьшить вероятность ошибок. Вот полный пример использования перечислений для маркетплейсов NFT. + +## Что такое Enums (перечисления)? + +Перечисления (или типы перечислений) — это особый тип данных, который позволяет определить набор конкретных допустимых значений. 
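Ту же идею можно показать на обычном TypeScript. Это гипотетический набросок (в схеме субграфа используются перечисления GraphQL, а не TypeScript); имена `TokenStatus` и `describe` условные:

```typescript
// Набросок: перечисление допускает только заранее определённые значения
enum TokenStatus {
  OriginalOwner,
  SecondOwner,
  ThirdOwner,
}

function describe(status: TokenStatus): string {
  // Обратное отображение числового значения перечисления в его имя
  return TokenStatus[status]
}

console.log(describe(TokenStatus.OriginalOwner)) // 'OriginalOwner'
// describe('SomeTypo') не скомпилируется: опечатки отлавливаются на этапе проверки типов
```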
+
+### Пример использования Enums (перечислений) в Вашей схеме
+
+Если Вы создаете субграф для отслеживания истории владения токенами на маркетплейсе, каждый токен может проходить через разные этапы владения, такие как `OriginalOwner`, `SecondOwner` и `ThirdOwner`. Используя перечисления (enums), Вы можете определить эти конкретные этапы владения, гарантируя, что будут присваиваться только заранее определенные значения.
+
+Вы можете определить перечисления (enums) в своей схеме, и после их определения Вы можете использовать строковое представление значений перечислений для установки значения поля перечисления в объектах.
+
+Вот как может выглядеть определение перечисления (enum) в Вашей схеме, исходя из приведенного выше примера:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+Это означает, что когда Вы используете тип `TokenStatus` в своей схеме, Вы ожидаете, что он будет иметь одно из заранее определенных значений: `OriginalOwner` (Первоначальный Владелец), `SecondOwner` (Второй Владелец) или `ThirdOwner` (Третий Владелец), что обеспечивает согласованность и корректность данных.
+
+Чтобы узнать больше о перечислениях (Enums), ознакомьтесь с разделом [Создание субграфа](/developing/creating-a-subgraph/#enums) и с [документацией GraphQL](https://graphql.org/learn/schema/#enumeration-types).
+
+## Преимущества использования перечислений (Enums)
+
+- **Ясность:** Перечисления предоставляют значимые имена для значений, что делает данные более понятными.
+- **Валидация:** Перечисления обеспечивают строгие определения значений, предотвращая ввод недопустимых данных.
+- **Поддерживаемость:** Когда Вам нужно изменить или добавить новые категории, перечисления позволяют сделать это целенаправленно и удобно.
+
+### Без перечислений (Enums)
+
+Если Вы решите определить тип как строку вместо использования перечисления (Enum), Ваш код может выглядеть следующим образом:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+ owner: Bytes! # Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +В этой схеме `TokenStatus` является простой строкой без конкретных и допустимых значений. + +#### Почему это является проблемой? + +- Нет никаких ограничений на значения `TokenStatus`, поэтому любое строковое значение может быть назначено случайно. Это усложняет обеспечение того, что устанавливаются только допустимые статусы, такие как `OriginalOwner` (Первоначальный Владелец), `SecondOwner` (Второй Владелец) или `ThirdOwner` (Третий Владелец). +- Легко допустить опечатку, например, `Orgnalowner` вместо `OriginalOwner`, что делает данные и потенциальные запросы ненадежными. + +### С перечислениями (Enums) + +Вместо присвоения строк произвольной формы Вы можете определить перечисление (Enum) для `TokenStatus` с конкретными значениями: `OriginalOwner`, `SecondOwner` или `ThirdOwner`. Использование перечисления гарантирует, что используются только допустимые значения. + +Перечисления обеспечивают безопасность типов, минимизируют риск опечаток и гарантируют согласованные и надежные результаты. + +## Определение перечислений (Enums) для Маркетплейсов NFT + +> Примечание: Следующее руководство использует смарт-контракт NFT CryptoCoven. 
+
+Чтобы определить перечисления (enums) для различных маркетплейсов, где торгуются NFT, используйте следующее в Вашей схеме субграфа:
+
+```gql
+# Перечисление для маркетплейсов, с которыми взаимодействовал смарт-контракт CryptoCoven (вероятно, торговля или минт)
+enum Marketplace {
+  OpenSeaV1 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV1
+  OpenSeaV2 # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе OpenSeaV2
+  SeaPort # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе SeaPort
+  LooksRare # Представляет случай, когда NFT CryptoCoven торгуется на маркетплейсе LooksRare
+  # ...и другие рынки
+}
+```
+
+## Использование перечислений (Enums) для Маркетплейсов NFT
+
+После того как перечисления (enums) определены, их можно использовать по всему Вашему субграфу для категоризации транзакций или событий.
+
+Например, при регистрации продаж NFT можно указать маркетплейс, на котором произошла сделка, используя перечисление.
+
+### Реализация функции для маркетплейсов NFT
+
+Вот как можно реализовать функцию для получения названия маркетплейса из перечисления (enum) в виде строки:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Используем операторы if-else для сопоставления значения перечисления со строкой
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // Если маркетплейс OpenSeaV1, возвращаем его строковое представление
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // Если маркетплейс SeaPort, возвращаем его строковое представление
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // Если маркетплейс LooksRare, возвращаем его строковое представление
+    // ... и другие маркетплейсы
+  } else {
+    return 'Unknown' // Резервное значение: функция должна возвращать строку на всех путях
+  }
+}
+```
+
+## Лучшие практики использования перечислений (Enums)
+
+- **Согласованность в наименованиях:** Используйте четкие, описательные названия для значений перечислений, чтобы улучшить читаемость кода.
+- **Централизованное управление:** Храните перечисления в одном файле для обеспечения согласованности. Это облегчает обновление перечислений и гарантирует, что они являются единственным источником достоверной информации.
+- **Документация:** Добавляйте комментарии к перечислениям, чтобы прояснить их назначение и использование.
+
+## Использование перечислений (Enums) в запросах
+
+Перечисления в запросах помогают улучшить качество данных и делают результаты более понятными. Они функционируют как фильтры и элементы ответа, обеспечивая согласованность и уменьшая ошибки в значениях маркетплейса.
+
+**Особенности**
+
+- **Фильтрация с помощью перечислений:** Перечисления предоставляют четкие фильтры, позволяя уверенно включать или исключать конкретные маркетплейсы.
+- **Перечисления в ответах:** Перечисления гарантируют, что возвращаются только признанные названия маркетплейсов, делая результаты стандартизированными и точными.
+
+### Примеры запросов
+
+#### Запрос 1: Аккаунт с наибольшим количеством взаимодействий на маркетплейсах NFT
+
+Этот запрос выполняет следующие действия:
+
+- Он находит аккаунт с наибольшим количеством уникальных взаимодействий с маркетплейсами NFT, что полезно для анализа активности на разных маркетплейсах.
+- Поле `marketplaces` использует перечисление `Marketplace`, что обеспечивает согласованность и валидацию значений маркетплейсов в ответе.
+ +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # Это поле возвращает значение перечисления, представляющее маркетплейс + } + } +} +``` + +#### Результаты + +Данный ответ включает информацию об аккаунте и перечень уникальных взаимодействий с маркетплейсом, где используются значения перечислений (enum) для обеспечения единообразной ясности: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Запрос 2: Наиболее активный маркетплейс для транзакций CryptoCoven + +Этот запрос выполняет следующие действия: + +- Он определяет маркетплейс с наибольшим объемом транзакций CryptoCoven. +- Он использует перечисление marketplace, чтобы гарантировать, что в ответе будут только допустимые типы маркетплейсов, что повышает надежность и согласованность ваших данных. 
+
+```gql
+{
+  marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Результат 2
+
+Ожидаемый ответ включает маркетплейс и соответствующее количество транзакций, используя перечисление для указания типа маркетплейса:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "Unknown",
+        "transactionCount": "222"
+      }
+    ]
+  }
+}
+```
+
+#### Запрос 3: Взаимодействия на маркетплейсе с высоким количеством транзакций
+
+Этот запрос выполняет следующие действия:
+
+- Он извлекает четыре самых активных маркетплейса с более чем 100 транзакциями, исключая маркетплейсы с типом "Unknown".
+- Он использует перечисления в качестве фильтров, чтобы гарантировать, что включены только допустимые типы маркетплейсов, что повышает точность выполнения запроса.
+
+```gql
+{
+  marketplaceInteractions(
+    first: 4
+    orderBy: transactionCount
+    orderDirection: desc
+    where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
+  ) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Результат 3
+
+Ожидаемый вывод включает маркетплейсы, которые соответствуют критериям, каждый из которых представлен значением перечисления:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "NFTX",
+        "transactionCount": "201"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "148"
+      },
+      {
+        "marketplace": "CryptoCoven",
+        "transactionCount": "117"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "111"
+      }
+    ]
+  }
+}
+```
+
+## Дополнительные ресурсы
+
+Дополнительную информацию можно найти в [репозитории этого руководства](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/ru/subgraphs/guides/grafting.mdx b/website/src/pages/ru/subgraphs/guides/grafting.mdx new file mode 100644 index 000000000000..6d718b0fa64c --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/grafting.mdx @@ -0,0 +1,202 @@
+---
+title: Замените контракт и сохраните его историю с помощью Grafting
+---
+
+В этом руководстве Вы научитесь создавать и развертывать новые субграфы, используя существующие субграфы.
+
+## Что такое Grafting?
+
+Графтинг позволяет повторно использовать данные из существующего субграфа и начать индексирование с более позднего блока. Это полезно в процессе разработки, чтобы быстро обходить простые ошибки в мэппингах или временно восстанавливать работу существующего субграфа после его сбоя. Также это может пригодиться при добавлении новой функции в субграф, которая требует долгого времени на индексирование с нуля.
+
+Субграф с графтингом может использовать схему GraphQL, которая не идентична схеме базового субграфа, а просто совместима с ней. Это должна быть самостоятельная корректная схема субграфа, но она может отличаться от схемы базового субграфа следующим образом:
+
+- Она добавляет или удаляет типы объектов
+- Она удаляет атрибуты из типов объектов
+- Она добавляет обнуляемые атрибуты к типам объектов
+- Она превращает необнуляемые атрибуты в обнуляемые
+- Она добавляет значения в перечисления
+- Она добавляет или удаляет интерфейсы
+- Она меняет то, для каких типов объектов реализован интерфейс
+
+Для получения дополнительной информации Вы можете перейти:
+
+- [Графтинг](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+В этом руководстве мы рассмотрим базовый случай. Мы заменим существующий контракт на идентичный контракт (с новым адресом, но с тем же кодом). Затем, с помощью графтинга, мы подключим существующий субграф к "базовому" субграфу, который отслеживает новый контракт.
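Например, совместимое, но не идентичное изменение из списка выше (добавление обнуляемого атрибута) может выглядеть так. Это гипотетический набросок: поле `memo` условное и в руководстве дальше не используется:

```graphql
# Схема субграфа с графтингом: тип из базового субграфа,
# к которому добавлен обнуляемый атрибут (совместимое изменение)
type Withdrawal @entity {
  id: ID!
  amount: BigInt!
  memo: String # новый обнуляемый атрибут; в схеме базового субграфа его не было
}
```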
+ +## Важное примечание о Grafting при обновлении до сети + +> **Предупреждение**: рекомендуется не использовать графтинг для субграфов, опубликованных в сети The Graph + +### Почему это важно? + +Графтинг — это мощная функция, которая позволяет «приращивать» один субграф к другому, эффективно передавая исторические данные из существующего субграфа в новую версию. Невозможно выполнить графтинг субграфа из сети The Graph обратно в Subgraph Studio. + +### Лучшие практики + +**Первоначальная миграция**: при первом развертывании вашего субграфа в децентрализованной сети, делайте это без графтинга. Убедитесь, что субграф стабилен и работает так, как ожидается. + +**Последующие обновления**: после того, как ваш субграф станет активным и стабильным в децентрализованной сети, вы можете использовать графтинг для будущих версий, чтобы сделать переход более плавным и сохранить исторические данные. + +Соблюдая эти рекомендации, Вы минимизируете риски и обеспечите более плавный процесс миграции. + +## Создание существующего субграфа + +Создание субграфов — важная часть работы с The Graph, и об этом рассказывается более подробно [здесь](/subgraphs/quick-start/). Чтобы иметь возможность создавать и развертывать существующий субграф, используемый в этом руководстве, был предоставлен следующий репозиторий: + +- [Пример репозитория субграфа](https://github.com/Shiyasmohd/grafting-tutorial) + +> Примечание: контракт, использованный в субграфе, был взят из следующего [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Определение манифеста субграфа + +Манифест субграфа `subgraph.yaml` определяет источники данных для субграфа, интересующие триггеры и функции, которые должны быть выполнены в ответ на эти триггеры. 
Ниже приведён пример манифеста субграфа, который вы будете использовать: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- Источник данных `Lock` — это ABI и адрес контракта, которые мы получим при компиляции и развертывании контракта +- Сеть должна соответствовать индексируемой сети, к которой выполняется запрос. Поскольку мы работаем в тестнете Sepolia, сеть будет `sepolia`. +- Раздел `mapping` определяет триггеры, которые представляют интерес, и функции, которые должны быть выполнены в ответ на эти триггеры. В данном случае мы слушаем событие `Withdrawal` и вызываем функцию `handleWithdrawal`, когда оно срабатывает. + +## Определение Манифеста Grafting + +Для использования функции графтинга необходимо добавить два новых элемента в исходный манифест субграфа: + +```yaml +--- +features: + - grafting # название функции +graft: + base: Qm... # идентификатор базового субграфа + block: 5956000 # номер блока +``` + +- `features:` — это список всех используемых [имен функций](/developing/creating-a-subgraph/#experimental-features). +- `graft:` — это карта, содержащая базовый субграф (`base`) и номер блока (`block`), на который будет выполняться графтинг. Значение `block` указывает, с какого блока начинать индексирование. The Graph скопирует данные базового субграфа вплоть до указанного блока (включительно), а затем продолжит индексировать новый субграф, начиная с этого блока. 
+
+Значения `base` и `block` можно получить, развернув два субграфа: один для базового индексирования, а другой с графтингом.
+
+## Развертывание базового субграфа
+
+1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестовой сети Sepolia с названием `graft-example`
+2. Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице Вашего субграфа в папке `graft-example` из репозитория
+3. После завершения убедитесь, что субграф правильно индексируется. Для этого запустите следующую команду в The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Отклик будет подобным этому:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Как только Вы убедитесь, что субграф индексируется корректно, Вы можете быстро обновить его с помощью графтинга.
+
+## Развертывание субграфа с графтингом
+
+`subgraph.yaml` субграфа с графтингом будет содержать новый адрес контракта. Это может произойти, когда Вы обновите свое децентрализованное приложение, повторно развернете контракт и т. д.
+
+1. Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и создайте субграф в тестовой сети Sepolia с названием `graft-replacement`
+2. Создайте новый манифест. `subgraph.yaml` для `graft-replacement` содержит другой адрес контракта и новую информацию о том, как следует выполнить графтинг. Это `block` последнего [события, сгенерированного](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) старым контрактом, и `base` старого субграфа. Идентификатор субграфа `base` — это `Deployment ID` Вашего оригинального субграфа `graft-example`. Вы можете найти его в Subgraph Studio.
+3. 
Следуйте инструкциям в разделе `AUTH & DEPLOY` на странице Вашего субграфа в папке `graft-replacement` из репозитория
+4. После завершения убедитесь, что субграф правильно индексируется. Для этого запустите следующую команду в The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+Это должно привести к следующему результату:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+Вы можете увидеть, что субграф `graft-replacement` индексирует данные из старого субграфа `graft-example` и новые данные с нового адреса контракта. Оригинальный контракт сгенерировал два события `Withdrawal`: [Событие 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) и [Событие 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Новый контракт сгенерировал одно событие `Withdrawal`, [Событие 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Две ранее проиндексированные транзакции (События 1 и 2) и новая транзакция (Событие 3) были объединены в субграфе `graft-replacement`.
+
+Поздравляем! Вы успешно перенесли один субграф в другой.
+
+## Дополнительные ресурсы
+
+Если Вы хотите получить больше опыта в графтинге (переносе), вот несколько примеров популярных контрактов:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+Чтобы стать еще большим экспертом в области The Graph, рассмотрите возможность изучения других способов обработки изменений в исходных данных. Альтернативы, такие как [Шаблоны источников данных](/developing/creating-a-subgraph/#data-source-templates), могут привести к аналогичным результатам.
+
+> Примечание: Многие материалы из этой статьи были взяты из ранее опубликованной статьи об [Arweave](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/ru/subgraphs/guides/near.mdx b/website/src/pages/ru/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..83663ce76e77
--- /dev/null
+++ b/website/src/pages/ru/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Создание субграфов на NEAR
+---
+
+Это руководство является введением в создание субграфов для индексирования смарт-контрактов на блокчейне [NEAR](https://docs.near.org/).
+
+## Что такое NEAR?
+
+[NEAR](https://near.org/) — это платформа для смарт-контрактов, предназначенная для создания децентрализованных приложений. Для получения дополнительной информации ознакомьтесь с [официальной документацией](https://docs.near.org/concepts/basics/protocol).
+
+## Что такое субграфы NEAR?
+
+The Graph предоставляет разработчикам инструменты для обработки событий блокчейна и предоставления полученных данных через API GraphQL, который называется субграфом.
[Graph Node](https://github.com/graphprotocol/graph-node) теперь может обрабатывать события NEAR, что означает, что разработчики на платформе NEAR могут создавать субграфы для индексирования своих смарт-контрактов. + +Субграфы основаны на событиях, что означает, что они слушают и затем обрабатывают события с блокчейна. В настоящее время для субграфов NEAR поддерживаются два типа обработчиков: + +- Обработчики блоков: они запускаются для каждого нового блока +- Обработчики поступлений: запускаются каждый раз, когда сообщение выполняется в указанной учетной записи + +[Из документации NEAR](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> Поступление - это единственный объект, к которому можно применить действие в системе. Когда мы говорим об "обработке транзакции" на платформе NEAR, это в конечном итоге означает "применение поступлений" в какой-то момент. + +## Создание NEAR субграфа + +`@graphprotocol/graph-cli` — это инструмент командной строки для создания и развертывания субграфов. + +`@graphprotocol/graph-ts` — это библиотека типов, специфичных для субграфов. + +Для разработки субграфа NEAR требуется версия `graph-cli` выше `0.23.0` и версия `graph-ts` выше `0.23.0`. + +> Создание субграфа NEAR очень похоже на создание субграфа, индексирующего Ethereum. + +Существует три аспекта определения субграфов: + +**subgraph.yaml**: манифест субграфа, который определяет интересующие источники данных и то, как они должны обрабатываться. NEAR является новым `kind` (типом) источника данных. + +**schema.graphql**: файл схемы, который определяет, какие данные хранятся для вашего субграфа, и как их можно запрашивать через GraphQL. Требования для субграфов NEAR описаны в [существующей документации](/developing/creating-a-subgraph/#the-graphql-schema). + +**Мэппинги на AssemblyScript:** [код на AssemblyScript](/subgraphs/developing/creating/graph-ts/api/), который преобразует данные событий в элементы, определенные в Вашей схеме. 
Поддержка NEAR вводит специфичные для NEAR типы данных и новую функциональность для парсинга JSON.
+
+Во время разработки субграфа есть две ключевые команды:
+
+```bash
+$ graph codegen # генерирует типы из файла схемы, указанного в манифесте
+$ graph build # генерирует WebAssembly из файлов AssemblyScript и подготавливает все файлы субграфа в папке /build
+```
+
+### Определение манифеста субграфа
+
+Манифест субграфа (`subgraph.yaml`) идентифицирует источники данных для субграфа, триггеры интересующих событий и функции, которые должны быть выполнены в ответ на эти триггеры. Ниже приведен пример манифеста субграфа для субграфа NEAR:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # ссылка на файл схемы
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # Этот источник данных будет контролировать эту учетную запись
+      startBlock: 10662188 # Обязательно для NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # имя функции в файле мэппинга
+      receiptHandlers:
+        - handler: handleReceipt # имя функции в файле мэппинга
+      file: ./src/mapping.ts # ссылка на файл с мэппингами AssemblyScript
+```
+
+- Субграфы NEAR вводят новый тип источника данных (`near`).
+- `network` должен соответствовать сети на хостинговой Graph Node. В Subgraph Studio мейннет NEAR называется `near-mainnet`, а тестнет NEAR — `near-testnet`
+- Источники данных NEAR содержат необязательное поле `source.account`, которое представляет собой удобочитаемый идентификатор, соответствующий [учетной записи NEAR](https://docs.near.org/concepts/protocol/account-model). Это может быть как основной аккаунт, так и суб-аккаунт.
+- Источники данных NEAR вводят альтернативное необязательное поле `source.accounts`, которое содержит необязательные префиксы и суффиксы.
Необходимо указать хотя бы один префикс или суффикс; они будут соответствовать любому аккаунту, начинающемуся или заканчивающемуся на значения из списка соответственно. Приведенный ниже пример будет совпадать с: `[app|good].*[morning.near|morning.testnet]`. Если необходим только список префиксов или суффиксов, другое поле можно опустить.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+Источники данных NEAR поддерживают два типа обработчиков:
+
+- `blockHandlers`: выполняется для каждого нового блока NEAR. `source.account` не требуется.
+- `receiptHandlers`: выполняется при каждом поступлении, где `source.account` источника данных является получателем. Обратите внимание, что обрабатываются только точные совпадения ([субаккаунты](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) должны быть добавлены как независимые источники данных).
+
+### Определение схемы
+
+Определение схемы описывает структуру базы данных субграфа и отношения между объектами. Это не зависит от исходного источника данных. Подробнее об определении схемы субграфа можно прочитать [здесь](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### Мэппинги AssemblyScript
+
+Обработчики событий написаны на [AssemblyScript](https://www.assemblyscript.org/).
+
+Индексирование NEAR вводит специфичные для NEAR типы данных в [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Всегда 0 для версии < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+Эти типы передаются в обработчики блоков и поступлений:
+
+- Обработчики блоков получат `Block`
+- Обработчики поступлений получат `ReceiptWithOutcome`
+
+Помимо этого, остальная часть [API AssemblyScript](/subgraphs/developing/creating/graph-ts/api/) доступна разработчикам субграфов NEAR во время выполнения мэппинга.
+
+Это включает в себя новую функцию для парсинга JSON — логи в NEAR часто выводятся в виде строк JSON. Новая функция `json.fromString(...)` доступна в рамках [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api), что позволяет разработчикам легко обрабатывать эти логи.
+
+## Развертывание NEAR субграфа
+
+После того как вы построите субграф, пришло время развернуть его на Graph Node для индексирования. Субграфы NEAR можно развернуть на любой Graph Node версии `>=v0.26.x` (эта версия еще не была отмечена и выпущена).
+
+Subgraph Studio и Индексатор обновлений в The Graph Network в настоящее время поддерживают индексирование основной и тестовой сети NEAR в бета-версии со следующими именами сетей:
+
+- `near-mainnet`
+- `near-testnet`
+
+Более подробную информацию о создании и развертывании субграфа в Subgraph Studio можно найти [здесь](/deploying/deploying-a-subgraph-to-studio/).
+
+Как быстрый вводный шаг — первым делом нужно "создать" ваш субграф — это нужно сделать только один раз. В Subgraph Studio это можно сделать через [вашу панель управления](https://thegraph.com/studio/): "Создать субграф".
+
+Как только ваш субграф будет создан, вы можете развернуть его, используя команду CLI `graph deploy`:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # создает субграф на локальной Graph Node (в Subgraph Studio это делается через UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # загружает файлы сборки на указанную конечную точку IPFS и затем развертывает субграф на указанном Graph Node, основываясь на IPFS-хэше манифеста
+```
+
+Конфигурация ноды будет зависеть от того, где развертывается субграф.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Локальная Graph Node (на основе конфигурации по умолчанию)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+После того как ваш субграф был развернут, он будет индексироваться Graph Node. Вы можете проверить его прогресс, сделав запрос к самому субграфу:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Индексирование NEAR с помощью локальной Graph Node
+
+Запуск Graph Node, который индексирует NEAR, имеет следующие эксплуатационные требования:
+
+- Фреймворк NEAR Indexer с инструментарием Firehose
+- Компонент(ы) NEAR Firehose
+- Graph Node с настроенным эндпоинтом Firehose
+
+В ближайшее время мы предоставим более подробную информацию о запуске вышеуказанных компонентов.
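Запрос `_meta`, показанный выше, можно выполнять и программно. Ниже — небольшой набросок на JavaScript (Node.js 18+), который формирует такой запрос и извлекает номер последнего проиндексированного блока; адрес конечной точки здесь условный и зависит от вашей установки:

```javascript
// Набросок: программная проверка прогресса индексирования через запрос `_meta`.
// ENDPOINT — условный адрес; подставьте конечную точку вашей Graph Node.
const ENDPOINT = 'http://localhost:8000/subgraphs/name/near-example'

// Формируем параметры POST-запроса с тем же GraphQL-запросом `_meta`, что и выше
function buildMetaRequest() {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ _meta { block { number } } }' }),
  }
}

// Извлекаем номер последнего проиндексированного блока из JSON-ответа
function latestIndexedBlock(responseJson) {
  return responseJson.data._meta.block.number
}

// Пример использования (требуется запущенная Graph Node):
// fetch(ENDPOINT, buildMetaRequest())
//   .then((res) => res.json())
//   .then((json) => console.log('Проиндексирован блок:', latestIndexedBlock(json)))
```

Такой скрипт удобно запускать в цикле, чтобы дождаться синхронизации перед выполнением основных запросов.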
+ +## Запрос NEAR субграфа + +Конечная точка GraphQL для субграфов NEAR определяется в соответствии с определением схемы и существующим интерфейсом API. Для получения дополнительной информации изучите [документацию по GraphQL API](/subgraphs/querying/graphql-api/). + +## Примеры субграфов + +Вот несколько примеров субграфов для справки: + +[Блоки NEAR](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[Подтверждения NEAR](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## Часто задаваемые вопросы + +### Как работает бета-версия? + +Поддержка NEAR находится на стадии бета-тестирования, что означает, что могут быть изменения в API по мере улучшения интеграции. Пожалуйста, отправьте письмо на адрес near@thegraph.com, чтобы мы могли помочь вам в создании субграфов NEAR и держать вас в курсе последних обновлений! + +### Может ли субграф индексировать чейны NEAR и EVM? + +Нет, субграф может поддерживать источники данных только из одного чейна/сети. + +### Могут ли субграфы реагировать на более конкретные триггеры? + +В настоящее время поддерживаются только триггеры Block и Receipt. Мы исследуем триггеры для вызовов функций к указанной учетной записи. Мы также заинтересованы в поддержке триггеров событий, когда NEAR обладает собственной поддержкой событий. + +### Будут ли срабатывать обработчики поступлений для учетных записей и их дочерних учетных записей? + +Если указано `account`, это будет соответствовать только точному имени аккаунта. Для того чтобы соответствовать субаккаунтам, можно указать поле `accounts`, с `suffixes` и `prefixes`, которые будут соответствовать аккаунтам и субаккаунтам. Например, следующее выражение будет соответствовать всем субаккаунтам `mintbase1.near`: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Могут ли субграфы NEAR выполнять вызовы просмотра аккаунтов NEAR во время мэппингов? + +Это не поддерживается. 
Мы оцениваем, требуется ли этот функционал для индексирования. + +### Могу ли я использовать шаблоны источников данных в своем субграфе NEAR? + +В настоящее время это не поддерживается. Мы оцениваем, требуется ли этот функционал для индексирования. + +### Субграфы Ethereum поддерживают «ожидающие» и «текущие» версии. Как я могу развернуть «ожидающую» версию субграфа NEAR? + +Функциональность ожидающих пока не поддерживается для субграфов NEAR. В промежуточный период вы можете развернуть новую версию на другом "именованном" субграфе, а затем, когда она будет синхронизирована с головой чейна, вы можете повторно развернуть ее на своем основном "именованном" субграфе, который будет использовать тот же самый идентификатор развертывания, так что основной субграф будет мгновенно синхронизирован. + +### На мой вопрос нет ответа, где я могу получить дополнительную помощь в создании субграфов NEAR? + +Если это общий вопрос о разработке субграфов, дополнительную информацию можно найти в остальной части [документации для разработчиков](/subgraphs/quick-start/). В других случаях присоединяйтесь к [Discord-каналу The Graph Protocol](https://discord.gg/graphprotocol) и задавайте вопросы в канале #near или отправьте email на адрес near@thegraph.com. + +## Ссылки + +- [Документация для разработчиков NEAR](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/ru/subgraphs/guides/polymarket.mdx b/website/src/pages/ru/subgraphs/guides/polymarket.mdx new file mode 100644 index 000000000000..f10bf31617c1 --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Запрос данных блокчейна из Polymarket с субграфами на The Graph +sidebarTitle: Запрос данных Polymarket +--- + +Запрашивайте ончейн-данные Polymarket с помощью GraphQL через субграфы в The Graph Network. Субграфы — это децентрализованные API, работающие на основе The Graph, протокола для индексирования и запросов данных из блокчейнов. 
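Все запросы к субграфам в сети The Graph идут через конечную точку шлюза. Ниже — минимальный набросок на JavaScript, показывающий, как такой URL собирается из ключа API и идентификатора субграфа (значения здесь условные; реальная конечная точка Polymarket приведена далее в этом руководстве):

```javascript
// Набросок: формирование URL конечной точки шлюза The Graph для произвольного субграфа.
// Ключ API и идентификатор субграфа — параметры, подставляемые пользователем.
function gatewayUrl(apiKey, subgraphId) {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

// Пример: идентификатор субграфа активности Polymarket из этого руководства
const url = gatewayUrl('<api-key>', 'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp')
console.log(url)
```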
+ +## Субграф Polymarket в Graph Explorer + +Вы можете увидеть интерактивную площадку для запросов на [странице субграфа Polymarket в The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), где можно протестировать любые запросы. + +![Polymarket Endpoint](/img/Polymarket-playground.png) + +## Как пользоваться визуальным редактором запросов + +Визуальный редактор запросов помогает тестировать примерные запросы из Вашего субграфа. + +Вы можете использовать GraphiQL Explorer для составления запросов GraphQL, нажимая на нужные поля. + +### Пример запроса: получите 5 самых высоких выплат от Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Пример вывода + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Схема GraphQL Polymarket + +Схема для этого субграфа определена 
[здесь, в GitHub Polymarket](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+
+### Конечная точка субграфа Polymarket
+
+https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
+
+Конечная точка субграфа Polymarket доступна в [Graph Explorer](https://thegraph.com/explorer).
+
+![Конечная точка Polymarket](/img/Polymarket-endpoint.png)
+
+## Как получить свой собственный ключ API
+
+1. Перейдите на [https://thegraph.com/studio](https://thegraph.com/studio) и подключите свой кошелек
+2. Перейдите по ссылке https://thegraph.com/studio/apikeys/, чтобы создать ключ API
+
+Вы можете использовать этот API-ключ в любом субграфе в [Graph Explorer](https://thegraph.com/explorer), и он не ограничивается только Polymarket.
+
+100 тыс. запросов в месяц бесплатны, что идеально подходит для Вашего стороннего проекта!
+
+## Дополнительные субграфы Polymarket
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Активность Polymarket в Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Прибыль и убыток Polymarket](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Открытый интерес Polymarket](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## Как делать запросы с помощью API
+
+Вы можете передать любой запрос GraphQL в конечную точку Polymarket и получить данные в формате JSON.
+
+Следующий пример кода демонстрирует, как отправить такой запрос из Node.js.
+
+### Пример кода на Node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Отправляем запрос GraphQL
+axios(graphQLRequest)
+  .then((response) => {
+    // Обрабатываем ответ
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Обрабатываем ошибки
+    console.error(error);
+  });
+```
+
+### Дополнительные источники
+
+Для получения дополнительной информации о запросе данных из Вашего субграфа читайте [здесь](/subgraphs/querying/introduction/).
+
+Чтобы изучить все способы оптимизации и настройки Вашего субграфа для повышения производительности, прочитайте больше о [создании субграфа здесь](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..3b61c71f3c74
--- /dev/null
+++ b/website/src/pages/ru/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: Как обезопасить API-ключи с использованием серверных компонентов Next.js
+---
+
+## Обзор
+
+Мы можем использовать [серверные компоненты Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components), чтобы надёжно защитить наш API-ключ от утечки на стороне фронтенда в нашем dApp. Для дополнительной безопасности API-ключа мы также можем [ограничить его использование определёнными субграфами или доменами в Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+В этом руководстве мы рассмотрим, как создать серверный компонент Next.js, который выполняет запрос к субграфу, скрывая API-ключ от фронтенда.
+
+### Предостережения
+
+- Серверные компоненты Next.js не защищают API-ключи от утечки при атаках типа "отказ в обслуживании".
+- Шлюзы The Graph Network имеют стратегии обнаружения и смягчения атак типа "отказ в обслуживании", однако использование серверных компонентов может ослабить эти защиты.
+- Серверные компоненты Next.js вносят риски централизации, так как сервер может выйти из строя.
+
+### Почему это необходимо
+
+В стандартном React-приложении API-ключи, включённые в код внешнего интерфейса, могут быть раскрыты на стороне клиента, что создает угрозу безопасности. Хотя обычно используются файлы `.env`, они не обеспечивают полной защиты ключей, так как код React выполняется на стороне клиента, раскрывая API-ключ в заголовках. Серверные компоненты Next.js решают эту проблему, обрабатывая конфиденциальные операции на сервере.
+
+### Использование рендеринга на стороне клиента для выполнения запроса к субграфу
+
+![Рендеринг на клиентской стороне](/img/api-key-client-side-rendering.png)
+
+### Предварительные требования
+
+- API-ключ от [Subgraph Studio](https://thegraph.com/studio)
+- Базовые знания о Next.js и React.
+- Существующий проект Next.js, который использует [App Router](https://nextjs.org/docs/app).
+
+## Пошаговое руководство
+
+### Шаг 1: Настройка переменных среды
+
+1. В корневой папке нашего проекта Next.js создайте файл `.env.local`.
+2. Добавьте наш API-ключ: `API_KEY=<api-key-here>`.
+
+### Шаг 2: Создание серверного компонента
+
+1. В директории `components` создайте новый файл `ServerComponent.js`.
+2. Используйте приведённый пример кода для настройки серверного компонента.
+ +### Шаг 3: Реализация API-запроса на стороне сервера + +В файл `ServerComponent.js` добавьте следующий код: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Шаг 4: Использование серверного компонента
+
+1. В файл страницы (например, `pages/index.js`) импортируйте `ServerComponent`.
+2. Отрендерите компонент:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Шаг 5: Запуск и тестирование нашего децентрализованного приложения (Dapp)
+
+Запустите наше приложение Next.js с помощью команды `npm run dev`. Убедитесь, что серверный компонент запрашивает данные, не раскрывая API-ключ.
+
+![Рендеринг на стороне сервера](/img/api-key-server-side-rendering.png)
+
+### Заключение
+
+Используя серверные компоненты Next.js, мы эффективно скрыли ключ API от клиентской стороны, улучшив безопасность нашего приложения. Этот метод гарантирует, что чувствительные операции обрабатываются на сервере, вдали от потенциальных уязвимостей на стороне клиента. В заключение, не забудьте ознакомиться с [другими мерами безопасности для ключей API](/subgraphs/querying/managing-api-keys/), чтобы повысить уровень безопасности своего ключа API.
diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..5c3b634a5620
--- /dev/null
+++ b/website/src/pages/ru/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Агрегируйте данные с помощью композиции субграфов
+sidebarTitle: Создайте композиционный субграф с несколькими субграфами
+---
+
+Используйте композицию субграфов для ускорения разработки. Создайте базовый субграф с основными данными, а затем разрабатывайте дополнительные субграфы на его основе.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Введение
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Преимущества композиции
+
+Композиция субграфов — это мощная функция для масштабирования, позволяющая вам:
+
+- Повторно использовать, смешивать и комбинировать существующие данные
+- Оптимизировать разработку и запросы
+- Использовать несколько источников данных (до пяти исходных субграфов)
+- Ускорить синхронизацию вашего субграфа
+- Обрабатывать ошибки и оптимизировать повторную синхронизацию
+
+## Обзор архитектуры
+
+Настройка для этого примера включает два субграфа:
+
+1. **Исходный субграф**: отслеживает данные событий как объекты.
+2. **Зависимый субграф**: использует исходный субграф в качестве источника данных.
+
+Вы можете найти их в директориях `source` и `dependent`.
+
+- **Исходный субграф** — это базовый субграф для отслеживания событий, который записывает события, генерируемые соответствующими контрактами.
+- **Зависимый субграф** ссылается на исходный субграф как на источник данных, используя его объекты в качестве триггеров.
+
+В то время как исходный субграф является стандартным субграфом, зависимый субграф использует функцию композиции субграфов.
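Для наглядности: источник данных зависимого субграфа объявляется в манифесте с `kind: subgraph`. Ниже — примерный набросок такого объявления; все имена, идентификатор развертывания и версии здесь условные и приведены только для иллюстрации:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # источник данных — другой субграф, а не контракт
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'Qm...' # Deployment ID исходного субграфа (условный)
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      entities:
        - SourceEntity
      handlers:
        - handler: handleSourceEntity # вызывается для объектов исходного субграфа
          entity: SourceEntity
      file: ./src/mapping.ts
```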
+
+## Предварительные требования
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but the composed entities on them cannot also use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
+
+## Начнем
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Специфические особенности
+
+- Чтобы сделать этот пример простым, все исходные субграфы используют только блок-обработчики. Однако в реальной среде каждый исходный субграф будет использовать данные из разных смарт-контрактов.
+- Приведенные ниже примеры показывают, как импортировать и расширять схему другого субграфа для улучшения его функциональности.
+- Каждый исходный субграф оптимизирован для работы с конкретным объектом.
+- Все перечисленные команды устанавливают необходимые зависимости, генерируют код на основе схемы GraphQL, строят субграф и деплоят его на ваш локальный экземпляр Graph Node.
+
+### Шаг 1. Развертывание исходного субграфа для времени блока
+
+Этот первый исходный субграф вычисляет время блока для каждого блока.
+
+- Он импортирует схемы из других субграфов и добавляет объект `block` с полем `timestamp`, представляющим время, когда был добыт каждый блок.
+- Он слушает события блокчейна, связанные с временем (например, метки времени блоков), и обрабатывает эти данные для обновления объектов субграфа соответствующим образом.
+
+Чтобы развернуть этот субграф локально, выполните следующие команды:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Шаг 2. Развертывание исходного субграфа для стоимости блока
+
+Этот второй исходный субграф индексирует стоимость каждого блока.
+
+#### Ключевые функции
+
+- Он импортирует схемы из других субграфов и добавляет объект `block` с полями, связанными со стоимостью.
+- Он отслеживает события блокчейна, связанные с затратами (например, плата за газ, стоимость транзакций), и обрабатывает эти данные для соответствующего обновления объектов субграфа.
+
+Чтобы развернуть этот субграф локально, выполните те же команды, что и выше.
+
+### Шаг 3. Определите размер блока в исходном субграфе
+
+Этот третий исходный субграф индексирует размер каждого блока. Чтобы развернуть этот субграф локально, выполните те же команды, что и выше.
+
+#### Ключевые функции
+
+- Он импортирует существующие схемы из других субграфов и добавляет объект `block` с полем `size`, представляющим размер каждого блока.
+- Он слушает события блокчейна, связанные с размерами блоков (например, хранение данных или объем), и обрабатывает эти данные для обновления объектов субграфа соответствующим образом.
+
+### Шаг 4. Объединение в субграфе для статистики блоков
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Примечание:
+>
+> - Любое изменение в исходном субграфе, скорее всего, приведет к созданию нового идентификатора развертывания.
> - Обязательно обновите идентификатор развертывания в адресе источника данных в манифесте субграфа, чтобы воспользоваться последними изменениями.
> - Все исходные субграфы должны быть развернуты до того, как будет развернут композиционный субграф.
+
+#### Ключевые функции
+
+- Он предоставляет объединенную модель данных, которая охватывает все соответствующие метрики блоков.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Основные выводы
+
+- Этот мощный инструмент масштабирует разработку ваших субграфов и позволяет комбинировать несколько субграфов.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- Эта функция открывает возможности масштабируемости, упрощая как разработку, так и эффективность обслуживания.
+
+## Дополнительные ресурсы
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- Чтобы добавить продвинутые функции в ваш субграф, ознакомьтесь с [продвинутыми функциями субграфа](/developing/creating/advanced/).
+- Чтобы узнать больше об агрегациях, ознакомьтесь с разделом [Временные ряды и агрегации](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..fcc064d4190f --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Быстрая и простая отладка субграфа с использованием форков +--- + +Как и многие системы, обрабатывающие большие объемы данных, Индексаторы The Graph (Graph Nodes) могут потребовать значительное время для синхронизации вашего субграфа с целевым блокчейном. Несоответствие между быстрыми изменениями, необходимыми для отладки, и длительным временем ожидания индексирования крайне контрпродуктивно, и мы хорошо осведомлены об этом. Именно поэтому мы представляем **форк субграфа**, разработанный [LimeChain](https://limechain.tech/), и в этой статье я покажу, как эта функция может существенно ускорить процесс отладки субграфов! + +## Итак, что это? + +**Форкинг субграфа** — это процесс ленивой загрузки объектов из _другого_ хранилища субграфа (обычно удаленного). + +В контексте отладки **форкинг субграфа** позволяет вам отлаживать ваш неудавшийся субграф на блоке _X_ без необходимости ждать синхронизации с этим блоком _X_. + +## Что? Как? + +Когда вы развертываете субграф на удалённой Graph Node для индексирования и он даёт сбой на блоке _X_, хорошая новость заключается в том, что Graph Node всё равно будет обслуживать GraphQL-запросы, используя своё хранилище, синхронизированное с блоком _X_. Это замечательно! Это означает, что мы можем воспользоваться этим "актуальным" хранилищем, чтобы исправить ошибки, возникающие при индексировании блока _X_. + +Короче говоря, мы собираемся _форкать неудавшийся субграф_ с удалённой Graph Node, которая гарантированно имеет индексированный субграф до блока _X_, чтобы предоставить локально развернутому субграфу, который отлаживается на блоке _X_, актуальное представление о состоянии индексирования.
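Идею «ленивой» загрузки можно проиллюстрировать упрощённым наброском (имена `ForkedStore` и `fetchFromFork` условны и не являются частью API Graph Node — это лишь модель поведения):

```typescript
// Упрощённая модель: сущность сначала ищется в локальном хранилище,
// и только при промахе лениво запрашивается из форкнутого (удалённого) субграфа.
type Entity = Record<string, unknown>

class ForkedStore {
  private local = new Map<string, Entity>()

  constructor(private fetchFromFork: (id: string) => Entity | null) {}

  set(id: string, entity: Entity): void {
    this.local.set(id, entity) // все записи идут только в локальное хранилище
  }

  get(id: string): Entity | null {
    const cached = this.local.get(id)
    if (cached !== undefined) return cached
    // Промах: лениво подтягиваем состояние из удалённого форка
    const remote = this.fetchFromFork(id)
    if (remote !== null) this.local.set(id, remote)
    return remote
  }
}

// Условный «удалённый» форк, в котором уже проиндексирован один Gravatar
const store = new ForkedStore((id) => (id === '0x1' ? { id, displayName: 'Alice' } : null))

console.log(store.get('0x1')) // сущность найдена в форке и закеширована локально
console.log(store.get('0x2')) // нет ни локально, ни в форке
```

Реальный механизм устроен сложнее (он работает на уровне GraphQL-запросов к хранилищу), но суть та же: локальная нода не обязана синхронизироваться до блока _X_, чтобы «видеть» состояние на этом блоке.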
+ +## Пожалуйста, покажите мне какой-нибудь код! + +Чтобы сосредоточиться на отладке субграфа, давайте сделаем всё проще и используем [пример субграфа](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar), индексирующий смарт-контракт Ethereum Gravity. + +Вот обработчики, определённые для индексирования `Gravatar`, без каких-либо ошибок: + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +Ой, как неудачно! Когда я деплою мой идеально выглядящий субграф в [Subgraph Studio](https://thegraph.com/studio/), он терпит неудачу с ошибкой _"Gravatar не найден!"_. + +Обычный способ попытаться исправить это: + +1. Внести изменения в источник мэппингов, которые, по Вашему мнению, решат проблему (в то время как я знаю, что это не так). +2. Перезапустить развертывание своего субграфа в [Subgraph Studio](https://thegraph.com/studio/) (или на другую удалённую Graph Node). +3. Подождать, пока он синхронизируется. +4. Если он снова сломается, вернуться к пункту 1, в противном случае: Ура! + +Действительно, это похоже на обычный процесс отладки, но есть один шаг, который ужасно замедляет процесс: _3. Подождать, пока он синхронизируется._ + +Используя **форкинг субграфа**, мы можем фактически исключить этот шаг. Вот как это выглядит: + +0. Запустите локальную Graph Node с помощью **_соответствующего набора fork-base_**. +1.
Внесите изменения в источник мэппингов, которые, по Вашему мнению, решат проблему. +2. Разверните на локальной Graph Node, **_используя форкинг неудавшегося субграфа_** и **_начав с проблемного блока_**. +3. Если он снова сломается, вернитесь к пункту 1, в противном случае: Ура! + +Сейчас у Вас может появиться 2 вопроса: + +1. fork-base - что это??? +2. Форкнуть кого?! + +И я вам отвечаю: + +1. `fork-base` — это "базовый" URL, так что когда _ID субграфа_ добавляется, получившийся URL (`<fork-base>/<subgraph-id>`) становится действительной конечной точкой GraphQL для хранилища субграфа. +2. Форкнуть легко, не нужно напрягаться: + +```bash +$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +Кроме того, не забудьте установить поле `dataSources.source.startBlock` в манифесте субграфа на номер проблемного блока, чтобы пропустить индексирование ненужных блоков и воспользоваться форком! + +Итак, вот что я делаю: + +1. Я запускаю локальную Graph Node ([вот как это сделать](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) с опцией `fork-base`, установленной на: `https://api.thegraph.com/subgraphs/id/`, так как я собираюсь форкать субграф, тот самый, который я развернул ранее, с [Subgraph Studio](https://thegraph.com/studio/). + +``` +$ cargo run -p graph-node --release -- \ + --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \ + --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \ + --ipfs 127.0.0.1:5001 \ + --fork-base https://api.thegraph.com/subgraphs/id/ +``` + +2. После тщательной проверки я замечаю, что существует несоответствие в представлениях `id`, используемых при индексировании `Gravatar` в двух моих обработчиках. В то время как `handleNewGravatar` конвертирует его в hex (`event.params.id.toHex()`), `handleUpdatedGravatar` использует int32 (`event.params.id.toI32()`), что приводит к тому, что `handleUpdatedGravatar` завершается ошибкой и появляется сообщение "Gravatar not found!".
Я заставляю оба обработчика конвертировать `id` в hex. +3. После внесения изменений, я развертываю свой субграф на локальной Graph Node, **форкаю неудачно развернутый субграф** и устанавливаю значение `dataSources.source.startBlock` равным `6190343` в файле `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. Я проверяю логи, созданные локальной Graph Node, и, ура!, кажется, все работает. +5. Я развертываю теперь безошибочный субграф на удаленной Graph Node и живу счастливо до конца своих дней! (только без картошки) diff --git a/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..1469f39676a8 --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Генератор кода безопасного субграфа +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) — это инструмент генерации кода, который создает набор вспомогательных функций из схемы GraphQL проекта. Он гарантирует, что все взаимодействия с объектами в Вашем субграфе будут полностью безопасными и последовательными. + +## Зачем интегрироваться с Subgraph Uncrashable? + +- **Непрерывная работоспособность**: неправильная обработка объектов может привести к сбоям в работе субграфа, что может нарушить работу проектов, зависимых от The Graph. Настройте вспомогательные функции, чтобы сделать ваши субграфы "неподвластными сбоям" и обеспечить бесперебойную работу бизнеса. + +- **Полностью безопасно**: распространенные проблемы при разработке субграфа включают ошибки загрузки неопределенных объектов, отсутствие установки или инициализации всех значений объектов, а также гонки данных при загрузке и сохранении объектов. Убедитесь, что все взаимодействия с объектами являются полностью атомарными.
+ +- **Конфигурируемо пользователем**: установите значения по умолчанию и настройте уровень проверок безопасности в соответствии с потребностями вашего проекта. Предупреждающие логи записываются в случае нарушения логики субграфа, что помогает устранить проблему и обеспечить точность данных. + +**Ключевые особенности** + +- Инструмент генерации кода поддерживает **все** типы субграфов и конфигурируем для пользователей, чтобы они могли устанавливать разумные значения по умолчанию. Генерация кода будет использовать эту конфигурацию для создания вспомогательных функций, соответствующих спецификации пользователя. + +- Фреймворк также включает в себя способ создания пользовательских, но безопасных функций установки для групп переменных объектов (через config-файл). Таким образом, пользователь не сможет загрузить/использовать устаревшую graph entity, и также не сможет забыть о сохранении или установке переменной, которая требуется функцией. + +- Предупреждающие логи записываются, указывая на места нарушения логики субграфа, чтобы помочь устранить проблему и обеспечить точность данных. + +Subgraph Uncrashable можно запустить как необязательный флаг с помощью команды Graph CLI codegen. + +```sh +graph codegen -u [options] [<subgraph-manifest>] +``` + +Ознакомьтесь с [документацией по subgraph uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/docs/) или посмотрите это [видеоруководство](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial), чтобы узнать больше и начать разрабатывать более безопасные субграфы.
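Чтобы представить, какого рода хелперы генерирует инструмент, вот упрощённый набросок «безопасной» загрузки сущности (имена, хранилище и значения по умолчанию здесь условны; реальный сгенерированный код зависит от Вашей схемы GraphQL и конфигурации):

```typescript
// Набросок того, что делает «безопасный» сгенерированный хелпер:
// вместо load(), способного вернуть null и обрушить обработчик,
// он всегда возвращает инициализированную сущность с дефолтами.
interface Gravatar {
  id: string
  owner: string
  displayName: string
}

const store = new Map<string, Gravatar>() // условная модель хранилища

function safeLoadGravatar(id: string): Gravatar {
  let entity = store.get(id)
  if (entity === undefined) {
    // Безопасные значения по умолчанию вместо сбоя
    entity = { id, owner: '0x0', displayName: '' }
    store.set(id, entity)
    console.warn(`Gravatar ${id} не найден — создан со значениями по умолчанию`)
  }
  return entity
}

const g = safeLoadGravatar('1')
g.displayName = 'Alice'
console.log(store.get('1')) // сущность существует и инициализирована
```

Сбойный обработчик из руководства по отладке выше, получив несуществующий `id`, с таким хелпером записал бы предупреждение в лог вместо критической ошибки.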
diff --git a/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx new file mode 100644 index 000000000000..fa78162eb377 --- /dev/null +++ b/website/src/pages/ru/subgraphs/guides/transfer-to-the-graph.mdx @@ -0,0 +1,104 @@ +--- +title: Перенос в The Graph +--- + +Быстро перенесите свои субграфы с любой платформы в [децентрализованную сеть The Graph](https://thegraph.com/networks/). + +## Преимущества перехода на The Graph + +- Используйте тот же субграф, который уже используется в Ваших приложениях, с миграцией без времени простоя. +- Повышайте надежность благодаря глобальной сети, поддерживаемой более чем 100 индексаторами. +- Получайте молниеносную поддержку для субграфов круглосуточно, с командой инженеров на связи. + +## Обновите свой субграф до The Graph за 3 простых шага + +1. [Настройте свою среду Studio](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Разверните свой субграф в Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Опубликуйте в сети The Graph](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Настройте свою среду в Studio + +### Создайте субграф в Subgraph Studio + +- Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и подключите свой кошелек. +- Нажмите "Создать субграф". Рекомендуется называть субграф с использованием заглавного регистра: "Subgraph Name Chain Name". + +> Примечание: после публикации название субграфа можно будет изменять, но для этого каждый раз потребуется действие в сети, поэтому сразу дайте ему правильное название. + +### Установите Graph CLI + +Для использования Graph CLI у Вас должны быть установлены [Node.js](https://nodejs.org/) и выбранный Вами менеджер пакетов (`npm` или `pnpm`).
Проверьте [самую последнюю](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) версию CLI. + +Выполните следующую команду на своем локальном компьютере: + +Использование [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Используйте следующую команду для создания субграфа в Studio с помощью CLI: + +```sh +graph init --product subgraph-studio +``` + +### Аутентификация Вашего субграфа + +В Graph CLI используйте команду `auth`, как показано в Subgraph Studio: + +```sh +graph auth +``` + +## 2. Разверните свой субграф в Studio + +Если у Вас есть исходный код, Вы можете легко развернуть его в Studio. Если его нет, вот быстрый способ развернуть Ваш субграф. + +В Graph CLI выполните следующую команду: + +```sh +graph deploy --ipfs-hash <your-subgraph-ipfs-hash> +``` + +> **Примечание:** у каждого субграфа есть хэш IPFS (Deployment ID), который выглядит так: "Qmasdfad...". Для развертывания просто используйте этот **IPFS хэш**. Вам будет предложено ввести версию (например, v0.0.1). + +## 3. Опубликуйте свой субграф в The Graph Network + +![кнопка публикации](/img/publish-sub-transfer.png) + +### Запросите Ваш Субграф + +> Чтобы привлечь около 3 индексаторов для обработки запросов к вашему субграфу, рекомендуется выделить как минимум 3 000 GRT. Чтобы узнать больше о курировании, ознакомьтесь с разделом [Курирование](/resources/roles/curating/) на The Graph. + +Вы можете начать [выполнять запросы](/subgraphs/querying/introduction/) к любому субграфу, отправляя GraphQL-запрос на конечную точку субграфа, которая находится в верхней части его страницы в эксплорере в Subgraph Studio.
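Схематично конечная точка шлюза складывается из API-ключа и идентификатора субграфа — упрощённый набросок на TypeScript (значения и имя сущности в запросе условны):

```typescript
// Набросок: собираем URL шлюза из API-ключа и ID субграфа.
// Формат URL соответствует примеру из этого руководства; сами значения условны.
function gatewayEndpoint(apiKey: string, subgraphId: string): string {
  return `https://gateway-arbitrum.network.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

// Тело POST-запроса — обычный GraphQL-запрос (имя сущности гипотетическое)
const body = JSON.stringify({ query: '{ punks(first: 5) { id } }' })

console.log(gatewayEndpoint('ВАШ-API-КЛЮЧ', 'HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK'))
// Отправка запроса: POST на этот URL с заголовком Content-Type: application/json и телом body
```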
+ +#### Пример + +[Субграф CryptoPunks Ethereum](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) от Messari: + +![URL запроса](/img/cryptopunks-screenshot-transfer.png) + +URL для запроса этого субграфа: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**Ваш-api-ключ**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK +``` + +Теперь Вам нужно просто вставить **Ваш собственный API-ключ**, чтобы начать отправлять GraphQL-запросы на эту конечную точку. + +### Получение собственного API-ключа + +Вы можете создать API-ключи в Subgraph Studio в меню «API Keys» в верхней части страницы: + +![API ключи](/img/Api-keys-screenshot.png) + +### Мониторинг статуса субграфа + +После обновления вы можете получить доступ к своим субграфам и управлять ими в [Subgraph Studio](https://thegraph.com/studio/) и исследовать все субграфы в [The Graph Explorer](https://thegraph.com/networks/). + +### Дополнительные ресурсы + +- Чтобы быстро создать и опубликовать новый субграф, ознакомьтесь с [Руководством по быстрому старту](/subgraphs/quick-start/). +- Чтобы исследовать все способы оптимизации и настройки вашего субграфа для лучшей производительности, читайте больше о [создании субграфа здесь](/developing/creating-a-subgraph/). diff --git a/website/src/pages/ru/subgraphs/querying/best-practices.mdx b/website/src/pages/ru/subgraphs/querying/best-practices.mdx index e7ecc1795d98..d0189ac234ee 100644 --- a/website/src/pages/ru/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/ru/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
--- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/ru/subgraphs/querying/from-an-application.mdx b/website/src/pages/ru/subgraphs/querying/from-an-application.mdx index 75853752f129..817d034d2d9b 100644 --- a/website/src/pages/ru/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/ru/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Querying from an Application +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. 
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION> @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION> ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: ``` https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID> ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
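As a rough sketch (the placeholders and entity name below are illustrative, not a specific API), querying such an endpoint with plain `fetch` could look like:

```typescript
// Minimal sketch: POST a GraphQL query to a Graph Network endpoint.
// <api-key> and <subgraph-id> are placeholders you must replace with your own values.
const endpoint = 'https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>'

async function querySubgraph(query: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  const { data, errors } = await res.json()
  // GraphQL-level errors come back in the response body, not as HTTP errors
  if (errors) throw new Error(JSON.stringify(errors))
  return data
}

// Example call (the entity name is hypothetical, depends on your Subgraph's schema):
// querySubgraph('{ tokens(first: 5) { id } }').then(console.log)
```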
## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Шаг 1 @@ -51,7 +52,7 @@ Install The Graph Client CLI in your project: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# или, с NPM: npm install --save-dev @graphprotocol/client-cli ``` @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Шаг 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Шаг 1 diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/README.md b/website/src/pages/ru/subgraphs/querying/graph-client/README.md index 416cadc13c6f..071bb3c883b7 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/ru/subgraphs/querying/graph-client/README.md @@ -1,54 +1,54 @@ -# The Graph Client Tools +# Инструменты клиента The Graph -This repo is the home for [The 
Graph](https://thegraph.com) consumer-side tools (for both browser and NodeJS environments). +Этот репозиторий является домом для потребительских инструментов [The Graph](https://thegraph.com) (как для браузерных, так и для NodeJS сред). -## Background +## Предисловие -The tools provided in this repo are intended to enrich and extend the DX, and add the additional layer required for dApps in order to implement distributed applications. +Инструменты, предоставленные в этом репозитории, предназначены для улучшения и расширения разработческого опыта (DX), а также для добавления дополнительного слоя, необходимого для децентрализованных приложений (dApps), чтобы реализовать распределенные приложения. -Developers who consume data from [The Graph](https://thegraph.com) GraphQL API often need peripherals for making data consumption easier, and also tools that allow using multiple indexers at the same time. +Разработчики, которые потребляют данные через GraphQL API от [The Graph](https://thegraph.com), часто нуждаются в периферийных инструментах для облегчения потребления данных, а также в инструментах, которые позволяют использовать несколько индексаторов одновременно. -## Features and Goals +## Функции и цели -This library is intended to simplify the network aspect of data consumption for dApps. The tools provided within this repository are intended to run at build time, in order to make execution faster and performant at runtime. +Эта библиотека предназначена для упрощения сетевого аспекта потребления данных для децентрализованных приложений (dApps). Инструменты, предоставленные в этом репозитории, предназначены для работы во время сборки, чтобы сделать выполнение более быстрым и производительным в момент выполнения. -> The tools provided in this repo can be used as standalone, but you can also use it with any existing GraphQL Client! 
+> Инструменты, предоставленные в этом репозитории, могут использоваться как самостоятельно, так и в сочетании с любым существующим GraphQL клиентом! -| Status | Feature | Notes | -| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| Статус | Функция | Примечания | +| :----: | -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| ✅ | Несколько индексаторов | основано на стратегиях выборки | +| ✅ | Стратегия выборки | timeout, retry, fallback, race, highestValue | +| ✅ | Валидации и оптимизации во время сборки | | +| ✅ | Композиция на стороне клиента | с улучшенным планировщиком выполнения (на основе GraphQL-Mesh) | +| ✅ | 
Кросс-чейн обработка субграфа | Использование схожих субграфов как единого источника | +| ✅ | Прямое выполнение (автономный режим) | напрямую, без GraphQL-клиента | +| ✅ | Локальные (клиентские) мутации | | +| ✅ | [Автоматическое отслеживание блоков](../packages/block-tracking/README.md) | отслеживание номеров блоков [как описано здесь](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Автоматическая пагинация](../packages/auto-pagination/README.md) | выполнение нескольких запросов в одном вызове для получения данных сверх лимита индексатора | +| ✅ | Интеграция с `@apollo/client` | | +| ✅ | Интеграция с `urql` | | +| ✅ | Поддержка TypeScript | со встроенным GraphQL Codegen и `TypedDocumentNode` | +| ✅ | [`@live` запросы](./live.md) | На основе опроса | -> You can find an [extended architecture design here](./architecture.md) +> Вы можете найти [расширенный архитектурный дизайн здесь](./architecture.md) -## Getting Started +## Начало работы -You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: +Вы можете посмотреть [Episode 45 из `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client), чтобы узнать больше о Graph Client: -[![GraphQL.wtf Episode 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) +[![GraphQL.wtf Эпизод 45](https://img.youtube.com/vi/ZsRAmyUtvwg/0.jpg)](https://graphql.wtf/episodes/45-the-graph-client) -To get started, make sure to install [The Graph Client CLI] in your project: +Чтобы начать, убедитесь, что установили [The Graph Client CLI] в свой проект: ```sh yarn add -D @graphprotocol/client-cli -# or, with NPM: +# или, с NPM: npm install --save-dev @graphprotocol/client-cli ``` -> The CLI is installed as dev dependency since we are using it to produce optimized runtime artifacts that can be loaded directly from your app!
+> CLI устанавливается как зависимость для разработки, поскольку мы используем его для создания оптимизированных артефактов времени выполнения, которые могут быть загружены непосредственно из Вашего приложения! -Create a configuration file (called `.graphclientrc.yml`) and point to your GraphQL endpoints provided by The Graph, for example: +Создайте конфигурационный файл (под названием `.graphclientrc.yml`) и укажите Ваши GraphQL конечные точки, предоставленные The Graph, например: ```yml # .graphclientrc.yml @@ -59,28 +59,28 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 ``` -Now, create a runtime artifact by running The Graph Client CLI: +Теперь создайте артефакт времени выполнения, запустив The Graph Client CLI: ```sh graphclient build ``` -> Note: you need to run this with `yarn` prefix, or add that as a script in your `package.json`. +> Примечание: Вам нужно выполнить это с префиксом `yarn`, или добавить это как скрипт в свой `package.json`. -This should produce a ready-to-use standalone `execute` function, that you can use for running your application GraphQL operations, you should have an output similar to the following: +Это должно создать готовую к использованию автономную функцию `execute`, которую Вы сможете использовать для выполнения операций GraphQL в своем приложении. Вы должны получить вывод, похожий на следующий: ```sh -GraphClient: Cleaning existing artifacts -GraphClient: Reading the configuration -🕸️: Generating the unified schema -🕸️: Generating artifacts -🕸️: Generating index file in TypeScript -🕸️: Writing index.ts for ESM to the disk. -🕸️: Cleanup -🕸️: Done! => .graphclient +GraphClient: Очистка существующих артефактов +GraphClient: Чтение конфигурации +🕸️: Генерация унифицированной схемы +🕸️: Генерация артефактов +🕸️: Генерация индекса в TypeScript +🕸️: Запись index.ts для ESM на диск +🕸️: Очистка +🕸️: Готово! 
=> .graphclient ``` -Now, the `.graphclient` artifact is generated for you, and you can import it directly from your code, and run your queries: +Теперь артефакт `.graphclient` для Вас сгенерирован, и Вы можете импортировать его напрямую в свой код и выполнять запросы: ```ts import { execute } from '../.graphclient' @@ -111,54 +111,54 @@ async function main() { main() ``` -### Using Vanilla JavaScript Instead of TypeScript +### Использование Vanilla JavaScript вместо TypeScript -GraphClient CLI generates the client artifacts as TypeScript files by default, but you can configure CLI to generate JavaScript and JSON files together with additional TypeScript definition files by using `--fileType js` or `--fileType json`. +По умолчанию, GraphClient CLI генерирует артефакты клиента в виде файлов TypeScript, но Вы можете настроить CLI для генерации файлов JavaScript и JSON вместе с дополнительными файлами определений TypeScript, используя `--fileType js` или `--fileType json`. -`js` flag generates all files as JavaScript files with ESM Syntax and `json` flag generates source artifacts as JSON files while entrypoint JavaScript file with old CommonJS syntax because only CommonJS supports JSON files as modules. +Флаг `js` генерирует все файлы как JavaScript файлы с синтаксисом ESM, а флаг `json` генерирует исходные артефакты как JSON файлы, при этом файл точки входа будет на старом синтаксисе CommonJS, поскольку только CommonJS поддерживает JSON файлы как модули. -Unless you use CommonJS(`require`) specifically, we'd recommend you to use `js` flag. +Если Вы специально не используете CommonJS (`require`), мы рекомендуем использовать флаг `js`. 
`graphclient --fileType js` -- [An example for JavaScript usage in CommonJS syntax with JSON files](../examples/javascript-cjs) -- [An example for JavaScript usage in ESM syntax](../examples/javascript-esm) +- [Пример использования JavaScript в синтаксисе CommonJS с JSON файлами](../examples/javascript-cjs) +- [Пример использования JavaScript в синтаксисе ESM](../examples/javascript-esm) -#### The Graph Client DevTools +#### Инструменты разработки The Graph Client -The Graph Client CLI comes with a built-in GraphiQL, so you can experiment with queries in real-time. +The Graph Client CLI включает встроенный GraphiQL, который позволяет Вам экспериментировать с запросами в реальном времени. -The GraphQL schema served in that environment, is the eventual schema based on all composed Subgraphs and transformations you applied. +GraphQL-схема, обслуживаемая в этой среде, представляет собой итоговую схему, основанную на всех составленных субграфах и примененных преобразованиях. -To start the DevTool GraphiQL, run the following command: +Чтобы запустить DevTool GraphiQL, выполните следующую команду: ```sh graphclient serve-dev ``` -And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 +А затем откройте [http://localhost:4000/](http://localhost:4000/), чтобы использовать GraphiQL. Теперь Вы можете экспериментировать со своей GraphQL-схемой на стороне клиента локально! 
🥳 -#### Examples +#### Примеры -You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: +Вы также можете обратиться к [каталогу с примерами в этом репозитории](../examples) для более продвинутых примеров и примеров интеграции: -- [TypeScript & React example with raw `execute` and built-in GraphQL-Codegen](../examples/execute) -- [TS/JS NodeJS standalone mode](../examples/node) -- [Client-Side GraphQL Composition](../examples/composition) -- [Integration with Urql and React](../examples/urql) -- [Integration with NextJS and TypeScript](../examples/nextjs) -- [Integration with Apollo-Client and React](../examples/apollo) -- [Integration with React-Query](../examples/react-query) -- _Cross-chain merging (same Subgraph, different chains)_ -- - [Parallel SDK calls](../examples/cross-chain-sdk) -- - [Parallel internal calls with schema extensions](../examples/cross-chain-extension) -- [Customize execution with Transforms (auto-pagination and auto-block-tracking)](../examples/transforms) +- [Пример TypeScript и React с использованием `execute` и встроенного GraphQL-Codegen](../examples/execute) +- [Автономный режим TS/JS NodeJS](../examples/node) +- [Клиентская композиция GraphQL](../examples/composition) +- [Интеграция с Urql и React](../examples/urql) +- [Интеграция с NextJS и TypeScript](../examples/nextjs) +- [Интеграция с Apollo-Client и React](../examples/apollo) +- [Интеграция с React-Query](../examples/react-query) +- _Кросс-чейн слияние (тот же субграф, разные чейны)_ +- - [Параллельные вызовы SDK](../examples/cross-chain-sdk) +- - [Параллельные внутренние вызовы с расширениями схемы](../examples/cross-chain-extension) +- [Настройка выполнения с помощью трансформаций (автоматическая пагинация и автоматическое отслеживание блоков)](../examples/transforms) -### Advanced Examples/Features +### Продвинутые примеры/функции -#### Customize Network Calls +#### Настройка сетевых вызовов -You can customize
the network execution (for example, to add authentication headers) by using `operationHeaders`: +Вы можете настроить выполнение сетевых запросов (например, для добавления заголовков аутентификации), используя `operationHeaders`: ```yaml sources: @@ -170,7 +170,7 @@ sources: Authorization: Bearer MY_TOKEN ``` -You can also use runtime variables if you wish, and specify it in a declarative way: +Вы также можете использовать переменные времени выполнения, если хотите, и указать их декларативным способом: ```yaml sources: @@ -182,7 +182,7 @@ sources: Authorization: Bearer {context.config.apiToken} ``` -Then, you can specify that when you execute operations: +Затем Вы можете указать следующее, когда выполняете операции: ```ts execute(myQuery, myVariables, { @@ -192,11 +192,11 @@ execute(myQuery, myVariables, { }) ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Полную документацию по обработчику `graphql` можно найти [здесь](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Environment Variables Interpolation +#### Интерполяция переменных среды -If you wish to use environment variables in your Graph Client configuration file, you can use interpolation with `env` helper: +Если Вы хотите использовать переменные среды в конфигурационном файле своего Graph Client, Вы можете использовать интерполяцию с помощью помощника `env`: ```yaml sources: @@ -205,12 +205,12 @@ sources: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 operationHeaders: - Authorization: Bearer {env.MY_API_TOKEN} # runtime + Authorization: Bearer {env.MY_API_TOKEN} # время выполнения ``` -Then, make sure to have `MY_API_TOKEN` defined when you run `process.env` at runtime. +Затем убедитесь, что `MY_API_TOKEN` определён, когда Вы выполняете `process.env` во время выполнения программы. 
-You can also specify environment variables to be filled at build time (during `graphclient build` run) by using the env-var name directly: +Вы также можете указать переменные среды, которые будут заполняться во время сборки (при запуске `graphclient build`), используя непосредственно имя переменной среды: ```yaml sources: @@ -219,23 +219,23 @@ sources: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 operationHeaders: - Authorization: Bearer ${MY_API_TOKEN} # build time + Authorization: Bearer ${MY_API_TOKEN} # время сборки ``` -> You can find the [complete documentation for the `graphql` handler here](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). +> Полную документацию по обработчику `graphql` можно найти [здесь](https://graphql-mesh.com/docs/handlers/graphql#config-api-reference). -#### Fetch Strategies and Multiple Graph Indexers +#### Стратегии выборки данных и работа с несколькими Graph-индексаторами -It's a common practice to use more than one indexer in dApps, so to achieve the ideal experience with The Graph, you can specify several `fetch` strategies in order to make it more smooth and simple. +Это обычная практика — использовать несколько индексаторов в децентрализованных приложениях (dApps), поэтому для достижения наилучшего опыта работы с The Graph Вы можете указать несколько стратегий `fetch`, чтобы сделать процесс более плавным и простым. -All `fetch` strategies can be combined to create the ultimate execution flow. +Все стратегии `fetch` можно комбинировать для создания идеального потока выполнения.
`retry` -The `retry` mechanism allow you to specify the retry attempts for a single GraphQL endpoint/source. +Механизм `retry` позволяет указать количество попыток повторного запроса для одной конечной точки/источника GraphQL. -The retry flow will execute in both conditions: a netword error, or due to a runtime error (indexing issue/inavailability of the indexer). +Механизм повторных попыток будет выполняться в обоих случаях: при ошибке сети или при ошибке выполнения (проблемы с индексированием/недоступность индексатора). ```yaml sources: @@ -243,7 +243,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - retry: 2 # specify here, if you have an unstable/error prone indexer + retry: 2 # укажите здесь, если у Вас нестабильный/подверженный ошибкам индексатор ```
@@ -251,7 +251,7 @@ sources:
`timeout` -The `timeout` mechanism allow you to specify the `timeout` for a given GraphQL endpoint. +Механизм `timeout` позволяет задать `timeout` для указанной конечной точки GraphQL. ```yaml sources: @@ -259,7 +259,7 @@ sources: handler: graphql: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 - timeout: 5000 # 5 seconds + timeout: 5000 # 5 секунд ```
@@ -267,9 +267,9 @@ sources:
`fallback` -The `fallback` mechanism allow you to specify use more than one GraphQL endpoint, for the same source. +Механизм `fallback` позволяет указать несколько конечных точек GraphQL для одного и того же источника. -This is useful if you want to use more than one indexer for the same Subgraph, and fallback when an error/timeout happens. You can also use this strategy in order to use a custom indexer, but allow it to fallback to [The Graph Hosted Service](https://thegraph.com/hosted-service). +Это полезно, если Вы хотите использовать более одного индексатора для одного и того же субграфа и переключаться на другой в случае ошибки или тайм-аута. Вы также можете применять эту стратегию, чтобы использовать кастомный индексатор, но при необходимости переключаться на [The Graph Hosted Service](https://thegraph.com/hosted-service). ```yaml sources: @@ -289,9 +289,9 @@ sources:
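Например, набросок конфигурации `fallback` с двумя конечными точками может выглядеть так (эндпоинты условные; имена полей `strategy` и `sources` предполагаются по документации graph-client, сверяйтесь с актуальной схемой конфигурации):

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        strategy: fallback
        sources:
          # условный нестабильный эндпоинт, используется первым
          - endpoint: https://unstable-indexer.example/uniswap-v2
            retry: 2
            timeout: 5000
          # запасной эндпоинт на случай ошибки или тайм-аута
          - endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
```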
`race` -The `race` mechanism allow you to specify use more than one GraphQL endpoint, for the same source, and race on every execution. +Механизм `race` позволяет указать несколько конечных точек GraphQL для одного источника данных, выполняя их конкурентный опрос при каждом запросе. -This is useful if you want to use more than one indexer for the same Subgraph, and allow both sources to race and get the fastest response from all specified indexers. +Это полезно, если Вы хотите использовать несколько индексаторов для одного субграфа и позволить им конкурировать за получение самого быстрого ответа от всех указанных индексаторов. ```yaml sources: @@ -308,10 +308,10 @@ sources:
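Набросок конфигурации `race` (эндпоинты условные; предполагается та же схема `strategy`/`sources`, что и у остальных стратегий):

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        strategy: race
        sources:
          # оба эндпоинта опрашиваются одновременно, используется самый быстрый ответ
          - endpoint: https://indexer-a.example/uniswap-v2
          - endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
```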
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. -This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. +Эта стратегия позволяет отправлять параллельные запросы к различным конечным точкам для одного и того же источника и выбирать наиболее актуальный ответ. + +Это полезно, если Вы хотите выбрать наиболее синхронизированные данные для одного субграфа среди нескольких индексаторов/источников. ```yaml sources: @@ -349,9 +349,9 @@ graph LR;
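Для `highestValue` задаётся поле, по которому сравниваются ответы, например номер последнего проиндексированного блока из `_meta`. Набросок (имена полей `strategyConfig` и `selectionSet` приведены как предположение, эндпоинты условные):

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        strategy: highestValue
        strategyConfig:
          # сравниваем ответы по номеру последнего проиндексированного блока
          selectionSet: |
            {
              _meta {
                block {
                  number
                }
              }
            }
        sources:
          - endpoint: https://indexer-a.example/uniswap-v2
          - endpoint: https://indexer-b.example/uniswap-v2
```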
-#### Block Tracking +#### Отслеживание блоков -The Graph Client can track block numbers and do the following queries by following [this pattern](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) with `blockTracking` transform; +Graph Client может отслеживать номера блоков и выполнять следующие запросы, следуя [этой схеме](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) с использованием преобразования `blockTracking`; ```yaml sources: @@ -361,23 +361,23 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - blockTracking: - # You might want to disable schema validation for faster startup + # Вы можете отключить проверку схемы для более быстрого старта validateSchema: true - # Ignore the fields that you don't want to be tracked + # Игнорируйте поля, которые вы не хотите отслеживать ignoreFieldNames: [users, prices] - # Exclude the operation with the following names + # Исключите операции с указанными именами ignoreOperationNames: [NotFollowed] ``` -[You can try a working example here](../examples/transforms) +[Здесь Вы можете попробовать рабочий пример](../examples/transforms) -#### Automatic Pagination +#### Автоматическая пагинация -With most subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. +Для большинства субграфов количество записей, которые Вы можете извлечь, ограничено. В этом случае Вам нужно отправить несколько запросов с пагинацией. 
```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 2000) { id name @@ -385,11 +385,11 @@ query { } ``` -So you have to send the following operations one after the other: +Таким образом, Вам нужно отправить следующие операции одну за другой: ```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 1000) { id name @@ -397,11 +397,11 @@ query { } ``` -Then after the first response: +Затем после первого ответа: ```graphql query { - # Will throw an error if the limit is 1000 + # Выдаст ошибку, если лимит равен 1000 users(first: 1000, skip: 1000) { id name @@ -409,9 +409,9 @@ query { } ``` -After the second response, you have to merge the results manually. But instead The Graph Client allows you to do the first one and automatically does those multiple requests for you under the hood. +После второго ответа Вам пришлось бы вручную объединять результаты. Однако Graph Client позволяет выполнить первый запрос, а затем в фоновом режиме обрабатывает все остальные. -All you have to do is: +Всё, что Вам нужно сделать, это: ```yaml sources: @@ -421,21 +421,21 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2 transforms: - autoPagination: - # You might want to disable schema validation for faster startup + # Вы можете отключить проверку схемы для более быстрого старта validateSchema: true ``` -[You can try a working example here](../examples/transforms) +[Здесь Вы можете попробовать рабочий пример](../examples/transforms) -#### Client-side Composition +#### Композиция на стороне клиента -The Graph Client has built-in support for client-side GraphQL Composition (powered by [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). 
+Graph Client имеет встроенную поддержку композиции GraphQL на стороне клиента (реализованную с помощью [GraphQL-Tools Schema-Stitching](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas)). -You can leverage this feature in order to create a single GraphQL layer from multiple Subgraphs, deployed on multiple indexers. +Вы можете использовать эту функцию для создания единого слоя GraphQL из нескольких субграфов, развернутых на нескольких индексаторах. -> 💡 Tip: You can compose any GraphQL sources, and not only Subgraphs! +> 💡 Совет: Вы можете комбинировать любые источники GraphQL, а не только субграфы! -Trivial composition can be done by adding more than one GraphQL source to your `.graphclientrc.yml` file, here's an example: +Тривиальную композицию можно выполнить, добавив более одного источника GraphQL в Ваш файл `.graphclientrc.yml`, вот пример: ```yaml sources: @@ -449,15 +449,15 @@ sources: endpoint: https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2 ``` -As long as there a no conflicts across the composed schemas, you can compose it, and then run a single query to both Subgraphs: +Пока нет конфликтов между объединёнными схемами, Вы можете их составлять, а затем выполнить один запрос ко всем субграфам: ```graphql query myQuery { - # this one is coming from compound-v2 + # этот запрос поступает от compound-v2 markets(first: 7) { borrowRate } - # this one is coming from uniswap-v2 + # этот запрос поступает от uniswap-v2 pair(id: "0x00004ee988665cdda9a1080d5792cecd16dc1220") { id token0 { @@ -470,33 +470,33 @@ query myQuery { } ``` -You can also resolve conflicts, rename parts of the schema, add custom GraphQL fields, and modify the entire execution phase. +Вы также можете разрешать конфликты, переименовывать части схемы, добавлять пользовательские поля GraphQL и изменять всю фазу выполнения. 
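Один из способов разрешить конфликт имён: переименовать типы одного из источников. Набросок с трансформацией `prefix` из GraphQL-Mesh (предполагается, что соответствующий пакет трансформации установлен; префикс условный):

```yaml
sources:
  - name: uniswapv2
    handler:
      graphql:
        endpoint: https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2
    transforms:
      # все типы этого источника получат префикс Uniswap_
      - prefix:
          value: Uniswap_
          includeRootOperations: true
```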
-For advanced use-cases with composition, please refer to the following resources: +Для сложных сценариев использования композиций обратитесь к следующим ресурсам: -- [Advanced Composition Example](../examples/composition) -- [GraphQL-Mesh Schema transformations](https://graphql-mesh.com/docs/transforms/transforms-introduction) -- [GraphQL-Tools Schema-Stitching documentation](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) +- [Пример сложной композиции](../examples/composition) +- [Преобразования схемы GraphQL-Mesh](https://graphql-mesh.com/docs/transforms/transforms-introduction) +- [Документация по объединению схем с помощью GraphQL-Tools](https://graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) -#### TypeScript Support +#### Поддержка TypeScript -If your project is written in TypeScript, you can leverage the power of [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) and have a fully-typed GraphQL client experience. +Если Ваш проект написан на TypeScript, Вы можете использовать возможности [`TypedDocumentNode`](https://the-guild.dev/blog/typed-document-node) и получить полностью типизированный опыт работы с GraphQL-клиентом. -The standalone mode of The GraphQL, and popular GraphQL client libraries like Apollo-Client and urql has built-in support for `TypedDocumentNode`! +Автономный режим The Graph Client, а также популярные библиотеки GraphQL-клиентов, такие как Apollo-Client и urql, имеют встроенную поддержку `TypedDocumentNode`! -The Graph Client CLI comes with a ready-to-use configuration for [GraphQL Code Generator](https://graphql-code-generator.com), and it can generate `TypedDocumentNode` based on your GraphQL operations. +CLI Graph Client поставляется с готовой конфигурацией для [GraphQL Code Generator](https://graphql-code-generator.com) и может генерировать `TypedDocumentNode` на основе Ваших GraphQL-операций.
-To get started, define your GraphQL operations in your application code, and point to those files using the `documents` section of `.graphclientrc.yml`: +Чтобы начать, определите Ваши GraphQL-операции в коде приложения и укажите пути к этим файлам в разделе `documents` файла `.graphclientrc.yml`: ```yaml sources: - - # ... your Subgraphs/GQL sources here + - # ... Ваши Субграфы/источники GQL здесь documents: - ./src/example-query.graphql ``` -You can also use Glob expressions, or even point to code files, and the CLI will find your GraphQL queries automatically: +Вы также можете использовать выражения Glob или даже указывать файлы кода, и CLI автоматически найдет Ваши GraphQL-запросы: ```yaml documents: @@ -504,37 +504,37 @@ documents: - './src/**/*.{ts,tsx,js,jsx}' ``` -Now, run the GraphQL CLI `build` command again, the CLI will generate a `TypedDocumentNode` object under `.graphclient` for every operation found. +Теперь снова выполните команду `build` в GraphQL CLI, и CLI сгенерирует объект `TypedDocumentNode` в `.graphclient` для каждой найденной операции. -> Make sure to name your GraphQL operations, otherwise it will be ignored! +> Обязательно давайте имена Вашим GraphQL-операциям, иначе они будут проигнорированы! -For example, a query called `query ExampleQuery` will have the corresponding `ExampleQueryDocument` generated in `.graphclient`. You can now import it and use that for your GraphQL calls, and you'll have a fully typed experience without writing or specifying any TypeScript manually: +Например, для запроса с именем `query ExampleQuery` будет сгенерирован соответствующий `ExampleQueryDocument` в `.graphclient`. 
Теперь Вы можете импортировать его и использовать для GraphQL-запросов, получая полностью типизированный опыт без необходимости вручную писать или указывать TypeScript: ```ts import { ExampleQueryDocument, execute } from '../.graphclient' async function main() { - // "result" variable is fully typed, and represents the exact structure of the fields you selected in your query. + // переменная "result" полностью типизирована и представляет точную структуру полей, которые Вы выбрали в своем запросе. const result = await execute(ExampleQueryDocument, {}) console.log(result) } ``` -> You can find a [TypeScript project example here](../examples/urql). +> Вы можете найти [пример проекта на TypeScript здесь](../examples/urql). -#### Client-Side Mutations +#### Мутации на стороне клиента -Due to the nature of Graph-Client setup, it is possible to add client-side schema, that you can later bridge to run any arbitrary code. +Из-за особенностей настройки Graph-Client возможно добавление схемы на стороне клиента, которую затем можно использовать для выполнения произвольного кода. -This is helpful since you can implement custom code as part of your GraphQL schema, and have it as unified application schema that is easier to track and develop. +Это полезно, потому что Вы можете реализовать пользовательский код как часть своей схемы GraphQL и использовать его как единую схему приложения, что облегчает отслеживание и разработку. -> This document explains how to add custom mutations, but in fact you can add any GraphQL operation (query/mutation/subscriptions). See [Extending the unified schema article](https://graphql-mesh.com/docs/guides/extending-unified-schema) for more information about this feature. +> Этот документ объясняет, как добавить пользовательские мутации, но на самом деле Вы можете добавить любую операцию GraphQL (запросы/мутации/подписки). Для получения дополнительной информации о данной функции см. 
статью [Расширение единой схемы](https://graphql-mesh.com/docs/guides/extending-unified-schema). -To get started, define a `additionalTypeDefs` section in your config file: +Чтобы начать, определите раздел `additionalTypeDefs` в Вашем конфигурационном файле: ```yaml additionalTypeDefs: | - # We should define the missing `Mutation` type + # Мы должны определить отсутствующий тип `Mutation` extend schema { mutation: Mutation } @@ -548,21 +548,21 @@ additionalTypeDefs: | } ``` -Then, add a pointer to a custom GraphQL resolvers file: +Затем добавьте указатель на файл с пользовательскими GraphQL-ресолверами: ```yaml additionalResolvers: - './resolvers' ``` -Now, create `resolver.js` (or, `resolvers.ts`) in your project, and implement your custom mutation: +Теперь создайте файл `resolver.js` (или `resolvers.ts`) в своем проекте и внедрите свою пользовательскую мутацию: ```js module.exports = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Здесь Вы можете выполнить все, что хотите. + // Например, использовать библиотеку `web3`, подключить кошелек и так далее. return true }, @@ -570,17 +570,17 @@ module.exports = { } ``` -If you are using TypeScript, you can also get fully type-safe signature by doing: +Если Вы используете TypeScript, Вы также можете получить полностью типобезопасную сигнатуру следующим образом: ```ts import { Resolvers } from './.graphclient' -// Now it's fully typed! +// Теперь всё полностью типизировано! const resolvers: Resolvers = { Mutation: { async doSomething(root, args, context, info) { - // Here, you can run anything you wish. - // For example, use `web3` lib, connect a wallet and so on. + // Здесь Вы можете выполнить любые операции, которые хотите. + // Например, использовать библиотеку `web3`, подключить кошелек и так далее. 
return true }, @@ -590,22 +590,22 @@ const resolvers: Resolvers = { export default resolvers ``` -If you need to inject runtime variables into your GraphQL execution `context`, you can use the following snippet: +Если Вам нужно внедрить переменные времени выполнения в Ваш `context` выполнения GraphQL, вы можете использовать следующий сниппет: ```ts execute( MY_QUERY, {}, { - myHelper: {}, // this will be available in your Mutation resolver as `context.myHelper` + myHelper: {}, // это будет доступно в Вашем ресолвере мутации как `context.myHelper` }, ) ``` -> [You can read more about client-side schema extensions here](https://graphql-mesh.com/docs/guides/extending-unified-schema) +> [Вы можете прочитать больше о расширениях схемы на стороне клиента здесь](https://graphql-mesh.com/docs/guides/extending-unified-schema) -> [You can also delegate and call Query fields as part of your mutation](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) +> [Вы также можете делегировать и вызывать поля Query в рамках Вашей мутации](https://graphql-mesh.com/docs/guides/extending-unified-schema#using-the-sdk-to-fetch-sources) -## License +## Лицензия -Released under the [MIT license](../LICENSE). +Выпущена под [лицензией MIT](../LICENSE). 
diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json b/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json index ee554b4ac36f..a71a02842b68 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json +++ b/website/src/pages/ru/subgraphs/querying/graph-client/_meta-titles.json @@ -1,3 +1,3 @@ { - "README": "Introduction" + "README": "Введение" } diff --git a/website/src/pages/ru/subgraphs/querying/graph-client/live.md b/website/src/pages/ru/subgraphs/querying/graph-client/live.md index e6f726cb4352..da0133b7a768 100644 --- a/website/src/pages/ru/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/ru/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Начало работы Start by adding the following configuration to your `.graphclientrc.yml` file: @@ -12,7 +12,7 @@ plugins: defaultInterval: 1000 ``` -## Usage +## Применение Set the default update interval you wish to use, and then you can apply the following GraphQL `@directive` over your GraphQL queries: diff --git a/website/src/pages/ru/subgraphs/querying/graphql-api.mdx b/website/src/pages/ru/subgraphs/querying/graphql-api.mdx index cf058623eacf..aa09db4d8ab8 100644 --- a/website/src/pages/ru/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/ru/subgraphs/querying/graphql-api.mdx @@ -2,23 +2,23 @@ title: API GraphQL --- -Learn about the GraphQL Query API used in The Graph. +Узнайте о GraphQL API запросах, используемых в The Graph. ## Что такое GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. 
The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). -## Queries with GraphQL +## Запросы с GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. -> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. +> Примечание: `query` не нужно указывать в начале `graphql` запроса при использовании The Graph. ### Примеры -Query for a single `Token` entity defined in your schema: +Запрос для одного объекта `Token`, определенного в Вашей схеме: ```graphql { @@ -29,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> Note: When querying for a single entity, the `id` field is required, and it must be written as a string. +> Примечание: При запросе одного объекта поле `id` является обязательным и должно быть записано как строка. -Query all `Token` entities: +Запрос всех объектов `Token`: ```graphql { @@ -44,10 +44,10 @@ Query all `Token` entities: ### Сортировка -When querying a collection, you may: +При запросе коллекции Вы можете: -- Use the `orderBy` parameter to sort by a specific attribute. -- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. +- Использовать параметр `orderBy` для сортировки по определенному атрибуту. +- Использовать параметр `orderDirection`, чтобы указать направление сортировки `asc` для возрастания или `desc` для убывания. 
#### Пример @@ -62,9 +62,9 @@ When querying a collection, you may: #### Пример сортировки вложенных объектов -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Начиная с Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), объекты можно сортировать на основе вложенных объектов. -The following example shows tokens sorted by the name of their owner: +В следующем примере мы сортируем токены по имени их владельца: ```graphql { @@ -77,18 +77,18 @@ The following example shows tokens sorted by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> В настоящее время сортировка возможна по одноуровневым полям типа `String` или `ID`, в полях `@entity` и `@derivedFrom`. К сожалению, [сортировка по интерфейсам в одноуровневых объектах](https://github.com/graphprotocol/graph-node/pull/4058), сортировка по полям-массивам и вложенным объектам пока не поддерживается. ### Пагинация -When querying a collection, it's best to: +При запросе коллекции лучше всего: -- Use the `first` parameter to paginate from the beginning of the collection. - - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. -- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. -- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. 
+- Использовать параметр `first` для пагинации данных с начала коллекции. + - Стандартная сортировка выполняется по `ID` в возрастающем алфавитно-числовом порядке, **не** по времени создания. +- Использовать параметр `skip`, чтобы пропускать объекты и осуществлять пагинацию. Например, `first:100` покажет первые 100 объектов, а `first:100, skip:100` покажет следующие 100 объектов. +- Избегайте использования `skip` в запросах, так как это обычно приводит к низкой производительности. Для получения большого количества элементов лучше выполнять постраничную загрузку объектов на основе атрибута, как показано в предыдущем примере. -#### Example using `first` +#### Пример использования `first` Запрос первых 10 токенов: @@ -101,11 +101,11 @@ When querying a collection, it's best to: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Чтобы запросить группы объектов в середине коллекции, параметр `skip` можно использовать в сочетании с параметром `first`, чтобы пропустить указанное количество объектов, начиная с начала коллекции. -#### Example using `first` and `skip` +#### Пример использования `first` и `skip` -Query 10 `Token` entities, offset by 10 places from the beginning of the collection: +Запрос 10 объектов `Token`, смещенных на 10 позиций от начала коллекции: ```graphql { @@ -116,9 +116,9 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect } ``` -#### Example using `first` and `id_ge` +#### Пример использования `first` и `id_ge` -If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. 
For example, a client could retrieve a large number of tokens using this query: +Если клиенту нужно получить большое количество объектов, эффективнее выполнять запросы на основе атрибута и фильтровать по этому атрибуту. Например, клиент может получить большое количество токенов с помощью следующего запроса: ```graphql query manyTokens($lastID: String) { @@ -129,16 +129,16 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +В первый раз запрос отправляется с `lastID = ""`, а в последующих запросах `lastID` устанавливается в значение атрибута `id` последнего объекта из предыдущего запроса. Этот подход значительно эффективнее, чем использование увеличивающихся значений `skip`. ### Фильтрация -- You can use the `where` parameter in your queries to filter for different properties. -- You can filter on multiple values within the `where` parameter. +- Вы можете использовать параметр `where` в запросах для фильтрации по различным свойствам. +- Вы можете фильтровать по нескольким значениям внутри параметра `where`. -#### Example using `where` +#### Пример использования `where` -Query challenges with `failed` outcome: +Запрос задач с результатом `failed`: ```graphql { @@ -152,7 +152,7 @@ Query challenges with `failed` outcome: } ``` -You can use suffixes like `_gt`, `_lte` for value comparison: +Вы можете использовать такие суффиксы, как `_gt`, `_lte` для сравнения значений: #### Пример фильтрации диапазона @@ -168,9 +168,9 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Пример фильтрации блока -You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. 
+Вы также можете фильтровать объекты, которые были обновлены на указанном блоке или позже, с помощью `_change_block(number_gte: Int)`. -Это может быть полезно, если Вы хотите получить только объекты, которые изменились, например, с момента последнего опроса. Или, в качестве альтернативы, может быть полезно исследовать или отладить изменнения объектов в Вашем субграфе (в сочетании с фильтрацией блоков Вы можете изолировать только объекты, которые изменились в определенном блоке). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -184,7 +184,7 @@ You can also filter entities that were updated in or after a specified block wit #### Пример фильтрации вложенных объектов -Filtering on the basis of nested entities is possible in the fields with the `_` suffix. +Фильтрация на основе вложенных объектов возможна в полях с суффиксом `_`. Это может быть полезно, если Вы хотите получать только объекты, у которых объекты дочернего уровня удовлетворяют заданным условиям. @@ -202,11 +202,11 @@ Filtering on the basis of nested entities is possible in the fields with the `_` #### Логические операторы -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) you can group multiple parameters in the same `where` argument using the `and` or the `or` operators to filter results based on more than one criteria. +Начиная с Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), Вы можете группировать несколько параметров в одном аргументе `where`, используя операторы `and` или `or` для фильтрации результатов по нескольким критериям. 
-##### `AND` Operator +##### Оператор `AND` -The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +Следующий пример фильтрует задачи с `outcome` `succeeded` и `number` больше или равно `100`. ```graphql { @@ -220,7 +220,7 @@ The following example filters for challenges with `outcome` `succeeded` and `num } ``` -> **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. +> **Синтаксический сахар:** Вы можете упростить приведенный выше запрос, убрав оператор `and` и передав подвыражение, разделенное запятыми. > > ```graphql > { @@ -234,9 +234,9 @@ The following example filters for challenges with `outcome` `succeeded` and `num > } > ``` -##### `OR` Operator +##### Оператор `OR` -The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +Следующий пример фильтрует задачи с `outcome` `succeeded` или `number` больше или равно `100`. ```graphql { @@ -250,7 +250,7 @@ The following example filters for challenges with `outcome` `succeeded` or `numb } ``` -> **Note**: When constructing queries, it is important to consider the performance impact of using the `or` operator. While `or` can be a useful tool for broadening search results, it can also have significant costs. One of the main issues with `or` is that it can cause queries to slow down. This is because `or` requires the database to scan through multiple indexes, which can be a time-consuming process. To avoid these issues, it is recommended that developers use and operators instead of or whenever possible. This allows for more precise filtering and can lead to faster, more accurate queries. +> **Примечание**: При составлении запросов важно учитывать влияние оператора `or` на производительность. Хотя `or` может быть полезным инструментом для расширения результатов поиска, он также может значительно замедлить запросы. 
Основная проблема в том, что `or` заставляет базу данных сканировать несколько индексов, что может быть ресурсоемким процессом. Чтобы избежать этих проблем, рекомендуется по возможности использовать оператор `and` вместо `or`. Это позволяет выполнять более точную фильтрацию и делает запросы быстрее и эффективнее. #### Все фильтры @@ -279,9 +279,9 @@ _not_ends_with _not_ends_with_nocase ``` -> Please note that some suffixes are only supported for specific types. For example, `Boolean` only supports `_not`, `_in`, and `_not_in`, but `_` is available only for object and interface types. +> Обратите внимание, что некоторые суффиксы поддерживаются только для определенных типов. Например, `Boolean` поддерживает только `_not`, `_in` и `_not_in`, тогда как `_` доступен только для объектных и интерфейсных типов. -In addition, the following global filters are available as part of `where` argument: +Кроме того, в качестве части аргумента `where` доступны следующие глобальные фильтры: ```graphql _change_block(number_gte: Int) @@ -289,11 +289,11 @@ _change_block(number_gte: Int) ### Запросы на Time-travel -You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. +Вы можете запрашивать состояние своих объектов не только для последнего блока, который используется по умолчанию, но и для произвольного блока в прошлом. Блок, в котором должен выполняться запрос, можно указать либо по номеру блока, либо по его хэшу, включив аргумент `block` в поля верхнего уровня запросов. 
-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+Результат такого запроса не изменится со временем, то есть запрос на определенном прошедшем блоке вернет тот же результат, независимо от времени выполнения, за исключением случая, когда запрос выполняется на блоке, который находится очень близко к вершине чейна. В этом случае результат может измениться, если этот блок окажется **не** на основном чейне, и чейн будет реорганизован. Как только блок можно будет считать окончательным, результат запроса больше не изменится.

-> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+> Примечание: Текущая реализация все еще подвержена определенным ограничениям, которые могут нарушить эти гарантии. Реализация не всегда может точно определить, что данный хэш блока вообще не находится на основном чейне, или что результат запроса по хэшу блока для блока, который еще не считается окончательным, может быть изменен из-за реорганизации блоков, происходящей одновременно с запросом.
Эти ограничения не влияют на результаты запросов по хэшу блока, если блок окончателен и подтвержден на основном чейне. [Эта проблема](https://github.com/graphprotocol/graph-node/issues/1405) подробно объясняет, в чем состоят эти ограничения.

#### Пример

@@ -309,7 +309,7 @@ The result of such a query will not change over time, i.e., querying at a certai
 }
 ```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing block number 8,000,000.
+Этот запрос вернет объекты `Challenge` и связанные с ними объекты `Application` в том виде, в каком они существовали сразу после обработки блока номер 8 000 000.

#### Пример

@@ -325,26 +325,26 @@ This query will return `Challenge` entities, and their associated `Application`
 }
 ```

-This query will return `Challenge` entities, and their associated `Application` entities, as they existed directly after processing the block with the given hash.
+Этот запрос вернет объекты `Challenge` и связанные с ними объекты `Application` в том виде, в каком они существовали сразу после обработки блока с заданным хешем.

### Полнотекстовые поисковые запросы

-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.

-Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
+Запросы полнотекстового поиска имеют одно обязательное поле, `text`, для предоставления поисковых запросов.
В этом поле поиска `text` можно использовать несколько специальных операторов полнотекстового поиска.

Полнотекстовые поисковые операторы:

-| Символ | Оператор | Описание |
-| --- | --- | --- |
-| `&` | `And` | Для объединения нескольких условий поиска в фильтр для объектов, которые включают все указанные условия |
-| | | `Or` | Запросы с несколькими условиями поиска, разделенные оператором or, вернут все объекты, которые соответствуют любому из предоставленных условий |
-| `<->` | `Follow by` | Укажите расстояние между двумя словами. |
-| `:*` | `Prefix` | Используйте поисковый запрос по префиксу, чтобы найти слова с соответствующим префиксом (необходимо 2 символа) |
+| Символ | Оператор | Описание |
+| ------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| `&` | `And` | Для объединения нескольких условий поиска в фильтр для объектов, которые включают все указанные условия |
+| &#x7c; | `Or` | Запросы с несколькими условиями поиска, разделенные оператором or, вернут все объекты, которые соответствуют любому из предоставленных условий |
+| `<->` | `Follow by` | Укажите расстояние между двумя словами. |
+| `:*` | `Prefix` | Используйте поисковый запрос по префиксу, чтобы найти слова с соответствующим префиксом (необходимо 2 символа) |

#### Примеры

-Using the `or` operator, this query will filter to blog entities with variations of either "anarchism" or "crumpet" in their fulltext fields.
+Используя оператор `or`, этот запрос отфильтрует объекты блога, содержащие варианты слов "anarchism" или "crumpet" в их полнотекстовых полях.

```graphql
{
  blogSearch(text: "anarchism | crumpet") {
    id
    title
    body
    author
  }
}
```

-The `follow by` operator specifies a words a specific distance apart in the fulltext documents.
The following query will return all blogs with variations of "decentralize" followed by "philosophy" +Оператор `follow by` определяет слова, находящиеся на определённом расстоянии друг от друга в полнотекстовых документах. Следующий запрос вернёт все блоги, содержащие варианты слова "decentralize", за которым следует "philosophy" ```graphql { @@ -385,25 +385,25 @@ The `follow by` operator specifies a words a specific distance apart in the full ### Валидация -Graph Node implements [specification-based](https://spec.graphql.org/October2021/#sec-Validation) validation of the GraphQL queries it receives using [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), which is based on the [graphql-js reference implementation](https://github.com/graphql/graphql-js/tree/main/src/validation). Queries which fail a validation rule do so with a standard error - visit the [GraphQL spec](https://spec.graphql.org/October2021/#sec-Validation) to learn more. +Graph Node реализует валидацию [на основе спецификации](https://spec.graphql.org/October2021/#sec-Validation) для получаемых GraphQL-запросов с использованием [graphql-tools-rs](https://github.com/dotansimha/graphql-tools-rs#validation-rules), которая основана на [референсной реализации graphql-js](https://github.com/graphql/graphql-js/tree/main/src/validation). Запросы, не прошедшие проверку валидации, завершаются стандартной ошибкой. Ознакомьтесь со [спецификацией GraphQL](https://spec.graphql.org/October2021/#sec-Validation), чтобы узнать больше. ## Схема -The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). 
+Схема Ваших источников данных, то есть типы объектов, значения и связи, доступные для запросов, определяется с помощью [Языка определения интерфейсов GraphQL (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

-> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Примечание: Наш API не предоставляет мутации, поскольку ожидается, что разработчики будут отправлять транзакции напрямую в базовый блокчейн из своих приложений.

### Объекты

-All GraphQL types with `@entity` directives in your schema will be treated as entities and must have an `ID` field.
+Все типы GraphQL с директивами `@entity` в Вашей схеме будут рассматриваться как объекты и должны содержать поле `ID`.

-> **Note:** Currently, all types in your schema must have an `@entity` directive. In the future, we will treat types without an `@entity` directive as value objects, but this is not yet supported.
+> **Примечание:** В настоящее время все типы в Вашей схеме должны иметь директиву `@entity`. В будущем мы будем рассматривать типы без директивы `@entity` как объекты значений, но на данный момент это не поддерживается.

### Метаданные субграфа

-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata.
This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,14 +419,14 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Если предоставлен блок, метаданные относятся к этому блоку, в противном случае используется последний проиндексированный блок. Если предоставляется блок, он должен быть после начального блока субграфа и меньше или равен последнему проиндексированному блоку. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. -`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. +`deployment` — это уникальный идентификатор, соответствующий IPFS CID файла `subgraph.yaml`. -`block` provides information about the latest block (taking into account any block constraints passed to `_meta`): +`block` предоставляет информацию о последнем блоке (с учетом любых ограничений блоков, переданных в `_meta`): - hash: хэш блока - number: номер блока -- timestamp: временная метка блока, если она доступна (в настоящее время доступна только для субграфов, индексирующих сети EVM) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/ru/subgraphs/querying/introduction.mdx b/website/src/pages/ru/subgraphs/querying/introduction.mdx index d28d11fa28e6..d7cc8fa082c3 100644 --- a/website/src/pages/ru/subgraphs/querying/introduction.mdx +++ b/website/src/pages/ru/subgraphs/querying/introduction.mdx @@ 
-1,32 +1,32 @@ --- title: Запрос The Graph -sidebarTitle: Introduction +sidebarTitle: Введение --- -To start querying right away, visit [The Graph Explorer](https://thegraph.com/explorer). +Чтобы сразу приступить к запросу, посетите [The Graph Explorer](https://thegraph.com/explorer). ## Обзор -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Специфические особенности -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. -![Query Subgraph Button](/img/query-button-screenshot.png) +![Кнопка запроса субграфа](/img/query-button-screenshot.png) -![Query Subgraph URL](/img/query-url-screenshot.png) +![Запрос URL субграфа](/img/query-url-screenshot.png) -You will notice that this query URL must use a unique API key. You can create and manage your API keys in [Subgraph Studio](https://thegraph.com/studio), under the "API Keys" section. Learn more about how to use Subgraph Studio [here](/deploying/subgraph-studio/). +Как Вы можете заметить, этот URL-адрес запроса должен использовать уникальный API-ключ. Вы можете создавать и управлять своими API-ключами в [Subgraph Studio](https://thegraph.com/studio) в разделе "API-ключи". Узнайте больше о том, как использовать Subgraph Studio [здесь](/deploying/subgraph-studio/). 
-Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
+Пользователи Subgraph Studio начинают с Бесплатного плана, который позволяет делать 100 000 запросов в месяц. Дополнительные запросы доступны в рамках Плана роста, который предлагает оплату по мере использования за дополнительные запросы, кредитной картой или в GRT в сети Arbitrum. Подробнее о тарифах можно узнать [здесь](/subgraphs/billing/).

-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
>
-> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
+> Примечание: Если Вы столкнулись с ошибками 405 при выполнении GET-запроса к URL Graph Explorer, попробуйте использовать POST-запрос.

### Дополнительные ресурсы

-- Use [GraphQL querying best practices](/subgraphs/querying/best-practices/).
-- To query from an application, click [here](/subgraphs/querying/from-an-application/).
-- View [querying examples](https://github.com/graphprotocol/query-examples/tree/main).
+- Используйте [лучшие практики выполнения запросов GraphQL](/subgraphs/querying/best-practices/).
+- Чтобы выполнить запрос из приложения, нажмите [здесь](/subgraphs/querying/from-an-application/).
+- Посмотреть [примеры запросов](https://github.com/graphprotocol/query-examples/tree/main).
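Building on the POST-over-GET note above, the shape of such a request can be sketched in Python. This is illustrative only: the API key is a placeholder, the Subgraph ID is borrowed from examples elsewhere in this changeset, and `build_post_request` is a hypothetical helper, not part of any official SDK.

```python
import json

# Endpoint shape from this documentation's examples; the [api-key] placeholder
# is expressed as a format field for illustration.
ENDPOINT = "https://gateway-arbitrum.network.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

def build_post_request(api_key, subgraph_id, query, variables=None):
    """Assemble the parts of a GraphQL-over-HTTP POST request."""
    return {
        "url": ENDPOINT.format(api_key=api_key, subgraph_id=subgraph_id),
        "method": "POST",  # use POST if a GET to the gateway URL returns 405
        "headers": {"Content-Type": "application/json"},
        # The query document travels in the JSON body, not in the URL.
        "body": json.dumps({"query": query, "variables": variables or {}}),
    }

req = build_post_request(
    "YOUR_API_KEY",  # placeholder, not a real key
    "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW",
    "{ _meta { block { number } } }",
)
print(req["method"])  # POST
```

Any HTTP client can then send `body` to `url` with the given headers; the response is standard GraphQL JSON with `data` and, on failure, `errors`.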
diff --git a/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx index 002aa22be689..b9a52472b66b 100644 --- a/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/ru/subgraphs/querying/managing-api-keys.mdx @@ -1,34 +1,34 @@ --- -title: Управление вашими ключами API +title: Managing API keys --- ## Обзор -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. -### Create and Manage API Keys +### Создание и управление API ключами -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. -The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. +В таблице "API keys" перечислены существующие ключи API и Вы можете управлять ими или удалять их. Для каждого ключа, Вы можете увидеть его статус, стоимость за текущий период, лимит расходов за текущий период и общее количество запросов. -You can click the "three dots" menu to the right of a given API key to: +Вы можете нажать на меню "три точки" справа от заданного ключа API, чтобы: -- Rename API key -- Regenerate API key -- Delete API key -- Manage spending limit: this is an optional monthly spending limit for a given API key, in USD. 
This limit is per billing period (calendar month).
+- Переименовать API ключ
+- Повторно сгенерировать API ключ
+- Удалить API ключ
+- Управлять лимитом расходов: это необязательный лимит ежемесячных расходов для данного API ключа в USD. Этот лимит действует в течение расчетного периода (календарного месяца).

-### API Key Details
+### Детали API ключа

-You can click on an individual API key to view the Details page:
+Вы можете нажать на отдельный ключ API, чтобы перейти на страницу с подробной информацией:

-1. Under the **Overview** section, you can:
+1. В разделе **Обзор** можно:
   - Отредактируйте свое ключевое имя
   - Регенерировать ключи API
   - Просмотр текущего использования ключа API со статистикой:
     - Количество запросов
     - Количество потраченных GRT
-2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
+2. В разделе **Безопасность** Вы можете выбрать параметры безопасности в зависимости от необходимого Вам уровня контроля. А именно:
   - Просматривайте доменные имена, авторизованные для использования вашего API-ключа, и управляйте ими
-  - Назначьте субграфы, которые могут быть запрошены с помощью вашего API-ключа
+  - Assign Subgraphs that can be queried with your API key

diff --git a/website/src/pages/ru/subgraphs/querying/python.mdx b/website/src/pages/ru/subgraphs/querying/python.mdx
index b450ba9276de..f2e0b317b482 100644
--- a/website/src/pages/ru/subgraphs/querying/python.mdx
+++ b/website/src/pages/ru/subgraphs/querying/python.mdx
@@ -1,11 +1,11 @@
---
-title: Query The Graph with Python and Subgrounds
+title: Запросы к The Graph с использованием Python и Subgrounds
sidebarTitle: Python (Subgrounds)
---

-Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! -Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. +Subgrounds предлагает простой Pythonic API для создания GraphQL-запросов, автоматизирует утомительные рабочие процессы, такие как пагинация, и предоставляет расширенные возможности для опытных пользователей через управляемые преобразования схем. ## Начало работы @@ -13,18 +13,18 @@ Subgrounds requires Python 3.10 or higher and is available on [pypi](https://pyp ```bash pip install --upgrade subgrounds -# or +# или python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). 
```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") @@ -54,4 +54,4 @@ Since subgrounds has a large feature set to explore, here are some helpful start - [Concurrent Queries](https://docs.playgrounds.network/subgrounds/getting_started/async/) - Learn how to level up your queries by parallelizing them. - [Exporting Data to CSVs](https://docs.playgrounds.network/subgrounds/faq/exporting/) - - A quick article on how to seamlessly save your data as CSVs for further analysis. + - Краткая статья о том, как легко сохранять данные в формате CSV для дальнейшего анализа. diff --git a/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..b697d9cfd5e6 100644 --- a/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/ru/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -1,27 +1,27 @@ --- -title: Subgraph ID vs Deployment ID +title: Идентификатор субграфа vs Идентификатор развертывания --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +Субграф идентифицируется с помощью идентификатора субграфа, а каждая его версия — с помощью идентификатора развертывания. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +При выполнении запроса к субграфу можно использовать любой из идентификаторов, но обычно рекомендуется использовать идентификатор развертывания, так как он позволяет указать конкретную версию субграфа. 
-Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) +Вот некоторые ключевые различия между двумя ID: ![](/img/subgraph-id-vs-deployment-id.png) -## Deployment ID +## Идентификатор развертывания -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +Идентификатор развертывания — это IPFS-хеш скомпилированного файла манифеста, который ссылается на другие файлы в IPFS вместо относительных URL на компьютере. Например, скомпилированный манифест можно открыть по ссылке: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. Чтобы изменить идентификатор развертывания, можно просто обновить файл манифеста, например, изменив поле description, как описано в [документации манифеста субграфа](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +Когда запросы выполняются с использованием идентификатора развертывания субграфа, мы указываем конкретную версию этого субграфа для запроса. 
Использование идентификатора развертывания для запроса определённой версии субграфа обеспечивает более продвинутую и надежную настройку, так как даёт полный контроль над версией субграфа, к которой выполняется запрос. Однако это приводит к необходимости вручную обновлять код запроса каждый раз при публикации новой версии субграфа. -Example endpoint that uses Deployment ID: +Пример конечной точки, использующей идентификатор развертывания: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/deployments/id/QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB` -## Subgraph ID +## Идентификатор субграфа -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +Идентификатор субграфа — это уникальный идентификатор для субграфа. Он остаётся постоянным для всех версий субграфа. Рекомендуется использовать идентификатор субграфа для запроса последней версии субграфа, хотя существуют некоторые особенности. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Имейте в виду, что запросы с использованием идентификатора субграфа могут привести к тому, что на запрос будет отвечать старая версия субграфа, так как новой версии может потребоваться время для синхронизации. Также новые версии могут вводить изменения в схеме, которые являются несовместимыми с предыдущими версиями. 
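The difference between the two endpoint shapes discussed on this page can be made explicit with a tiny helper. The gateway base URL and both IDs are taken from this page's examples; the function itself is hypothetical.

```python
# Gateway base URL as shown in this page's example endpoints.
BASE = "https://gateway-arbitrum.network.thegraph.com/api/[api-key]"

def query_url(id_value, pin_deployment):
    # A Deployment ID pins one immutable version of a Subgraph; a Subgraph ID
    # follows the latest published version (with the sync caveats noted above).
    path = "deployments/id" if pin_deployment else "subgraphs/id"
    return f"{BASE}/{path}/{id_value}"

print(query_url("QmfYaVdSSekUeK6expfm47tP8adg3NNdEGnVExqswsSwaB", True))
print(query_url("FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW", False))
```

Pinning a deployment gives reproducible results at the cost of manually updating the ID on every new version; the Subgraph ID form needs no updates but may briefly serve a still-syncing version.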
-Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
+Пример конечной точки, использующей идентификатор субграфа: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`

diff --git a/website/src/pages/ru/subgraphs/quick-start.mdx b/website/src/pages/ru/subgraphs/quick-start.mdx
index a8113aa22586..c676f1cf698d 100644
--- a/website/src/pages/ru/subgraphs/quick-start.mdx
+++ b/website/src/pages/ru/subgraphs/quick-start.mdx
@@ -2,22 +2,22 @@
title: Быстрый старт
---

-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Узнайте, как легко создать и опубликовать [Субграф](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) на The Graph, а затем выполнять к нему запросы.

-## Prerequisites
+## Предварительные требования

- Криптовалютный кошелек
-- A smart contract address on a [supported network](/supported-networks/)
-- [Node.js](https://nodejs.org/) installed
-- A package manager of your choice (`npm`, `yarn` or `pnpm`)
+- Адрес смарт-контракта в [поддерживаемой сети](/supported-networks/)
+- Установленный [Node.js](https://nodejs.org/)
+- Менеджер пакетов на Ваш выбор (`npm`, `yarn` или `pnpm`)

-## How to Build a Subgraph
+## Как создать субграф

-### 1. Create a subgraph in Subgraph Studio
+### 1. Создайте Субграф в Subgraph Studio

Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и подключите свой кошелек.

-Subgraph Studio позволяет создавать, управлять, развертывать и публиковать субграфы, а также создавать и управлять API-ключами.
+Subgraph Studio позволяет создавать, развертывать и публиковать Субграфы и управлять ими, а также создавать API-ключи и управлять ими.

Нажмите "Создать субграф". Рекомендуется называть субграф с использованием Заглавного регистра: "Subgraph Name Chain Name".
@@ -37,56 +37,56 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -### 3. Инициализация Вашего cубграфа +### 3. Инициализируйте ваш субграф -> Вы можете найти команды для своего конкретного субграфа на странице субграфа в [Subgraph Studio](https://thegraph.com/studio/). +> Вы можете найти команды для вашего конкретного субграфа на странице субграфа в [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +Команда `graph init` автоматически создаст каркас субграфа на основе событий вашего контракта. -Следующая команда инициализирует ваш субграф из существующего контракта: +Следующая команда инициализирует ваш субграф на основе существующего контракта: ```sh graph init ``` -If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. +Если Ваш контракт верифицирован на соответствующем блоксканере, где он развернут (например, [Etherscan](https://etherscan.io/)), то ABI будет автоматически создан в CLI. -При инициализации субграфа CLI запросит у Вас следующую информацию: +Когда вы инициализируете свой субграф, CLI запросит у вас следующую информацию: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. -- **Contract address**: Locate the smart contract address you’d like to query data from. -- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. 
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. -- **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. -- **Add another contract** (optional): You can add another contract. +- **Протокол**: Выберите протокол, данные из которого будет индексировать ваш субграф. +- **Слаг субграфа**: Создайте имя для вашего субграфа. Слаг субграфа — это идентификатор для вашего субграфа. +- **Каталог**: Выберите каталог, в котором будет создан ваш субграф. +- **Сеть Ethereum** (необязательно): Вам может понадобиться указать, из какой совместимой с EVM сети ваш субграф будет индексировать данные. +- **Адрес контракта**: Найдите адрес смарт-контракта, из которого Вы хотите запрашивать данные. +- **ABI**: Если ABI не заполнен автоматически, Вам придется ввести его вручную в формате JSON. +- **Начальный блок**: Вы должны ввести начальный блок для оптимизации индексирования данных субграфа. Найдите начальный блок, определив блок, в котором был развернут ваш контракт. +- **Имя контракта**: Введите имя Вашего контракта. +- **Индексирование событий контракта как объектов**: Рекомендуется установить это значение в "true", так как это автоматически добавит мэппинги для каждого сгенерированного события в ваш субграф. +- **Добавление еще одного контракта (опционально)**: Вы можете добавить еще один контракт. -На следующем скриншоте показан пример того, чего следует ожидать при инициализации субграфа: +Вот скриншот, который демонстрирует, чего ожидать при инициализации вашего субграфа: -![Subgraph command](/img/CLI-Example.png) +![Команда субграфа](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. 
Редактирование вашего субграфа -Команда `init` на предыдущем шаге создает шаблон субграфа, который Вы можете использовать в качестве отправной точки для его разработки. +Команда `init` на предыдущем шаге создает скелет субграфа, который вы можете использовать в качестве отправной точки для создания вашего субграфа. -При внесении изменений в субграф Вы будете работать в основном с тремя файлами: +При внесении изменений в субграф вы будете в основном работать с тремя файлами: -- Манифест (`subgraph.yaml`) — определяет, какие источники данных Ваш субграф будет индексировать. -- Схема (`schema.graphql`) - Схема GraphQL определяет, какие данные Вы хотите извлечь из субграфа. +- Манифест (`subgraph.yaml`) — определяет, какие источники данных ваш субграф будет индексировать. +- Схема (`schema.graphql`) — определяет, какие данные вы хотите извлекать из субграфа. - AssemblyScript Mappings (mapping.ts) - это код, который преобразует данные из Ваших источников данных в объекты, определенные в схеме. -Для получения более детальной информации о том, как создать свой субграф, ознакомьтесь с разделом [Creating a Subgraph](/developing/creating-a-subgraph/). +Для подробного объяснения того, как писать ваш субграф, ознакомьтесь с разделом [Создание субграфа](/developing/creating-a-subgraph/). ### 5. Развертывание Вашего субграфа -> Remember, deploying is not the same as publishing. +> Помните, развертывание — это не то же самое, что публикация. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. 
+Когда вы **разворачиваете** субграф, вы отправляете его в [Subgraph Studio](https://thegraph.com/studio/), где можете тестировать, настраивать и проверять его. Индексирование развернутого субграфа выполняется с помощью [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), который является единственным Индексатором, принадлежащим и управляемым Edge & Node, а не многими децентрализованными Индексаторами в сети The Graph. **Развернутый** субграф бесплатен для использования, имеет ограничения по количеству запросов, не виден для общественности и предназначен для разработки, настройки и тестирования. -После того как Ваш субграф будет написан, выполните следующие команды: +Как только ваш субграф написан, выполните следующие команды: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Аутентифицируйте и разверните свой субграф. Ключ развертывания можно найти на странице Subgraph в Subgraph Studio. +Аутентифицируйтесь и разверните ваш субграф. Ключ для развертывания можно найти на странице субграфа в Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -107,39 +107,39 @@ graph deploy ``` ```` -The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. +CLI запросит метку версии. Настоятельно рекомендуется использовать [семантическую версию](https://semver.org/), например, `0.0.1`. ### 6. Просмотр Вашего субграфа -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +Если вы хотите протестировать свой субграф перед его публикацией, вы можете использовать [Subgraph Studio](https://thegraph.com/studio/) для выполнения следующих действий: - Запустить пример запроса. -- Проанализировать Ваш субграф на панели управления для проверки информации. -- Проверить логи на панели управления, чтобы убедиться, нет ли ошибок в Вашем субграфе. 
Логи рабочего субграфа будут выглядеть следующим образом: +- Анализируйте свой субграф в панели управления, чтобы проверить информацию. +- Проверьте логи на панели управления, чтобы узнать, есть ли ошибки в вашем субграфе. Логи работающего субграфа будут выглядеть так: ![Subgraph logs](/img/subgraph-logs-image.png) -### Публикация Вашего субграфа в сети The Graph +### 7. Опубликуйте свой субграф в сети The Graph -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +Когда ваш субграф готов к использованию в рабочей среде, вы можете опубликовать его в децентрализованную сеть. Публикация — это действие в сети, которое выполняет следующие задачи: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- Это делает ваш субграф доступным для индексирования децентрализованными [Индексаторами](/indexing/overview/) в сети The Graph. +- Это снимает ограничения по количеству запросов и делает ваш субграф общедоступным для поиска и запросов в [Graph Explorer](https://thegraph.com/explorer/). +- Это делает ваш субграф доступным для [Кураторов](/resources/roles/curating/), чтобы они могли его курировать. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> Чем больше GRT вы и другие курируете на вашем субграфе, тем больше Индексаторов будут мотивированы индексировать ваш субграф, что улучшит качество обслуживания, уменьшит задержку и повысит избыточность сети для вашего субграфа. 
#### Публикация с помощью Subgraph Studio -Чтобы опубликовать свой субграф, нажмите кнопку «Опубликовать» на панели управления. +Чтобы опубликовать ваш субграф, нажмите кнопку "Опубликовать" на панели управления. -![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Выберите сеть, в которую хотите опубликовать свой субграф. +Выберите сеть, в которую вы хотите опубликовать свой субграф. #### Публикация с помощью CLI -Начиная с версии 0.73.0, Вы также можете публиковать свой субграф с помощью Graph CLI. +Начиная с версии 0.73.0, вы также можете опубликовать свой субграф с помощью Graph CLI. Откройте `graph-cli`. @@ -150,7 +150,7 @@ When your subgraph is ready for a production environment, you can publish it to graph codegen && graph build ``` -Then, +Затем, ```sh graph publish @@ -161,28 +161,28 @@ graph publish ![cli-ui](/img/cli-ui.png) -To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +Чтобы настроить ваше развертывание, смотрите раздел [Публикация субграфа](/subgraphs/developing/publishing/publishing-a-subgraph/). #### Добавление сигнала к Вашему субграфу -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. Чтобы привлечь Индексаторов для запросов к вашему субграфу, вам следует добавить сигнал курирования GRT. - - Это действие улучшает качество обслуживания, снижает задержку и увеличивает надежность и доступность сети для Вашего субграфа. + - Это действие улучшает качество обслуживания, снижает задержку и повышает сетевую избыточность и доступность для вашего субграфа. 2. Если индексаторы имеют право на получение вознаграждений за индексацию, они получат вознаграждения в GRT, в соответствии с количеством поданного сигнала. - - Рекомендуется добавить как минимум 3,000 GRT, чтобы привлечь 3 индексаторов. 
Проверьте право на вознаграждение на основе использования функций субграфа и поддерживаемых сетей. + - Рекомендуется курировать как минимум 3,000 GRT, чтобы привлечь 3 Индексаторов. Проверьте право на вознаграждения в зависимости от использования функций субграфа и поддерживаемых сетей. -To learn more about curation, read [Curating](/resources/roles/curating/). +Чтобы узнать больше о кураторстве, прочитайте статью [Курирование](/resources/roles/curating/). -Чтобы сэкономить на расходах на газ, Вы можете курировать свой субграф в той же транзакции, в которой Вы его публикуете, выбрав эту опцию: +Чтобы сэкономить на газовых расходах, вы можете курировать свой субграф в той же транзакции, в которой его публикуете, выбрав эту опцию: -![Subgraph publish](/img/studio-publish-modal.png) +![Публикация субграфа](/img/studio-publish-modal.png) ### 8. Запрос Вашего субграфа -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +Теперь у вас есть доступ к 100 000 бесплатных запросов в месяц для вашего субграфа в The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +Вы можете выполнять запросы к своему субграфу, отправляя запросы GraphQL по его URL для запросов, который можно найти, нажав кнопку "Запрос". -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +Для получения дополнительной информации о том, как выполнять запросы к данным из вашего субграфа, прочитайте статью [Запросы к данным в The Graph](/subgraphs/querying/introduction/). 
diff --git a/website/src/pages/ru/substreams/_meta-titles.json b/website/src/pages/ru/substreams/_meta-titles.json index 6262ad528c3a..b4353cede681 100644 --- a/website/src/pages/ru/substreams/_meta-titles.json +++ b/website/src/pages/ru/substreams/_meta-titles.json @@ -1,3 +1,3 @@ { - "developing": "Developing" + "developing": "Разработка" } diff --git a/website/src/pages/ru/substreams/developing/dev-container.mdx b/website/src/pages/ru/substreams/developing/dev-container.mdx index bd4acf16eec7..71d84bce5eb8 100644 --- a/website/src/pages/ru/substreams/developing/dev-container.mdx +++ b/website/src/pages/ru/substreams/developing/dev-container.mdx @@ -1,48 +1,48 @@ --- -title: Substreams Dev Container -sidebarTitle: Dev Container +title: Контейнер для разработки субпотоков +sidebarTitle: Контейнер для разработки --- -Develop your first project with Substreams Dev Container. +Разработайте свой первый проект с помощью контейнера для разработки. -## What is a Dev Container? +## Что такое контейнер для разработки? -It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). +Это инструмент, который поможет Вам создать первый проект. Вы можете использовать его удалённо через Github Codespaces или локально, клонировав [репозиторий Substreams Starter](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Внутри контейнера для разработки команда `substreams init` настраивает сгенерированный код проекта субпотоков, позволяя вам легко создать субграф или решение на базе SQL для обработки данных. -## Prerequisites +## Предварительные требования -- Ensure Docker and VS Code are up-to-date. 
+- Убедитесь, что Docker и VS Code обновлены до последних версий. -## Navigating the Dev Container +## Ориентирование в контейнере для разработки -In the Dev Container, you can either build or import your own `substreams.yaml` and associate modules within the minimal path or opt for the automatically generated Substreams paths. Then, when you run the `Substreams Build` it will generate the Protobuf files. +В контейнере для разработки вы можете либо создать, либо импортировать свой собственный файл `substreams.yaml` и ассоциировать модули в минимальном пути, либо выбрать автоматически сгенерированные пути субпотоков. Затем, когда вы запускаете команду `Substreams Build`, она создаёт файлы Protobuf. -### Options +### Параметры -- **Minimal**: Starts you with the raw block `.proto` and requires development. This path is intended for experienced users. -- **Non-Minimal**: Extracts filtered data using network-specific caches and Protobufs taken from corresponding foundational modules (maintained by the StreamingFast team). This path generates a working Substreams out of the box. +- **Minimal**: Начинает с необработанного блока `.proto` и требует доработки. Этот путь предназначен для опытных пользователей. +- **Non-Minimal**: Извлекает отфильтрованные данные, используя сетевые кэши и Protobuf файлы, полученные из соответствующих основных модулей (поддерживаемых командой StreamingFast). Этот путь создаёт рабочие субпотоки "из коробки". -To share your work with the broader community, publish your `.spkg` to [Substreams registry](https://substreams.dev/) using: +Чтобы поделиться своей работой с широкой аудиторией, опубликуйте свой `.spkg` в [реестре субпотоков](https://substreams.dev/) с помощью: - `substreams registry login` - `substreams registry publish` -> Note: If you run into any problems within the Dev Container, use the `help` command to access trouble shooting tools. 
+> Примечание: Если у вас возникнут проблемы внутри контейнера для разработки, используйте команду `help`, чтобы получить доступ к инструментам для устранения неполадок. -## Building a Sink for Your Project +## Создание Sink для вашего проекта -You can configure your project to query data either through a Subgraph or directly from an SQL database: +Вы можете настроить свой проект для запроса данных либо через субграф, либо напрямую из базы данных SQL: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). -- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +- **Субграф**: Запустите команду `substreams codegen subgraph`. Это создаст проект с базовыми файлами `schema.graphql` и `mappings.ts`. Вы можете настроить эти файлы для определения объектов на основе данных, извлечённых с помощью субпотоков. Больше настроек смотрите в [документации по хранилищу субграфа](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **SQL**: Запустите команду `substreams codegen sql` для SQL-запросов. Для получения дополнительной информации о настройке SQL sink, обратитесь к [документации по SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -## Deployment Options +## Варианты развертывания -To deploy a Subgraph, you can either run the `graph-node` locally using the `deploy-local` command or deploy to Subgraph Studio by using the `deploy` command found in the `package.json` file. 
+Чтобы развернуть субграф, вы можете либо запустить `graph-node` локально с помощью команды `deploy-local`, либо развернуть его в Subgraph Studio, используя команду `deploy`, указанную в файле `package.json`. -## Common Errors +## Распространённые ошибки -- When running locally, make sure to verify that all Docker containers are healthy by running the `dev-status` command. -- If you put the wrong start-block while generating your project, navigate to the `substreams.yaml` to change the block number, then re-run `substreams build`. diff --git a/website/src/pages/ru/substreams/developing/sinks.mdx b/website/src/pages/ru/substreams/developing/sinks.mdx index f1c5360f39a9..f4b6f7326f49 100644 --- a/website/src/pages/ru/substreams/developing/sinks.mdx +++ b/website/src/pages/ru/substreams/developing/sinks.mdx @@ -1,51 +1,51 @@ --- -title: Official Sinks +title: Официальные Sinks --- -Choose a sink that meets your project's needs. +Подберите sink, соответствующий требованиям Вашего проекта. ## Обзор -Once you find a package that fits your needs, you can choose how you want to consume the data. +Как только вы найдете пакет, который соответствует Вашим потребностям, Вы можете выбрать способ потребления данных. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks — это интеграции, которые позволяют отправлять извлечённые данные в различные системы-получатели, такие как база данных SQL, файл или субграф. ## Sinks -> Note: Some of the sinks are officially supported by the StreamingFast core development team (i.e. 
active support is provided), but other sinks are community-driven and support can't be guaranteed. +> Примечание: Некоторые из sinks официально поддерживаются командой разработчиков ядра StreamingFast (то есть предоставляется активная поддержка), в то время как другие sinks являются проектами, созданными сообществом, и поддержка для них не может быть гарантирована. -- [SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Send the data to a database. -- [Subgraph](/sps/introduction/): Configure an API to meet your data needs and host it on The Graph Network. -- [Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream): Stream data directly from your application. -- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Send data to a PubSub topic. -- [Community Sinks](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Explore quality community maintained sinks. +- [База данных SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink): Отправьте данные в базу данных. +- [Субграф](/sps/introduction/): Настройте API, чтобы удовлетворить потребности в данных, и разместите его в сети The Graph. +- [Прямая трансляция](https://docs.substreams.dev/how-to-guides/sinks/stream): Транслируйте данные напрямую из вашего приложения. +- [PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub): Отправляйте данные в тему PubSub. +- [Sinks сообщества](https://docs.substreams.dev/how-to-guides/sinks/community-sinks): Изучите качественные sinks, поддерживаемые сообществом. -> Important: If you’d like your sink (e.g., SQL or PubSub) hosted for you, reach out to the StreamingFast team [here](mailto:sales@streamingfast.io). +> Важно: Если вы хотите, чтобы ваш sink (например, SQL или PubSub) был размещён для Вас, свяжитесь с командой StreamingFast [здесь](mailto:sales@streamingfast.io). 
-## Navigating Sink Repos +## Ориентирование в репозиториях Sink -### Official +### Официально -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Имя | Поддержка | Мейнтейнер | Исходный код | +| ---------- | --------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | 
Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | -### Community +### Сообщество -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Имя | Поддержка | Мейнтейнер | Исходный код | +| ---------- | --------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Сообщество | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Сообщество | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Сообщество | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Сообщество | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -- O = Official Support (by one of the main Substreams providers) -- C = Community Support +- O = Официальная поддержка (от одного из основных поставщиков субпотоков) +- C = Поддержка сообщества diff --git a/website/src/pages/ru/substreams/developing/solana/account-changes.mdx b/website/src/pages/ru/substreams/developing/solana/account-changes.mdx index 7f089022b1f0..6aea139b7ae7 100644 --- a/website/src/pages/ru/substreams/developing/solana/account-changes.mdx +++ 
b/website/src/pages/ru/substreams/developing/solana/account-changes.mdx @@ -1,57 +1,57 @@ --- -title: Solana Account Changes +title: Изменения учетных записей Solana sidebarTitle: Account Changes --- -Learn how to consume Solana account change data using Substreams. +Узнайте, как использовать данные изменений учетных записей Solana с помощью субпотоков. -## Introduction +## Введение -This guide walks you through the process of setting up your environment, configuring your first Substreams stream, and consuming account changes efficiently. By the end of this guide, you will have a working Substreams feed that allows you to track real-time account changes on the Solana blockchain, as well as historical account change data. +Это руководство проведет Вас через процесс настройки вашей среды, конфигурирования Ваших первых субпотоков и эффективного потребления изменений учетных записей. К концу этого руководства у Вас будут рабочие субпотоки, которые позволят отслеживать изменения учетных записей в реальном времени на блокчейне Solana, а также получать исторические данные об изменениях учетных записей. -> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601. +> Примечание: история изменений учетных записей Solana начинается с 2025 года, блок 310629601. -For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes). +Для каждого блока учетных записей Solana в субпотоках фиксируется только последнее обновление для каждой учетной записи. См. [Protobuf справочник](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). 
Если учетная запись была удалена, в payload будет указано `deleted == True`. Кроме того, события с низким приоритетом, такие как изменения с участием специального владельца "Vote11111111…" или изменения, не влияющие на данные учетной записи (например, изменения лампортов), были опущены. -> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`. +> ПРИМЕЧАНИЕ: чтобы проверить задержку субпотоков для аккаунтов Solana, измеряемую как отклонение от головного блока, установите [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) и выполните команду `substreams run solana-common blocks_without_votes -s -1 -o clock`. ## Начало работы -### Prerequisites +### Предварительные требования -Before you begin, ensure that you have the following: +Прежде чем начать, убедитесь, что у Вас есть следующее: -1. [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) installed. -2. A [Substreams key](https://docs.substreams.dev/reference-material/substreams-cli/authentication) for access to the Solana Account Change data. -3. Basic knowledge of [how to use](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) the command line interface (CLI). +1. [Субпотоки CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) установлены. +2. [Ключ субпотока](https://docs.substreams.dev/reference-material/substreams-cli/authentication) для доступа к данным об изменении учетной записи Солана. +3. Базовые знания о том, [как использовать](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) интерфейс командной строки (CLI). 
-### Step 1: Set Up a Connection to Solana Account Change Substreams +### Шаг 1: Настройка подключения к субпотокам изменений аккаунтов Solana -Now that you have Substreams CLI installed, you can set up a connection to the Solana Account Change Substreams feed. +Теперь, когда у вас установлен CLI субпотоков, Вы можете настроить подключение к потоку изменений аккаунтов Solana в субпотоках. -- Using the [Solana Accounts Foundational Module](https://substreams.dev/packages/solana-accounts-foundational/latest), you can choose to stream data directly or use the GUI for a more visual experience. The following `gui` example filters for Honey Token account data. +- Используя [основной модуль аккаунтов Solana](https://substreams.dev/packages/solana-accounts-foundational/latest), Вы можете либо транслировать данные напрямую, либо использовать графический интерфейс (GUI) для более наглядного взаимодействия. В следующем примере `gui` выполняется фильтрация данных аккаунта токена Honey. ```bash substreams gui solana-accounts-foundational filtered_accounts -t +10 -p filtered_accounts="owner:TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA || account:4vMsoUT2BWatFweudnQM1xedRLfJgJ7hswhcpz4xgBTy" ``` -- This command will stream account changes directly to your terminal. +- Эта команда будет транслировать изменения аккаунта непосредственно в ваш терминал. ```bash substreams run solana-accounts-foundational filtered_accounts -s -1 -o clock ``` -The Foundational Module has support for filtering on specific accounts and/or owners. You can adjust the query based on your needs. +Основной модуль поддерживает фильтрацию по конкретным аккаунтам и/или владельцам. Вы можете настроить запрос в соответствии с Вашими потребностями. 
-### Step 2: Sink the Substreams +### Шаг 2: Подключение субпотоков -Consume the account stream [directly in your application](https://docs.substreams.dev/how-to-guides/sinks/stream) using a callback or make it queryable by using the [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +Используйте поток данных аккаунтов [напрямую в вашем приложении](https://docs.substreams.dev/how-to-guides/sinks/stream), используя callback-функцию, или сделайте его доступным для запросов, используя [SQL-DB sink](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). -### Step 3: Setting up a Reconnection Policy +### Шаг 3: Настройка политики переподключения -[Cursor Management](https://docs.substreams.dev/reference-material/reliability-guarantees) ensures seamless continuity and retraceability by allowing you to resume from the last consumed block if the connection is interrupted. This functionality prevents data loss and maintains a persistent stream. +[Управление курсором](https://docs.substreams.dev/reference-material/reliability-guarantees) обеспечивает бесперебойную непрерывность и возможность возврата, позволяя возобновить обработку с последнего потребленного блока в случае разрыва соединения. Эта функция предотвращает потерю данных и поддерживает стабильный поток. 
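Идею управления курсором из шага 3 можно схематично представить так (упрощённый набросок, не официальный API: реальный sink получает строку курсора вместе с каждым блоком и должен сохранять её в долговременное хранилище; здесь для иллюстрации используется просто поле в памяти):

```rust
// Упрощённая иллюстрация управления курсором. Реальный sink сохраняет
// курсор, пришедший с каждым обработанным блоком, и при переподключении
// передаёт его серверу, чтобы возобновить поток без потери данных.
struct CursorStore {
    cursor: Option<String>,
}

impl CursorStore {
    fn new() -> Self {
        CursorStore { cursor: None }
    }

    // Вызывается после успешной обработки каждого блока.
    fn commit(&mut self, cursor: &str) {
        self.cursor = Some(cursor.to_string());
    }

    // При переподключении: курсор, с которого нужно возобновить поток,
    // или None, если поток начинается с самого начала.
    fn resume_from(&self) -> Option<&str> {
        self.cursor.as_deref()
    }
}
```

В реальном приложении `commit` должен записывать курсор в ту же транзакцию, что и сами данные, чтобы курсор и данные не расходились при сбое.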
-When creating or using a sink, the user's primary responsibility is to provide implementations of BlockScopedDataHandler and a BlockUndoSignalHandler implementation(s) which has the following interface: +При создании или использовании sink основной задачей пользователя является предоставление реализаций BlockScopedDataHandler и BlockUndoSignalHandler, которые должны иметь следующий интерфейс: ```go import ( diff --git a/website/src/pages/ru/substreams/developing/solana/transactions.mdx index dbd16d487158..242cccdfb006 100644 --- a/website/src/pages/ru/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/ru/substreams/developing/solana/transactions.mdx @@ -1,61 +1,61 @@ --- -title: Solana Transactions -sidebarTitle: Transactions +title: Транзакции Solana +sidebarTitle: Транзакции --- -Learn how to initialize a Solana-based Substreams project within the Dev Container. +Узнайте, как инициализировать проект Substreams на основе Solana в рамках Dev Container. -> Note: This guide excludes [Account Changes](/substreams/developing/solana/account-changes/). +> Примечание: Это руководство не охватывает [Изменения аккаунтов](/substreams/developing/solana/account-changes/). -## Options +## Варианты -If you prefer to begin locally within your terminal rather than through the Dev Container (VS Code required), refer to the [Substreams CLI installation guide](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). +Если Вы предпочитаете начать работу локально в Вашем терминале, а не через Dev Container (требуется VS Code), обратитесь к [руководству по установке Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli). -## Step 1: Initialize Your Solana Substreams Project +## Шаг 1: Инициализация Вашего проекта субпотоков Solana -1.
Open the [Dev Container](https://github.com/streamingfast/substreams-starter) and follow the on-screen steps to initialize your project. +1. Откройте [Dev Container](https://github.com/streamingfast/substreams-starter) и следуйте шагам на экране, чтобы инициализировать Ваш проект. -2. Running `substreams init` will give you the option to choose between two Solana project options. Select the best option for your project: - - **sol-minimal**: This creates a simple Substreams that extracts raw Solana block data and generates corresponding Rust code. This path will start you with the full raw block, and you can navigate to the `substreams.yaml` (the manifest) to modify the input. - - **sol-transactions**: This creates a Substreams that filters Solana transactions based on one or more Program IDs and/or Account IDs, using the cached [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0). - - **sol-anchor-beta**: This creates a Substreams that decodes instructions and events with an Anchor IDL. If an IDL isn’t available (reference [Anchor CLI](https://www.anchor-lang.com/docs/cli)), then you’ll need to provide it yourself. +2. При выполнении команды `substreams init` Вам будет предложено выбрать один из нескольких вариантов проектов для Solana. Выберите наиболее подходящий вариант для Вашего проекта: + - **sol-minimal**: Этот вариант создаёт простые субпотоки, которые извлекают сырые данные блоков Solana и генерируют соответствующий код на Rust. В этом случае Вы начнёте с полного сырого блока и сможете перейти к файлу `substreams.yaml` (манифесту), чтобы изменить входные данные. + - **sol-transactions**: Этот вариант создаёт субпотоки, которые фильтруют транзакции Solana на основе одного или нескольких Program ID и/или Account ID, используя кешированный [Solana Foundational Module](https://substreams.dev/streamingfast/solana-common/v0.3.0).
+ - **sol-anchor-beta**: Этот вариант создаёт субпотоки, которые декодируют инструкции и события с использованием Anchor IDL. Если IDL недоступен (смотрите [Anchor CLI](https://www.anchor-lang.com/docs/cli)), Вам нужно будет предоставить его самостоятельно. -The modules within Solana Common do not include voting transactions. To gain a 75% reduction in data processing size and costs, delay your stream by over 1000 blocks from the head. This can be done using the [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) function in Rust. +Модули в Solana Common не включают транзакции голосования. Чтобы уменьшить размер и затраты на обработку данных на 75%, задержите Ваш поток более чем на 1000 блоков от головного блока сети. Это можно сделать с помощью функции [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) в Rust. -To access voting transactions, use the full Solana block, `sf.solana.type.v1.Block`, as input. +Чтобы получить доступ к транзакциям голосования, используйте полный блок Solana, `sf.solana.type.v1.Block`, в качестве входных данных. -## Step 2: Visualize the Data +## Шаг 2: Визуализация данных -1. Run `substreams auth` to create your [account](https://thegraph.market/) and generate an authentication token (JWT), then pass this token back as input. +1. Выполните команду `substreams auth`, чтобы создать Ваш [аккаунт](https://thegraph.market/) и сгенерировать токен аутентификации (JWT), затем передайте этот токен обратно в качестве входных данных. -2. Now you can freely use the `substreams gui` to visualize and iterate on your extracted data. +2. Теперь Вы можете свободно использовать команду `substreams gui`, чтобы визуализировать и итеративно работать с Вашими извлечёнными данными. -## Step 2.5: (Optionally) Transform the Data +## Шаг 2.5: (По желанию) Преобразование данных -Within the generated directories, modify your Substreams modules to include additional filters, aggregations, and transformations, then update the manifest accordingly.
+В сгенерированных директориях отредактируйте Ваши модули субпотоков, чтобы добавить дополнительные фильтры, агрегации и преобразования, а затем обновите манифест соответственно. -## Step 3: Load the Data +## Шаг 3: Загрузка данных -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +Чтобы сделать Ваши Субпотоки доступными для запросов (в отличие от [прямой трансляции](https://docs.substreams.dev/how-to-guides/sinks/stream)), Вы можете автоматически сгенерировать [субграф на базе Субпотоков](/sps/introduction/) или sink для SQL-базы данных. -### Subgraph +### Субграф -1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. -3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. +1. Выполните команду `substreams codegen subgraph`, чтобы инициализировать sink, создавая необходимые файлы и определения функций. +2. Создайте Ваши [мэппинги субграфа](/sps/triggers/) в файле `mappings.ts` и связанные сущности в файле `schema.graphql`. +3. Соберите и разверните локально или в [Subgraph Studio](https://thegraph.com/studio-pricing/), выполнив команду `deploy-studio`. ### SQL -1. Run `substreams codegen sql` and choose from either ClickHouse or Postgres to initialize the sink, producing the necessary files. -2. Run `substreams build` build the [Substream SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink) sink. -3. Run `substreams-sink-sql` to sink the data into your selected SQL DB. +1. Выполните команду `substreams codegen sql` и выберите либо ClickHouse, либо Postgres, чтобы инициализировать sink и создать необходимые файлы. +2.
Выполните команду `substreams build`, чтобы собрать sink [SQL субпотока](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). +3. Выполните команду `substreams-sink-sql`, чтобы записать данные в выбранную Вами базу данных SQL. -> Note: Run `help` to better navigate the development environment and check the health of containers. +> Примечание: Выполните команду `help`, чтобы лучше ориентироваться в среде разработки и проверить состояние контейнеров. ## Дополнительные ресурсы -You may find these additional resources helpful for developing your first Solana application. +Вам могут быть полезны следующие дополнительные ресурсы для разработки Вашего первого приложения на Solana. -- The [Dev Container Reference](/substreams/developing/dev-container/) helps you navigate the container and its common errors. -- The [CLI reference](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) lets you explore all the tools available in the Substreams CLI. -- The [Components Reference](https://docs.substreams.dev/reference-material/substreams-components/packages) dives deeper into navigating the `substreams.yaml`. +- [Справочник по Dev Container](/substreams/developing/dev-container/) поможет Вам ориентироваться в контейнере и решать распространённые ошибки. +- [Справочник по CLI](https://docs.substreams.dev/reference-material/substreams-cli/command-line-interface) позволяет Вам изучить все инструменты, доступные в CLI субпотоков. +- [Справочник по компонентам](https://docs.substreams.dev/reference-material/substreams-components/packages) более подробно объясняет, как работать с файлом `substreams.yaml`. 
diff --git a/website/src/pages/ru/substreams/introduction.mdx b/website/src/pages/ru/substreams/introduction.mdx index 320c8c262175..2b3d0a89d87b 100644 --- a/website/src/pages/ru/substreams/introduction.mdx +++ b/website/src/pages/ru/substreams/introduction.mdx @@ -1,26 +1,26 @@ --- -title: Introduction to Substreams -sidebarTitle: Introduction +title: Введение в Субпотоки +sidebarTitle: Введение --- -![Substreams Logo](/img/substreams-logo.png) +![Логотип Субпотоков](/img/substreams-logo.png) To start coding right away, check out the [Substreams Quick Start](/substreams/quick-start/). ## Обзор -Substreams is a powerful parallel blockchain indexing technology designed to enhance performance and scalability within The Graph Network. +Субпотоки — это мощная технология параллельного индексирования блокчейна, разработанная для повышения производительности и масштабируемости в сети The Graph. -## Substreams Benefits +## Преимущества Субпотоков -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. -- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. -- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. -- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. +- **Ускоренное индексирование**: Повышает скорость индексирования субграфов с помощью параллельного движка для более быстрого извлечения и обработки данных. +- **Мультичейн-поддержка**: Расширяет возможности индексирования за пределы сетей на основе EVM, поддерживая такие экосистемы, как Solana, Injective, Starknet и Vara. 
+- **Усовершенствованная модель данных**: Обеспечивает доступ к детализированным данным, таким как данные уровня `trace` в EVM или изменения аккаунтов в Solana, с эффективным управлением форками и разрывами соединения. +- **Поддержка нескольких хранилищ**: Для Субграфа, базы данных Postgres, Clickhouse и Mongo. -## How Substreams Works in 4 Steps +## Как работают Субпотоки: 4 этапа -1. You write a Rust program, which defines the transformations that you want to apply to the blockchain data. For example, the following Rust function extracts relevant information from an Ethereum block (number, hash, and parent hash). +1. Вы пишете программу на Rust, которая определяет преобразования, применяемые к данным блокчейна. Например, следующая функция на Rust извлекает соответствующую информацию из блока Ethereum (номер, хеш и хеш родительского блока). ```rust fn get_my_block(blk: Block) -> Result { @@ -34,12 +34,12 @@ fn get_my_block(blk: Block) -> Result { } ``` -2. You wrap up your Rust program into a WASM module just by running a single CLI command. +2. Вы упаковываете свою программу на Rust в WASM-модуль с помощью одной команды в CLI. -3. The WASM container is sent to a Substreams endpoint for execution. The Substreams provider feeds the WASM container with the blockchain data and the transformations are applied. +3. WASM-контейнер отправляется на конечную точку Субпотоков для выполнения. Провайдер Субпотоков передает в WASM-контейнер данные блокчейна, и к ним применяются преобразования. -4. You select a [sink](https://docs.substreams.dev/how-to-guides/sinks), a place where you want to send the transformed data (such as a SQL database or a Subgraph). +4. Вы выбираете [хранилище](https://docs.substreams.dev/how-to-guides/sinks), куда хотите отправить преобразованные данные (например, SQL-базу данных или Субграф). 
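Функцию из шага 1 можно представить в самодостаточном виде (набросок: типы `Block` и `MyBlock` здесь условные, упрощённые вместо структур, сгенерированных из protobuf, а тип ошибки заменён на `String`):

```rust
// Условные упрощённые типы вместо сгенерированных protobuf-структур.
struct Block {
    number: u64,
    hash: String,
    parent_hash: String,
}

struct MyBlock {
    number: u64,
    hash: String,
    parent_hash: String,
}

// Та же идея, что и в шаге 1: извлечь из блока Ethereum номер,
// хеш и хеш родительского блока.
fn get_my_block(blk: Block) -> Result<MyBlock, String> {
    Ok(MyBlock {
        number: blk.number,
        hash: blk.hash,
        parent_hash: blk.parent_hash,
    })
}
```

В реальном модуле входной тип берётся из пакета protobuf-определений, а результат сериализуется обратно в protobuf перед передачей дальше по конвейеру.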
## Дополнительные ресурсы -All Substreams developer documentation is maintained by the StreamingFast core development team on the [Substreams registry](https://docs.substreams.dev). +Вся документация для разработчиков Субпотоков поддерживается командой разработчиков ядра StreamingFast в [реестре Субпотоков](https://docs.substreams.dev). diff --git a/website/src/pages/ru/substreams/publishing.mdx b/website/src/pages/ru/substreams/publishing.mdx index 42808170179f..d19904d26e9e 100644 --- a/website/src/pages/ru/substreams/publishing.mdx +++ b/website/src/pages/ru/substreams/publishing.mdx @@ -1,53 +1,53 @@ --- -title: Publishing a Substreams Package -sidebarTitle: Publishing +title: Публикация пакета Субпотоков +sidebarTitle: Публикация --- -Learn how to publish a Substreams package to the [Substreams Registry](https://substreams.dev). +Узнайте, как опубликовать пакет Субпотоков в [реестре Субпотоков](https://substreams.dev). ## Обзор -### What is a package? +### Что такое пакет? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +Пакет Субпотоков — это предварительно скомпилированный бинарный файл, который определяет конкретные данные, извлекаемые из блокчейна, аналогично файлу `mapping.ts` в традиционных субграфах. -## Publish a Package +## Публикация пакета -### Prerequisites +### Предварительные требования -- You must have the Substreams CLI installed. -- You must have a Substreams package (`.spkg`) that you want to publish. +- У Вас должен быть установлен CLI Субпотоков. +- У Вас должен быть пакет Субпотоков (`.spkg`), который Вы хотите опубликовать. -### Step 1: Run the `substreams publish` Command +### Шаг 1: Запустите команду `substreams publish` -1. In a command-line terminal, run `substreams publish .spkg`. +1. В терминале командной строки выполните `substreams publish .spkg`. -2. 
If you do not have a token set in your computer, navigate to `https://substreams.dev/me`. +2. Если у Вас не установлен токен на компьютере, перейдите на `https://substreams.dev/me`. -![get token](/img/1_get-token.png) +![получить токен](/img/1_get-token.png) -### Step 2: Get a Token in the Substreams Registry +### Шаг 2: Получите токен в реестре Субпотоков -1. In the Substreams Registry, log in with your GitHub account. +1. Войдите в реестр Субпотоков с использованием своей учетной записи GitHub. -2. Create a new token and copy it in a safe location. +2. Создайте новый токен и сохраните его в надежном месте. -![new token](/img/2_new_token.png) +![новый токен](/img/2_new_token.png) -### Step 3: Authenticate in the Substreams CLI +### Шаг 3: Аутентифицируйтесь в CLI Субпотоков -1. Back in the Substreams CLI, paste the previously generated token. +1. Вернитесь в CLI Субпотоков и вставьте ранее сгенерированный токен. -![paste token](/img/3_paste_token.png) +![вставить токен](/img/3_paste_token.png) -2. Lastly, confirm that you want to publish the package. +2. В заключение подтвердите, что хотите опубликовать пакет. -![confirm](/img/4_confirm.png) +![подтвердить](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +Вот и все! Вы успешно опубликовали пакет в реестре Субпотоков. -![success](/img/5_success.png) +![успех](/img/5_success.png) ## Дополнительные ресурсы -Visit [Substreams](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. +Посетите сайт [Субпотоков](https://substreams.dev/), чтобы изучить растущую коллекцию готовых к использованию пакетов Субпотоков, поддерживающих различные блокчейн-сети.
diff --git a/website/src/pages/ru/substreams/quick-start.mdx b/website/src/pages/ru/substreams/quick-start.mdx index c74623e3c753..922ee9c9e2db 100644 --- a/website/src/pages/ru/substreams/quick-start.mdx +++ b/website/src/pages/ru/substreams/quick-start.mdx @@ -1,30 +1,30 @@ --- -title: Substreams Quick Start +title: Быстрый старт с Субпотоками sidebarTitle: Быстрый старт --- -Discover how to utilize ready-to-use substream packages or develop your own. +Узнайте, как использовать готовые пакеты Субпотоков или разработать собственные. ## Обзор -Integrating Substreams can be quick and easy. They are permissionless, and you can [obtain a key here](https://thegraph.market/) without providing personal information to start streaming on-chain data. +Интеграция Субпотоков может быть быстрой и простой. Они не требуют разрешений, и Вы можете без предоставления личной информации [получить здесь ключ](https://thegraph.market/) для того, чтобы начать потоковую передачу он-чейн данных. -## Start Building +## Начало создания -### Use Substreams Packages +### Использование пакетов Субпотоков -There are many ready-to-use Substreams packages available. You can explore these packages by visiting the [Substreams Registry](https://substreams.dev) and [sinking them](/substreams/developing/sinks/). The registry lets you search for and find any package that meets your needs. +Доступно множество готовых пакетов Субпотоков. Вы можете изучить эти пакеты, посетив [реестр Субпотоков](https://substreams.dev) и [используя их](/substreams/developing/sinks/). Реестр позволяет Вам искать и находить любые пакеты, которые соответствуют Вашим требованиям. -Once you find a package that fits your needs, you can choose how you want to consume the data: +Найдя пакет, который соответствует Вашим потребностям, Вы можете выбрать способ потребления данных: -- **[Subgraph](/sps/introduction/)**: Configure an API to meet your data needs and host it on The Graph Network. 
-- **[SQL Database](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Send the data to a database. -- **[Direct Streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Stream data directly to your application. -- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Send data to a PubSub topic. +- **[Субграф](/sps/introduction/)**: Настройте API для удовлетворения своих потребностей в данных и разместите его в сети The Graph. +- **[База данных SQL](https://docs.substreams.dev/how-to-guides/sinks/sql-sink)**: Отправьте данные в базу данных. +- **[Прямая потоковая передача](https://docs.substreams.dev/how-to-guides/sinks/stream)**: Потоковая передача данных непосредственно в Ваше приложение. +- **[PubSub](https://docs.substreams.dev/how-to-guides/sinks/pubsub)**: Отправьте данные в тему PubSub. -### Develop Your Own +### Разработка своего собственного -If you can't find a Substreams package that meets your specific needs, you can develop your own. Substreams are built with Rust, so you'll write functions that extract and filter the data you need from the blockchain. To get started, check out the following tutorials: +Если Вы не можете найти пакет Субпотоков, который соответствует Вашим конкретным потребностям, Вы можете разработать свой собственный. Субпотоки создаются с использованием Rust, поэтому Вы будете писать функции, которые извлекают и фильтруют необходимые Вам данные из блокчейна. 
Чтобы начать, ознакомьтесь со следующими руководствами: - [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) - [Solana](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-solana) @@ -32,11 +32,11 @@ If you can't find a Substreams package that meets your specific needs, you can d - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) -To build and optimize your Substreams from zero, use the minimal path within the [Dev Container](/substreams/developing/dev-container/). +Чтобы создать и оптимизировать свои Субпотоки с нуля, используйте минимальный путь внутри [контейнера для разработки](/substreams/developing/dev-container/). -> Note: Substreams guarantees that you'll [never miss data](https://docs.substreams.dev/reference-material/reliability-guarantees) with a simple reconnection policy. +> Примечание: Субпотоки гарантируют, что Вы [никогда не пропустите данные](https://docs.substreams.dev/reference-material/reliability-guarantees) благодаря простой политике повторного подключения. ## Дополнительные ресурсы -- For additional guidance, reference the [Tutorials](https://docs.substreams.dev/tutorials/intro-to-tutorials) and follow the [How-To Guides](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) on Streaming Fast docs. -- For a deeper understanding of how Substreams works, explore the [architectural overview](https://docs.substreams.dev/reference-material/architecture) of the data service. +- Для получения дополнительной помощи обратитесь к [урокам](https://docs.substreams.dev/tutorials/intro-to-tutorials) и следуйте [пошаговым инструкциям](https://docs.substreams.dev/how-to-guides/develop-your-own-substreams) в документации Streaming Fast. 
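Функции извлечения и фильтрации, о которых идёт речь выше, сводятся к обычному Rust-коду. Набросок (условный упрощённый тип `Instruction` вместо структур, сгенерированных из protobuf, например из `sf.solana.type.v1`):

```rust
// Условный упрощённый тип инструкции; реальные модули Субпотоков работают
// с типами, сгенерированными из protobuf-определений выбранной сети.
struct Instruction {
    program_id: String,
}

// Отбираем только инструкции, относящиеся к интересующей нас программе, —
// типичный шаблон фильтрации в map-модуле.
fn filter_by_program(instrs: Vec<Instruction>, program_id: &str) -> Vec<Instruction> {
    instrs
        .into_iter()
        .filter(|i| i.program_id == program_id)
        .collect()
}
```

В настоящем модуле такая функция была бы помечена макросом модуля Субпотоков и получала бы на вход целый блок, а не готовый вектор инструкций.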
+- Для более глубокого понимания того, как работают Субпотоки, ознакомьтесь с [обзором архитектуры](https://docs.substreams.dev/reference-material/architecture) этого сервиса данных. diff --git a/website/src/pages/ru/supported-networks.json index 2a11f68ef9a8..f9d46c46a19d 100644 --- a/website/src/pages/ru/supported-networks.json +++ b/website/src/pages/ru/supported-networks.json @@ -1,7 +1,7 @@ { - "name": "Name", - "id": "ID", + "name": "Имя", + "id": "Идентификатор", "subgraphs": "Субграфы", - "substreams": "Substreams", + "substreams": "Субпотоки", "firehose": "Firehose" } diff --git a/website/src/pages/ru/supported-networks.mdx index 9993a8821b0d..6399dfa3844c 100644 --- a/website/src/pages/ru/supported-networks.mdx +++ b/website/src/pages/ru/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Поддерживаемые сети hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio полагается на стабильность и надежность базовых технологий, например, таких, как JSON-RPC, Firehose и конечных точек Substreams.
-- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- Субграфы, индексирующие Gnosis Chain, теперь можно развертывать с идентификатором сети `gnosis`. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- Полный список поддерживаемых функций в децентрализованной сети можно найти [на этой странице](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Локальный запуск Graph Node -If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. +Если предпочитаемая Вами сеть не поддерживается в децентрализованной сети The Graph, Вы можете запустить собственную [Graph Node](https://github.com/graphprotocol/graph-node) для индексирования любой совместимой с EVM сети. Убедитесь, что [версия](https://github.com/graphprotocol/graph-node/releases), которую вы используете, поддерживает эту сеть и у Вас есть необходимая конфигурация. -Graph Node также может индексировать другие протоколы через интеграцию с Firehose. Интеграции Firehose созданы для сетей на базе NEAR, Arweave и Cosmos. Кроме того, Graph Node может поддерживать субграфы на основе Substreams для любой сети с поддержкой Substreams. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. 
diff --git a/website/src/pages/ru/token-api/_meta-titles.json b/website/src/pages/ru/token-api/_meta-titles.json new file mode 100644 index 000000000000..e3d12c4a864f --- /dev/null +++ b/website/src/pages/ru/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "Часто задаваемые вопросы" +} diff --git a/website/src/pages/ru/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ru/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/ru/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. diff --git a/website/src/pages/ru/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ru/token-api/evm/get-holders-evm-by-contract.mdx new file mode 100644 index 000000000000..0bb79e41ed54 --- /dev/null +++ b/website/src/pages/ru/token-api/evm/get-holders-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getHoldersEvmByContract +--- + +The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract. 
diff --git a/website/src/pages/ru/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ru/token-api/evm/get-ohlc-prices-evm-by-contract.mdx new file mode 100644 index 000000000000..d1558ddd6e78 --- /dev/null +++ b/website/src/pages/ru/token-api/evm/get-ohlc-prices-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token OHLCV Prices by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getOhlcPricesEvmByContract +--- + +The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format. diff --git a/website/src/pages/ru/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ru/token-api/evm/get-tokens-evm-by-contract.mdx new file mode 100644 index 000000000000..b6fab8011fc2 --- /dev/null +++ b/website/src/pages/ru/token-api/evm/get-tokens-evm-by-contract.mdx @@ -0,0 +1,9 @@ +--- +title: Token Holders and Supply by Contract Address +template: + type: openApi + apiId: tokenApi + operationId: getTokensEvmByContract +--- + +The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more. diff --git a/website/src/pages/ru/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ru/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/ru/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time.
diff --git a/website/src/pages/ru/token-api/faq.mdx b/website/src/pages/ru/token-api/faq.mdx new file mode 100644 index 000000000000..78b478d6d7ef --- /dev/null +++ b/website/src/pages/ru/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Общая информация + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? + +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? 
+ +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. + +### Are there rate limits or usage costs? + +During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta. + +### What networks are supported, and how do I specify them? + +You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/).
The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet. + +### Why do I only see 10 results? How can I get more data? + +Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100). + +### How do I fetch older transfer history? + +The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call. + +### What does an empty `"data": []` array mean? + +An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error. + +### Why is the JSON response wrapped in a `"data"` array? + +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. 
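Since amounts arrive as strings, JavaScript's built-in `BigInt` is one safe way to work with them exactly. A minimal sketch, assuming a string `amount` and a numeric `decimals` field (names are illustrative; check the actual response shape):

```javascript
// Convert a string token amount to an exact BigInt, then derive a
// human-readable decimal string using the token's `decimals` field.
// Number(amount) would silently lose precision above 2^53 - 1.
function formatAmount(amount, decimals) {
  const raw = BigInt(amount) // exact integer, no precision loss
  const base = 10n ** BigInt(decimals)
  const whole = raw / base
  const frac = (raw % base).toString().padStart(decimals, '0')
  return `${whole}.${frac}`
}

console.log(formatAmount('123456789012345678901', 18)) // → 123.456789012345678901
```
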
+ +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. 
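Because every response wraps its results in a top-level `data` array, a small defensive parser can keep the empty-result case (valid, no records) distinct from a malformed response. A minimal sketch:

```javascript
// Token API responses wrap results in a top-level `data` array, even for
// single items. An empty array means "no matching records" — a valid
// result, not an error.
function parseTokenApiResponse(body) {
  if (!Array.isArray(body?.data)) {
    throw new Error('Unexpected response shape: missing "data" array')
  }
  return body.data
}

console.log(parseTokenApiResponse({ data: [] })) // → [] (no records, not an error)
```
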
diff --git a/website/src/pages/ru/token-api/mcp/claude.mdx b/website/src/pages/ru/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..25a29164f8cb --- /dev/null +++ b/website/src/pages/ru/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Предварительные требования + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Конфигурация + +Create or edit your `claude_desktop_config.json` file. + +> **Settings** > **Developer** > **Edit Config** + +- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json` +- Windows: `%APPDATA%\Claude\claude_desktop_config.json` +- Linux: `~/.config/Claude/claude_desktop_config.json` + +```json label="claude_desktop_config.json" +{ + "mcpServers": { + "token-api": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option.
+ +### ENOENT + +![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. + +> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details. diff --git a/website/src/pages/ru/token-api/mcp/cline.mdx b/website/src/pages/ru/token-api/mcp/cline.mdx new file mode 100644 index 000000000000..374877608d17 --- /dev/null +++ b/website/src/pages/ru/token-api/mcp/cline.mdx @@ -0,0 +1,52 @@ +--- +title: Using Cline to Access the Token API via MCP +sidebarTitle: Cline +--- + +## Предварительные требования + +- [Cline](https://cline.bot/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png) + +## Конфигурация + +Create or edit your `cline_mcp_settings.json` file.
+ +> **MCP Servers** > **Installed** > **Configure MCP Servers** + +```json label="cline_mcp_settings.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png) + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png) + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable. diff --git a/website/src/pages/ru/token-api/mcp/cursor.mdx b/website/src/pages/ru/token-api/mcp/cursor.mdx new file mode 100644 index 000000000000..5dc411608825 --- /dev/null +++ b/website/src/pages/ru/token-api/mcp/cursor.mdx @@ -0,0 +1,50 @@ +--- +title: Using Cursor to Access the Token API via MCP +sidebarTitle: Cursor +--- + +## Предварительные требования + +- [Cursor](https://www.cursor.com/) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+ +![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png) + +## Конфигурация + +Create or edit your `~/.cursor/mcp.json` file. + +> **Cursor Settings** > **MCP** > **Add new global MCP Server** + +```json label="mcp.json" +{ + "mcpServers": { + "mcp-pinax": { + "command": "npx", + "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"], + "env": { + "ACCESS_TOKEN": "" + } + } + } +} +``` + +## Troubleshooting + +![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png) + +To enable logs for the MCP, use the `--verbose true` option. + +### ENOENT + +Try to use the full path of the command instead: + +- Run `which npx` or `which bunx` to get the path of the command. +- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`). + +### Server disconnected + +Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/ru/token-api/monitoring/get-health.mdx b/website/src/pages/ru/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/ru/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/ru/token-api/monitoring/get-networks.mdx b/website/src/pages/ru/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/ru/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/ru/token-api/monitoring/get-version.mdx b/website/src/pages/ru/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/ru/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/ru/token-api/quick-start.mdx b/website/src/pages/ru/token-api/quick-start.mdx new file mode 100644 index 000000000000..a878bea36a20 --- /dev/null +++ b/website/src/pages/ru/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Быстрый старт +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Предварительные требования + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+ +```bash +curl --request GET \ + --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \ + --header 'Accept: application/json' \ + --header 'Authorization: Bearer ' +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL. + +## Troubleshooting + +If the API call fails, try printing out the full response object for additional error details. For example: + +```js label="index.js" +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => { + console.log('Status Code:', response.status) + return response.json() + }) + .then((data) => console.log(data)) + .catch((err) => console.error('Error:', err)) +``` diff --git a/website/src/pages/sv/about.mdx b/website/src/pages/sv/about.mdx index 90c63c0f036d..8f3ae9f1a8e7 100644 --- a/website/src/pages/sv/about.mdx +++ b/website/src/pages/sv/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs.
Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. + +#### Specifics + +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
![En grafik som förklarar hur The Graf använder Graf Node för att servera frågor till datakonsumenter](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ Följande steg följs: 1. En dapp lägger till data i Ethereum genom en transaktion på ett smart kontrakt. 2. Det smarta kontraktet sänder ut en eller flera händelser under bearbetningen av transaktionen. -3. Graf Node skannar kontinuerligt Ethereum efter nya block och den data för din subgraf de kan innehålla. -4. Graf Node hittar Ethereum-händelser för din subgraf i dessa block och kör de kartläggande hanterarna du tillhandahållit. Kartläggningen är en WASM-modul som skapar eller uppdaterar de dataenheter som Graph Node lagrar som svar på Ethereum-händelser. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. Dappen frågar Graph Node om data som indexerats från blockkedjan med hjälp av nodens [GraphQL-slutpunkt](https://graphql.org/learn/). Graph Node översätter i sin tur GraphQL-frågorna till frågor för sin underliggande datalagring för att hämta dessa data, och använder lagrets indexeringsegenskaper. Dappen visar dessa data i ett användarvänligt gränssnitt för slutanvändare, som de använder för att utfärda nya transaktioner på Ethereum. Cykeln upprepas. ## Nästa steg -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx index a3162cf19888..aba7e13387a4 100644 --- a/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/sv/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Säkerhet ärvt från Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph gemenskapen beslutade att gå vidare med Arbitrum förra året efter resultatet av diskussionen [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). 
@@ -39,7 +39,7 @@ För att dra fördel av att använda The Graph på L2, använd den här rullgard ![Dropdown-väljare för att växla Arbitrum](/img/arbitrum-screenshot-toggle.png) -## Som subgrafutvecklare, datakonsument, indexerare, curator eller delegator, vad behöver jag göra nu? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Allt har testats noggrant och en beredskapsplan finns på plats för att säkerställa en säker och sömlös övergång. Detaljer finns [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx index b158efaed6ff..272fa705dfe5 100644 --- a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con L2 Överföringsverktygen använder Arbitrums nativa mekanism för att skicka meddelanden från L1 till L2. 
Denna mekanism kallas en "retryable ticket" och används av alla nativa token-broar, inklusive Arbitrum GRT-broen. Du kan läsa mer om retryable tickets i [Arbitrums dokumentation](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -När du överför dina tillgångar (subgraf, insats, delegation eller kurering) till L2 skickas ett meddelande genom Arbitrum GRT-broen, vilket skapar en retryable ticket i L2. Överföringsverktyget inkluderar ett visst ETH-värde i transaktionen, som används för att 1) betala för att skapa biljetten och 2) betala för gasen för att utföra biljetten i L2. Men eftersom gaspriserna kan variera fram till att biljetten är redo att utföras i L2 kan det hända att detta automatiska utförsel försöket misslyckas. När det händer kommer Arbitrum-broen att behålla retryable ticket i livet i upp till 7 dagar, och vem som helst kan försöka "inlösa" biljetten (vilket kräver en plånbok med en viss mängd ETH broad till Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -Detta är vad vi kallar "Bekräfta"-steget i alla överföringsverktygen - det kommer att köras automatiskt i de flesta fall, eftersom den automatiska utförandet oftast är framgångsrikt, men det är viktigt att du kontrollerar att det gick igenom. 
Om det inte lyckas och det inte finns några framgångsrika försök på 7 dagar kommer Arbitrum-broen att kasta biljetten, och dina tillgångar (subgraf, insats, delegation eller kurering) kommer att gå förlorade och kan inte återvinnas. The Graphs kärnutvecklare har ett övervakningssystem på plats för att upptäcka dessa situationer och försöka lösa biljetterna innan det är för sent, men det är i slutändan ditt ansvar att se till att din överföring är klar i tid. Om du har svårt att bekräfta din transaktion, kontakta oss via [detta formulär](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms), och kärnutvecklarna kommer att vara där för att hjälpa dig. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### Jag startade min överföring av delegation/insats/kurering, och jag är osäker på om den lyckades komma till L2, hur kan jag bekräfta att den överfördes korrekt? @@ -36,43 +36,43 @@ Om du har L1-transaktionshashen (som du kan hitta genom att titta på de senaste ## Subgraf Överföring -### Hur överför jag min subgraf? +### How do I transfer my Subgraph?
-För att överföra din subgraf måste du slutföra följande steg: +To transfer your Subgraph, you will need to complete the following steps: 1. Initiera överföringen på Ethereum huvudnätet 2. Vänta 20 minuter på bekräftelse -3. Bekräfta subgraföverföringen på Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Slutför publiceringen av subgraf på Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Uppdatera fråge-URL (rekommenderas) -\*Observera att du måste bekräfta överföringen inom 7 dagar, annars kan din subgraf gå förlorad. I de flesta fall kommer detta steg att köras automatiskt, men en manuell bekräftelse kan behövas om det finns en gasprisspike på Arbitrum. Om det uppstår några problem under denna process finns det resurser för att hjälpa: kontakta support på support@thegraph.com eller på [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Var ska jag initiera min överföring från? -Du kan initiera din överföring från [Subgraph Studio](https://thegraph.com/studio/), [Utforskaren,](https://thegraph.com/explorer) eller från vilken som helst subgrafsdetaljsida. Klicka på knappen "Överför subgraf" på subgrafsdetaljsidan för att starta överföringen. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### Hur länge måste jag vänta tills min subgraf överförs? +### How long do I need to wait until my Subgraph is transferred? Överföringstiden tar ungefär 20 minuter.
Arbitrum-broen arbetar i bakgrunden för att slutföra broöverföringen automatiskt. I vissa fall kan gasavgifterna öka, och du måste bekräfta transaktionen igen. -### Kommer min subgraf fortfarande vara sökbar efter att jag har överfört den till L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Din subgraf kommer endast vara sökbar på det nätverk där den är publicerad. Till exempel, om din subgraf är på Arbitrum One, kan du endast hitta den i Utforskaren på Arbitrum One och kommer inte att kunna hitta den på Ethereum. Se till att du har valt Arbitrum One i nätverksväxlaren högst upp på sidan för att säkerställa att du är på rätt nätverk.  Efter överföringen kommer L1-subgrafen att visas som föråldrad. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Måste min subgraf vara publicerad för att kunna överföra den? +### Does my Subgraph need to be published to transfer it? -För att dra nytta av subgraföverföringsverktyget måste din subgraf redan vara publicerad på Ethereum huvudnät och måste ha något kureringssignal ägt av plånboken som äger subgrafen. Om din subgraf inte är publicerad rekommenderas det att du helt enkelt publicerar direkt på Arbitrum One - de associerade gasavgifterna kommer att vara betydligt lägre. Om du vill överföra en publicerad subgraf men ägarplånboken inte har kuraterat något signal på den kan du signalera en liten mängd (t.ex. 1 GRT) från den plånboken; se till att välja "automigrering" signal. 
+To take advantage of the Subgraph transfer tool, your Subgraph must already be published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended that you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### Vad händer med Ethereum huvudnätversionen av min subgraf efter att jag har överfört till Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? -Efter att ha överfört din subgraf till Arbitrum kommer Ethereum huvudnätversionen att föråldras. Vi rekommenderar att du uppdaterar din fråge-URL inom 48 timmar. Det finns dock en nådperiod som gör att din huvudnät-URL fungerar så att stöd från tredjeparts-dappar kan uppdateras. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### Behöver jag också publicera om på Arbitrum efter överföringen? @@ -80,21 +80,21 @@ Efter de 20 minuters överföringsfönstret måste du bekräfta överföringen m ### Kommer min endpunkt att ha nertid under ompubliceringen? -Det är osannolikt, men det är möjligt att uppleva en kort nertid beroende på vilka indexeringar som stöder subgrafen på L1 och om de fortsätter att indexera den tills subgrafen är fullt stödd på L2. +It is unlikely, but it is possible to experience brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
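The FAQ above recommends updating your query URL within 48 hours of the transfer, during the grace period that keeps the mainnet URL working. A hedged sketch of what that switch looks like in client code — the gateway URL shape matches the pattern used elsewhere in these docs, the L1 host is an assumption, and the API key and Subgraph IDs are placeholders:

```python
def gateway_url(host: str, api_key: str, subgraph_id: str) -> str:
    # Both the L1 and L2 gateway query URLs share this shape.
    return f"https://{host}/api/{api_key}/subgraphs/id/{subgraph_id}"

# Hypothetical values; substitute your own API key and Subgraph IDs.
OLD_URL = gateway_url("gateway.thegraph.com", "<api-key>", "<l1-subgraph-id>")
NEW_URL = gateway_url("arbitrum-gateway.thegraph.com", "<api-key>", "<l2-subgraph-id>")

# Point your dapp at NEW_URL; OLD_URL keeps working during the grace
# period, but should be retired within roughly 48 hours of the transfer.
print(NEW_URL)
```

The only change on the client side is the endpoint string, so the swap can usually be shipped as a configuration update rather than a code change.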
### Är publicering och versionering densamma på L2 som på Ethereum huvudnätet? -Ja. Välj Arbitrum One som ditt publicerade nätverk när du publicerar i Subgraph Studio. I studion kommer den senaste ändpunkt att vara tillgänglig, som pekar till den senaste uppdaterade versionen av subgrafen. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available, which points to the most recently updated version of the Subgraph. -### Kommer min subgrafs kurering att flyttas med min subgraf? +### Will my Subgraph's curation move with my Subgraph? -Om du har valt automatisk migreringssignal kommer 100% av din egen kurering att flyttas med din subgraf till Arbitrum One. All subgrafens kureringssignal kommer att konverteras till GRT vid överföringstillfället, och GRT som motsvarar din kureringssignal kommer att användas för att prägla signal på L2-subgrafen. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Andra kuratorer kan välja att ta tillbaka sin del av GRT eller också överföra den till L2 för att prägla signal på samma subgraf. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Kan jag flytta min subgraf tillbaka till Ethereum huvudnätet efter överföringen? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -När den är överförd kommer din Ethereum huvudnätversion av denna subgraf att vara föråldrad. Om du vill flytta tillbaka till huvudnätet måste du omimplementera och publicera på huvudnätet igen. Dock avråds starkt från att flytta tillbaka till Ethereum huvudnätet eftersom indexbelöningar till sist kommer att fördelas helt på Arbitrum One.
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Varför behöver jag bridged ETH för att slutföra min överföring? @@ -206,19 +206,19 @@ För att överföra din kurering måste du följa följande steg: \*Om det behövs - dvs. du använder en kontraktadress. -### Hur vet jag om den subgraph jag har kuraterat har flyttats till L2? +### How will I know if the Subgraph I curated has moved to L2? -När du tittar på sidan med detaljer om subgraphen kommer en banner att meddela dig att denna subgraph har flyttats. Du kan följa uppmaningen för att överföra din kurering. Du kan också hitta denna information på sidan med detaljer om subgraphen som har flyttat. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### Vad händer om jag inte vill flytta min kurering till L2? -När en subgraph avvecklas har du möjlighet att ta tillbaka din signal. På samma sätt, om en subgraph har flyttats till L2, kan du välja att ta tillbaka din signal på Ethereum huvudnät eller skicka signalen till L2. +When a Subgraph is deprecated, you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal on Ethereum mainnet or send the signal to L2. ### Hur vet jag att min kurering har överförts framgångsrikt? Signaldetaljer kommer att vara tillgängliga via Explorer ungefär 20 minuter efter att L2-överföringsverktyget har initierats. -### Kan jag överföra min kurering på fler än en subgraph samtidigt? +### Can I transfer my curation on more than one Subgraph at a time?
Det finns för närvarande ingen möjlighet till bulköverföring. @@ -266,7 +266,7 @@ Det tar ungefär 20 minuter för L2-överföringsverktyget att slutföra överf ### Måste jag indexer på Arbitrum innan jag överför min insats? -Du kan effektivt överföra din insats först innan du sätter upp indexering, men du kommer inte att kunna hämta några belöningar på L2 förrän du allokerar till subgrapher på L2, indexerar dem och presenterar POI. +You can effectively transfer your stake before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Kan Delegators flytta sin delegation innan jag flyttar min indexinsats? diff --git a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx index 4dde699e5079..9cdb196e9c09 100644 --- a/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/sv/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph har gjort det enkelt att flytta till L2 på Arbitrum One. För varje p Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## Så här överför du din subgraf till Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Fördelar med att överföra dina subgrafer +## Benefits of transferring your Subgraphs The Graphs community och kärnutvecklare har [förberett sig](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) för att flytta till Arbitrum under det senaste året. Arbitrum, en blockkedja av lager 2 eller "L2", ärver säkerheten från Ethereum men ger drastiskt lägre gasavgifter.
-När du publicerar eller uppgraderar din subgraf till The Graph Network, interagerar du med smarta kontrakt på protokollet och detta kräver att du betalar för gas med ETH. Genom att flytta dina subgrafer till Arbitrum kommer alla framtida uppdateringar av din subgraf att kräva mycket lägre gasavgifter. De lägre avgifterna, och det faktum att curation bonding-kurvorna på L2 är platta, gör det också lättare för andra curatorer att kurera på din subgraf, vilket ökar belöningarna för Indexers på din subgraf. Denna miljö med lägre kostnader gör det också billigare för indexerare att indexera och betjäna din subgraf. Indexeringsbelöningar kommer att öka på Arbitrum och minska på Ethereums mainnet under de kommande månaderna, så fler och fler indexerare kommer att överföra sin andel och sätta upp sin verksamhet på L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Förstå vad som händer med signal, din L1 subgraf och frågewebbadresser +## Understanding what happens with signal, your L1 Subgraph and query URLs -Att överföra en subgraf till Arbitrum använder Arbitrum GRT-bryggan, som i sin tur använder den inhemska Arbitrum-bryggan, för att skicka subgrafen till L2. 
"Överföringen" kommer att fasa ut subgrafen på mainnet och skicka informationen för att återskapa subgrafen på L2 med hjälp av bryggan. Den kommer också att inkludera subgrafägarens signalerade GRT, som måste vara mer än noll för att bryggan ska acceptera överföringen. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -När du väljer att överföra subgrafen kommer detta att konvertera hela subgrafens kurationssignal till GRT. Detta motsvarar att "avskriva" subgrafen på mainnet. GRT som motsvarar din kuration kommer att skickas till L2 tillsammans med subgrafen, där de kommer att användas för att skapa signaler å dina vägnar. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Andra kuratorer kan välja om de vill ta tillbaka sin del av GRT eller också överföra den till L2 för att få en signal på samma subgraf. Om en subgrafägare inte överför sin subgraf till L2 och manuellt fasar ut den via ett kontraktsanrop, kommer Curatorer att meddelas och kommer att kunna dra tillbaka sin curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. 
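The guide above notes two preconditions for the transfer: the owner's signaled GRT must be more than zero for the bridge to accept it, and the receiving wallet must be able to transact on Arbitrum. These can be summarized as a small pre-flight check. This is an illustrative sketch, not part of the actual transfer tool; the function name is invented:

```python
def can_start_transfer(owner_signal_grt: float, l2_wallet_can_transact: bool) -> bool:
    """Pre-flight check before initiating an L1 -> L2 Subgraph transfer.
    The bridge rejects transfers whose owner has zero signaled GRT, and the
    receiving wallet must be able to make transactions on Arbitrum
    (an EOA, or a smart contract wallet that exists on L2)."""
    return owner_signal_grt > 0 and l2_wallet_can_transact

# Owner with 1 GRT of auto-migrating signal and a regular (EOA) wallet:
print(can_start_transfer(1.0, True))   # True
# Owner who never signaled on their own Subgraph:
print(can_start_transfer(0.0, True))   # False
```

If the check fails on the signal side, the FAQ's suggestion applies: curate a small amount (for example 1 GRT) from the owner account before starting.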
-Så snart subgrafen har överförts, eftersom all kuration konverteras till GRT, kommer indexerare inte längre att få belöningar för att indexera subgrafen. Det kommer dock att finnas indexerare som kommer 1) att fortsätta visa överförda subgrafer i 24 timmar och 2) omedelbart börja indexera subgrafen på L2. Eftersom dessa indexerare redan har subgrafen indexerad, borde det inte finnas något behov av att vänta på att subgrafen ska synkroniseras, och det kommer att vara möjligt att fråga L2-subgrafen nästan omedelbart. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Förfrågningar till L2-subgrafen kommer att behöva göras till en annan URL (på `arbitrum-gateway.thegraph.com`), men L1-URL:n fortsätter att fungera i minst 48 timmar. Efter det kommer L1-gatewayen att vidarebefordra frågor till L2-gatewayen (under en tid), men detta kommer att lägga till latens så det rekommenderas att byta alla dina frågor till den nya URL:en så snart som möjligt. +Queries to the L2 Subgraph will need to be made to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency, so it is recommended to switch all your queries to the new URL as soon as possible. ## Välja din L2 plånbok -När du publicerade din subgraf på mainnet använde du en ansluten plånbok för att skapa subgrafen, och denna plånbok äger NFT som representerar denna subgraf och låter dig publicera uppdateringar.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -När du överför subgrafen till Arbitrum kan du välja en annan plånbok som kommer att äga denna subgraf NFT på L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. Om du använder en "vanlig" plånbok som MetaMask (ett externt ägt konto eller EOA, d.v.s. en plånbok som inte är ett smart kontrakt), så är detta valfritt och det rekommenderas att behålla samma ägaradress som i L1. -Om du använder en smart kontraktsplånbok, som en multisig (t.ex. ett kassaskåp), är det obligatoriskt att välja en annan L2-plånboksadress, eftersom det är mest troligt att det här kontot bara finns på mainnet och att du inte kommer att kunna göra transaktioner på Arbitrum med denna plånbok. Om du vill fortsätta använda en smart kontraktsplånbok eller multisig, skapa en ny plånbok på Arbitrum och använd dess adress som L2-ägare till din subgraf. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**Det är mycket viktigt att använda en plånboksadress som du kontrollerar, och som kan göra transaktioner på Arbitrum. Annars kommer subgrafen att gå förlorad och kan inte återställas.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. 
Otherwise, the Subgraph will be lost and cannot be recovered.** ## Förbereder för överföringen: överbrygga lite ETH -Att överföra subgrafen innebär att man skickar en transaktion genom bryggan och sedan utför en annan transaktion på Arbitrum. Den första transaktionen använder ETH på huvudnätet och inkluderar en del ETH för att betala för gas när meddelandet tas emot på L2. Men om denna gas är otillräcklig måste du göra om transaktionen och betala för gasen direkt på L2 (detta är "Steg 3: Bekräfta överföringen" nedan). Detta steg **måste utföras inom 7 dagar efter att överföringen påbörjats**. Dessutom kommer den andra transaktionen ("Steg 4: Avsluta överföringen på L2") att göras direkt på Arbitrum. Av dessa skäl behöver du lite ETH på en Arbitrum-plånbok. Om du använder ett multisig- eller smart kontraktskonto måste ETH: en finnas i den vanliga (EOA) plånboken som du använder för att utföra transaktionerna, inte på själva multisig plånboken. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. 
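The timing constraint described above is strict: the confirmation step must happen within 7 days of starting the transfer, and a manual retry is only needed if the auto-execution on L2 fails (for example, during a gas price spike). A small illustrative sketch of that rule — the helpers are invented for this example and are not part of any Graph tooling; only the 7-day window comes from the guide:

```python
from datetime import datetime, timedelta

# Retryable tickets on the Arbitrum bridge expire after 7 days.
RETRYABLE_TICKET_WINDOW = timedelta(days=7)

def confirmation_deadline(transfer_started_at: datetime) -> datetime:
    """Latest time by which Step 3 ("Confirming the transfer") must be
    executed on Arbitrum before the ticket expires and the transfer is lost."""
    return transfer_started_at + RETRYABLE_TICKET_WINDOW

def needs_manual_retry(auto_execution_succeeded: bool) -> bool:
    # A manual confirmation is only needed when the bridged L2 gas was
    # insufficient and the automatic execution failed.
    return not auto_execution_succeeded

started = datetime(2024, 1, 1, 12, 0)
print(confirmation_deadline(started))  # 2024-01-08 12:00:00
```

In practice the transfer tool surfaces this as the "Confirm transfer" button; the sketch only makes the deadline arithmetic explicit.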
Du kan köpa ETH på vissa börser och ta ut den direkt till Arbitrum, eller så kan du använda Arbitrum-bryggan för att skicka ETH från en mainnet-plånbok till L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Eftersom gasavgifterna på Arbitrum är lägre bör du bara behöva en liten summa. Det rekommenderas att du börjar vid en låg tröskel (0.t.ex. 01 ETH) för att din transaktion ska godkännas. -## Hitta subgrafen Överföringsverktyg +## Finding the Subgraph Transfer Tool -Du kan hitta L2 Överföringsverktyg när du tittar på din subgrafs sida på Subgraf Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![Överföringsverktyg](/img/L2-transfer-tool1.png) -Den är också tillgänglig på Explorer om du är ansluten till plånboken som äger en subgraf och på den subgrafens sida på Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Överför till L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Genom att klicka på knappen Överför till L2 öppnas överföringsverktyget d ## Steg 1: Starta överföringen -Innan du påbörjar överföringen måste du bestämma vilken adress som ska äga subgrafen på L2 (se "Välja din L2 plånbok" ovan), och det rekommenderas starkt att ha lite ETH för gas som redan är överbryggad på Arbitrum (se "Förbereda för överföringen: brygga" lite ETH" ovan). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
+Also, please note that transferring the Subgraph requires a nonzero amount of signal on the Subgraph from the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -Efter att ha öppnat överföringsverktyget kommer du att kunna ange L2-plånboksadressen i fältet "Mottagande plånboksadress" - **se till att du har angett rätt adress här**. Om du klickar på Transfer Subgraph kommer du att uppmana dig att utföra transaktionen på din plånbok (observera att ett ETH-värde ingår för att betala för L2-gas); detta kommer att initiera överföringen och fasa ut din L1-subgraf (se "Förstå vad som händer med signal, din L1-subgraf och sökadresser" ovan för mer information om vad som händer bakom kulisserna). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -Om du utför det här steget, **se till att du fortsätter tills du har slutfört steg 3 om mindre än 7 dagar, annars försvinner subgrafen och din signal-GRT.** Detta beror på hur L1-L2-meddelanden fungerar på Arbitrum: meddelanden som skickas genom bryggan är "omförsökbara biljetter" som måste utföras inom 7 dagar, och det första utförandet kan behöva ett nytt försök om det finns toppar i gaspriset på Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Steg 2: Väntar på att subgrafen ska komma till L2 +## Step 2: Waiting for the Subgraph to get to L2 -När du har startat överföringen måste meddelandet som skickar din L1 subgraf till L2 spridas genom Arbitrum bryggan. Detta tar cirka 20 minuter (bryggan väntar på att huvudnäts blocket som innehåller transaktionen är "säkert" från potentiella kedjereorganisationer). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). När denna väntetid är över kommer Arbitrum att försöka utföra överföringen automatiskt på L2 kontrakten. @@ -80,7 +80,7 @@ När denna väntetid är över kommer Arbitrum att försöka utföra överförin ## Steg 3: Bekräfta överföringen -I de flesta fall kommer detta steg att utföras automatiskt eftersom L2-gasen som ingår i steg 1 borde vara tillräcklig för att utföra transaktionen som tar emot subgrafen på Arbitrum-kontrakten. I vissa fall är det dock möjligt att en topp i gaspriserna på Arbitrum gör att denna autoexekvering misslyckas. I det här fallet kommer "biljetten" som skickar din subgraf till L2 att vara vilande och kräver ett nytt försök inom 7 dagar. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. 
In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbitrum, byta ditt plånboksnätverk till Arbitrum och klicka på "Bekräfta överföring" för att försöka genomföra transaktionen igen. @@ -88,33 +88,33 @@ Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbi ## Steg 4: Avsluta överföringen på L2 -Vid det här laget har din subgraf och GRT tagits emot på Arbitrum, men subgrafen är inte publicerad ännu. Du måste ansluta med L2 plånboken som du valde som mottagande plånbok, byta ditt plånboksnätverk till Arbitrum och klicka på "Publicera subgraf" +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publicera subgrafen](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Vänta på att subgrafen ska publiceras](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -Detta kommer att publicera subgrafen så att indexerare som är verksamma på Arbitrum kan börja servera den. Det kommer också att skapa kurations signaler med hjälp av GRT som överfördes från L1. +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Steg 5: Uppdatera sökfrågans URL -Din subgraf har överförts till Arbitrum! För att fråga subgrafen kommer den nya webbadressen att vara: +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Observera att subgraf-ID: t på Arbitrum kommer att vara ett annat än det du hade på mainnet, men du kan alltid hitta det på Explorer eller Studio. Som nämnts ovan (se "Förstå vad som händer med signal, dina L1-subgraf- och sökwebbadresser") kommer den gamla L1-URL: n att stödjas under en kort stund, men du bör byta dina frågor till den nya adressen så snart subgrafen har synkroniserats på L2. +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## Så här överför du din kuration till Arbitrum (L2) -## Förstå vad som händer med curation vid subgraf överföringar till L2 +## Understanding what happens to curation on Subgraph transfers to L2 -När ägaren av en subgraf överför en subgraf till Arbitrum, omvandlas all subgrafs signal till GRT samtidigt. Detta gäller för "auto-migrerad" signal, det vill säga signal som inte är specifik för en subgraf version eller utbyggnad men som följer den senaste versionen av en subgraf. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -Denna omvandling från signal till GRT är densamma som vad som skulle hända om subgrafägaren avskaffade subgrafen i L1.
När subgrafen föråldras eller överförs, "bränns" all curation-signal samtidigt (med hjälp av curation bonding-kurvan) och den resulterande GRT hålls av GNS smarta kontraktet (det är kontraktet som hanterar subgrafuppgraderingar och automatisk migrerad signal). Varje kurator i det stycket har därför ett anspråk på den GRT som är proportionell mot antalet aktier de hade för stycket. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the number of shares they had for the Subgraph. -En bråkdel av dessa BRT som motsvarar subgrafägaren skickas till L2 tillsammans med subgrafen. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -Vid denna tidpunkt kommer den kurerade BRT inte att samla på sig några fler frågeavgifter, så kuratorer kan välja att dra tillbaka sin BRT eller överföra den till samma subgraf på L2, där den kan användas för att skapa en ny kurationssignal. Det är ingen brådska att göra detta eftersom BRT kan hjälpa till på obestämd tid och alla får ett belopp som är proportionellt mot sina aktier, oavsett när de gör det. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely, and everybody gets an amount proportional to their shares, irrespective of when they do it.
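The proportional claim described in this section can be illustrated numerically. This is a simplified model of the share accounting, not the actual GNS contract logic, and the figures are invented for the example:

```python
def curator_grt_claim(curator_shares: float, total_shares: float, burned_grt: float) -> float:
    """GRT a Curator can withdraw (or send to L2 to mint new signal) after
    all of a Subgraph's curation signal is burned to GRT on deprecation or
    transfer. Each Curator's claim is proportional to their share of the
    Subgraph's outstanding curation shares."""
    if total_shares <= 0:
        raise ValueError("no curation shares outstanding")
    return burned_grt * curator_shares / total_shares

# Hypothetical numbers: burning all signal yields 10,000 GRT, and a Curator
# holds 250 of the 1,000 outstanding shares, so they can claim 2,500 GRT.
print(curator_grt_claim(250, 1_000, 10_000))  # 2500.0
```

Because the split is purely proportional, the payout is the same whenever the Curator withdraws, which is why the guide notes there is no rush to act.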
ett kassaskå Innan du påbörjar överföringen måste du bestämma vilken adress som ska äga kurationen på L2 (se "Välja din L2-plånbok" ovan), och det rekommenderas att ha en del ETH för gas som redan är överbryggad på Arbitrum ifall du behöver försöka utföra exekveringen av meddelande på L2. Du kan köpa ETH på vissa börser och ta ut den direkt till Arbitrum, eller så kan du använda Arbitrum-bryggan för att skicka ETH från en mainnet-plånbok till L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - eftersom gasavgifterna på Arbitrum är så låga ska du bara behöva en liten summa, t.ex. 0,01 ETH kommer förmodligen att vara mer än tillräckligt. -Om en subgraf som du kurerar till har överförts till L2 kommer du att se ett meddelande i Explorer som talar om att du kurerar till en överförd subgraf. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -När du tittar på subgraf sidan kan du välja att dra tillbaka eller överföra kurationen. Genom att klicka på "Överför signal till Arbitrum" öppnas överföringsverktyget. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Överföringssignal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ Om så är fallet måste du ansluta med en L2 plånbok som har lite ETH på Arbi ## Dra tillbaka din kuration på L1 -Om du föredrar att inte skicka din GRT till L2, eller om du hellre vill överbrygga GRT manuellt, kan du ta tillbaka din kurerade BRT på L1. På bannern på subgraf sidan väljer du "Ta tillbaka signal" och bekräftar transaktionen; GRT kommer att skickas till din kurator adress. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. 
On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/sv/archived/sunrise.mdx b/website/src/pages/sv/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/sv/archived/sunrise.mdx +++ b/website/src/pages/sv/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. 
Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. 
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. 
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. 
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? @@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. 
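The FAQ answers above pin down two concrete fallback rules: the upgrade Indexer keeps serving a Subgraph until at least 3 other Indexers serve it consistently, and it drops a Subgraph that has not been queried in the last 30 days. As an illustrative sketch only (not The Graph's actual implementation — the function name and inputs are hypothetical), the decision logic reads:

```python
from datetime import datetime, timedelta

def upgrade_indexer_serves(other_indexers: int, last_queried: datetime, now: datetime) -> bool:
    """Sketch of the fallback rules from the FAQ above (illustrative only)."""
    if other_indexers >= 3:
        return False  # enough independent Indexers serve it; fallback no longer needed
    if now - last_queried > timedelta(days=30):
        return False  # no query demand for this Subgraph in the last 30 days
    return True

now = datetime(2024, 6, 1)
print(upgrade_indexer_serves(1, now - timedelta(days=5), now))   # True: fallback still needed
print(upgrade_indexer_serves(3, now - timedelta(days=5), now))   # False: 3 other Indexers serve it
print(upgrade_indexer_serves(0, now - timedelta(days=45), now))  # False: stale, unqueried Subgraph
```

The point of the sketch is that both conditions are about demand and redundancy, not rewards — consistent with the answer that the upgrade Indexer collects no indexing rewards.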
diff --git a/website/src/pages/sv/global.json b/website/src/pages/sv/global.json index 3793fbf29d78..20aef5782977 100644 --- a/website/src/pages/sv/global.json +++ b/website/src/pages/sv/global.json @@ -6,6 +6,7 @@ "subgraphs": "Subgrafer", "substreams": "Underströmmar", "sps": "Substreams-Powered Subgraphs", + "tokenApi": "Token API", "indexing": "Indexing", "resources": "Resources", "archived": "Archived" @@ -24,9 +25,51 @@ "linkToThisSection": "Link to this section" }, "content": { - "note": "Note", + "callout": { + "note": "Note", + "tip": "Tip", + "important": "Important", + "warning": "Warning", + "caution": "Caution" + }, "video": "Video" }, + "openApi": { + "parameters": { + "pathParameters": "Path Parameters", + "queryParameters": "Query Parameters", + "headerParameters": "Header Parameters", + "cookieParameters": "Cookie Parameters", + "parameter": "Parameter", + "description": "Beskrivning", + "value": "Value", + "required": "Required", + "deprecated": "Deprecated", + "defaultValue": "Default value", + "minimumValue": "Minimum value", + "maximumValue": "Maximum value", + "acceptedValues": "Accepted values", + "acceptedPattern": "Accepted pattern", + "format": "Format", + "serializationFormat": "Serialization format" + }, + "request": { + "label": "Test this endpoint", + "noCredentialsRequired": "No credentials required", + "send": "Send Request" + }, + "responses": { + "potentialResponses": "Potential Responses", + "status": "Status", + "description": "Beskrivning", + "liveResponse": "Live Response", + "example": "Exempel" + }, + "errors": { + "invalidApi": "Could not retrieve API {0}.", + "invalidOperation": "Could not retrieve operation {0} in API {1}." + } + }, "notFound": { "title": "Oops! 
This page was lost in space...", "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", diff --git a/website/src/pages/sv/index.json b/website/src/pages/sv/index.json index 778f5e81d7f9..23a97080ffc1 100644 --- a/website/src/pages/sv/index.json +++ b/website/src/pages/sv/index.json @@ -7,7 +7,7 @@ "cta2": "Build your first subgraph" }, "products": { - "title": "The Graph’s Products", + "title": "The Graph's Products", "description": "Choose a solution that fits your needs—interact with blockchain data your way.", "subgraphs": { "title": "Subgrafer", @@ -21,7 +21,7 @@ }, "sps": { "title": "Substreams-Powered Subgraphs", - "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "description": "Boost your subgraph's efficiency and scalability by using Substreams.", "cta": "Set up a Substreams-powered subgraph" }, "graphNode": { @@ -37,10 +37,86 @@ }, "supportedNetworks": { "title": "Nätverk som stöds", + "details": "Network Details", + "services": "Services", + "type": "Typ", + "protocol": "Protocol", + "identifier": "Identifier", + "chainId": "Chain ID", + "nativeCurrency": "Native Currency", + "docs": "Dokument", + "shortName": "Short Name", + "guides": "Guides", + "search": "Search networks", + "showTestnets": "Show Testnets", + "loading": "Loading...", + "infoTitle": "Info", + "infoText": "Boost your developer experience by enabling The Graph's indexing network.", + "infoLink": "Integrate new network", "description": { "base": "The Graph supports {0}. 
To add a new network, {1}", "networks": "networks", "completeThisForm": "complete this form" + }, + "emptySearch": { + "title": "No networks found", + "description": "No networks match your search for \"{0}\"", + "clearSearch": "Clear search", + "showTestnets": "Show testnets" + }, + "tableHeaders": { + "name": "Name", + "id": "ID", + "subgraphs": "Subgrafer", + "substreams": "Underströmmar", + "firehose": "Firehose", + "tokenapi": "Token API" + } + }, + "networkGuides": { + "evm": { + "subgraphQuickStart": { + "title": "Subgraph quick start", + "description": "Kickstart your journey into subgraph development." + }, + "substreams": { + "title": "Underströmmar", + "description": "Stream high-speed data for real-time indexing." + }, + "timeseries": { + "title": "Timeseries & Aggregations", + "description": "Learn to track metrics like daily volumes or user growth." + }, + "advancedFeatures": { + "title": "Advanced subgraph features", + "description": "Leverage features like custom data sources, event handlers, and topic filters." + }, + "billing": { + "title": "Fakturering", + "description": "Optimize costs and manage billing efficiently." + } + }, + "nonEvm": { + "officialDocs": { + "title": "Official Substreams docs", + "description": "Stream high-speed data for real-time indexing." + }, + "spsIntro": { + "title": "Substreams-powered Subgraphs Intro", + "description": "Supercharge your subgraph's efficiency with Substreams." + }, + "substreamsDev": { + "title": "Substreams.dev", + "description": "Access tutorials, templates, and documentation to build custom data modules." + }, + "substreamsStarter": { + "title": "Substreams starter", + "description": "Leverage this boilerplate to create your first Substreams module." + }, + "substreamsRepo": { + "title": "Substreams repo", + "description": "Study, contribute to, or customize the core Substreams framework." 
+ } } }, "guides": { @@ -80,15 +156,15 @@ "watchOnYouTube": "Watch on YouTube", "theGraphExplained": { "title": "The Graph Explained In 1 Minute", - "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video." }, "whatIsDelegating": { "title": "What is Delegating?", - "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph." }, "howToIndexSolana": { "title": "How to Index Solana with a Substreams-powered Subgraph", - "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases." } }, "time": { diff --git a/website/src/pages/sv/indexing/chain-integration-overview.mdx b/website/src/pages/sv/indexing/chain-integration-overview.mdx index 147468f7dc17..94f8e8dd42e5 100644 --- a/website/src/pages/sv/indexing/chain-integration-overview.mdx +++ b/website/src/pages/sv/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ Denna process är relaterad till Subgraf Data Service och gäller endast nya Sub ### 2. Vad händer om stöd för Firehose & Substreams kommer efter det att nätverket stöds på mainnet? -Detta skulle endast påverka protokollstödet för indexbelöningar på Substreams-drivna subgrafer. Den nya Firehose-implementeringen skulle behöva testas på testnätet, enligt den metodik som beskrivs för Fas 2 i detta GIP. 
På liknande sätt, förutsatt att implementationen är prestanda- och tillförlitlig, skulle en PR på [Funktionsstödsmatrisen](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) krävas (`Substreams data sources` Subgraf Feature), liksom en ny GIP för protokollstöd för indexbelöningar. Vem som helst kan skapa PR och GIP; Stiftelsen skulle hjälpa till med Rådets godkännande. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/sv/indexing/new-chain-integration.mdx b/website/src/pages/sv/indexing/new-chain-integration.mdx index c33a501eb77f..504940f98a6b 100644 --- a/website/src/pages/sv/indexing/new-chain-integration.mdx +++ b/website/src/pages/sv/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. 
Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex - `eth_getBlockByHash` - `net_version` - `eth_getTransactionReceipt`, i en JSON-RPC batch-begäran -- `trace_filter` *(limited tracing and optionally required for Graph Node)* +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ ### 2. Firehose Integration @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. 
Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graf Node-konfiguration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. [Klona Graf Node](https://github.com/graphprotocol/graph-node) @@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your ## Substreams-powered Subgraphs -For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/sv/indexing/overview.mdx b/website/src/pages/sv/indexing/overview.mdx
index 26ecf1330d60..3cbd4c66ccf5 100644
--- a/website/src/pages/sv/indexing/overview.mdx
+++ b/website/src/pages/sv/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexerare är nodoperatörer i The Graph Network som satsar Graph Tokens (GRT)

GRT som satsas i protokollet är föremål för en tiningperiod och kan drabbas av strykning om indexerare är skadliga och tillhandahåller felaktiga data till applikationer eller om de indexerar felaktigt. Indexerare tjänar också belöningar för delegerat satsning från Delegater, för att bidra till nätverket.

-Indexerare väljer subgrafer att indexera baserat på subgrafens kuratersignal, där Curators satsar GRT för att ange vilka subgrafer som är av hög kvalitet och bör prioriteras. Konsumenter (t.ex. applikationer) kan också ställa in parametrar för vilka indexerare som behandlar frågor för deras subgrafer och ange preferenser för pris på frågebetalning.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized.
Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. ## FAQ @@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT. **Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity. -**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network. +**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. ### How are indexing rewards distributed? -Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. 
**An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. 
Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? -Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. 
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. 
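The reward arithmetic described above — issuance split across Subgraphs in proportion to curation signal, then across Indexers by allocated stake, then between an Indexer and its Delegators by `indexingRewardCut` — can be sketched numerically. All figures and Subgraph names here are hypothetical; this is not protocol code:

```python
def subgraph_rewards(total_issuance: float, signal: dict) -> dict:
    """Split issuance across Subgraphs proportionally to curation signal."""
    total_signal = sum(signal.values())
    return {name: total_issuance * s / total_signal for name, s in signal.items()}

def indexer_share(subgraph_reward: float, my_stake: float, total_stake: float,
                  indexing_reward_cut: float):
    """Allocate by stake, then split Indexer vs. Delegators by indexingRewardCut."""
    reward = subgraph_reward * my_stake / total_stake
    indexer = reward * indexing_reward_cut
    return indexer, reward - indexer  # (Indexer share, Delegators' share)

# Hypothetical: 1000 GRT issuance, 60/40 signal split between two Subgraphs.
rewards = subgraph_rewards(1000.0, {"subgraph-a": 60.0, "subgraph-b": 40.0})
# An Indexer holding half the stake on subgraph-a with a 95% cut:
indexer, delegators = indexer_share(rewards["subgraph-a"], 50.0, 100.0, 0.95)
print(round(indexer, 2), round(delegators, 2))  # 285.0 15.0
```

This matches the `indexingRewardCut` description above: a 95% cut means the Indexer keeps 95% of its allocated rewards and Delegators split the remaining 5%.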
-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |

### What are some basic security precautions an Indexer should take?

@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making

## Infrastructure

-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. @@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graf Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server<br />
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | -| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | -| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graf Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. 
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer.
The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. #### Usage @@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules set [options] ...` - Set one or more indexing rules. -- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed. +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. - `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. @@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported #### Indexing rules -Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional.
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. -For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. 
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. 
(Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator, call `setOperator()` with the operator address. -8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
``` setDelegationParameters(950000, 600000, 500) ``` @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain.
This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/sv/indexing/supported-network-requirements.mdx b/website/src/pages/sv/indexing/supported-network-requirements.mdx index f7a4943afd1b..fde736d5e70d 100644 --- a/website/src/pages/sv/indexing/supported-network-requirements.mdx +++ b/website/src/pages/sv/indexing/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Nätverk | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Nätverk | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/sv/indexing/tap.mdx b/website/src/pages/sv/indexing/tap.mdx index d69cb7b5bc91..65582940a499 100644 --- a/website/src/pages/sv/indexing/tap.mdx +++ b/website/src/pages/sv/indexing/tap.mdx @@ -1,21 +1,21 @@ --- -title: TAP Migration Guide +title: GraphTally Guide --- -Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. +Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust. ## Översikt -[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: +GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: - Efficiently handles micropayments. - Adds a layer of consolidations to onchain transactions and costs. - Allows Indexers control of receipts and payments, guaranteeing payment for queries. - It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. -## Specifics +### Specifics -TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. +GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. 
This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. @@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed | Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | | Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | -### Krav +### Prerequisites -In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. 
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc 1. **Indexer Agent** - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). - - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs. 2. **Indexer Service** @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. 
# NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/sv/indexing/tooling/graph-node.mdx b/website/src/pages/sv/indexing/tooling/graph-node.mdx index e53a127b3fcd..0e6241f265fc 100644 --- a/website/src/pages/sv/indexing/tooling/graph-node.mdx +++ b/website/src/pages/sv/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graf Node --- -Graf Node är komponenten som indexerar subgraffar och gör den resulterande datan tillgänglig för förfrågan via en GraphQL API. Som sådan är den central för indexeringsstacken, och korrekt drift av Graph Node är avgörande för att driva en framgångsrik indexerare. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graf Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. 
Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL-databas -Huvudlagret för Graph Node, här lagras subgrafdata, liksom metadata om subgraffar och nätverksdata som är oberoende av subgraffar, som blockcache och eth_call-cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Nätverkskunder För att indexera ett nätverk behöver Graf Node åtkomst till en nätverksklient via ett EVM-kompatibelt JSON-RPC API. Denna RPC kan ansluta till en enda klient eller så kan det vara en mer komplex konfiguration som lastbalanserar över flera. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). 
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).

### IPFS-noder

-Metadata för distribution av subgraffar lagras på IPFS-nätverket. Graf Node har främst åtkomst till IPFS-noden under distributionen av subgraffar för att hämta subgrafens manifest och alla länkade filer. Nätverksindexerare behöver inte värd sin egen IPFS-nod. En IPFS-nod för nätverket är värd på https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

### Prometheus server för mätvärden

@@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit

När Graph Node är igång exponerar den följande portar:

-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |

> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.

## Avancerad konfiguration av Graf Node

-På sitt enklaste sätt kan Graph Node användas med en enda instans av Graph Node, en enda PostgreSQL-databas, en IPFS-nod och nätverksklienter som krävs av de subgrafer som ska indexeras.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.

This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.

@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:

#### Flera Grafnoder

-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes.
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Observera att flera Graph Nodes alla kan konfigureras att använda samma databas, som i sig kan skalas horisontellt via sharding. #### Regler för utplacering -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
Exempel på konfiguration av deployeringsregler:

@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
    "index_node_community_0",
@@ -167,11 +167,11 @@ Alla noder vars --node-id matchar reguljärt uttryck kommer att konfigureras fö

För de flesta användningsfall är en enda Postgres-databas tillräcklig för att stödja en graph-node-instans. När en graph-node-instans växer utöver en enda Postgres-databas är det möjligt att dela upp lagringen av graph-node-data över flera Postgres-databaser. Alla databaser tillsammans bildar lagringsutrymmet för graph-node-instansen. Varje individuell databas kallas en shard.

-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and replicas can also be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.

Sharding blir användbart när din befintliga databas inte kan hålla jämna steg med belastningen som Graph Node sätter på den och när det inte längre är möjligt att öka databasens storlek.

-> Det är generellt sett bättre att göra en enda databas så stor som möjligt innan man börjar med shards.
Ett undantag är när frågetrafiken är mycket ojämnt fördelad mellan subgrafer; i dessa situationer kan det hjälpa dramatiskt om högvolymsubgraferna hålls i en shard och allt annat i en annan, eftersom den konfigurationen gör det mer troligt att data för högvolymsubgraferna stannar i databasens interna cache och inte ersätts av data som inte behövs lika mycket från lågvolymsubgrafer.
+> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.

När det gäller att konfigurera anslutningar, börja med max_connections i postgresql.conf som är inställt på 400 (eller kanske till och med 200) och titta på Prometheus-metrarna store_connection_wait_time_ms och store_connection_checkout_count. Märkbara väntetider (något över 5 ms) är en indikation på att det finns för få anslutningar tillgängliga; höga väntetider beror också på att databasen är mycket upptagen (som hög CPU-belastning). Om databasen verkar annars stabil, indikerar höga väntetider att antalet anslutningar behöver ökas. I konfigurationen är det en övre gräns för hur många anslutningar varje graph-node-instans kan använda, och Graph Node kommer inte att hålla anslutningar öppna om det inte behöver dem.

@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"

#### Stöd för flera nätverk

-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process.
The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:

- Flera nätverk
- Flera leverantörer per nätverk (detta kan göra det möjligt att dela upp belastningen mellan leverantörer, och kan också möjliggöra konfiguration av fullständiga noder samt arkivnoder, där Graph Node föredrar billigare leverantörer om en viss arbetsbelastning tillåter det).

@@ -225,11 +225,11 @@ Användare som driver en skalad indexering med avancerad konfiguration kan dra n

### Hantera Graf Noder

-Med en körande Graph Node (eller Graph Nodes!) är utmaningen sedan att hantera distribuerade subgrafer över dessa noder. Graph Node erbjuder en rad verktyg för att hjälpa till med hanteringen av subgrafer.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.

#### Loggning

-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.

In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
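For illustration, the two logging variables mentioned above can be set in the environment (or in the Docker Compose file) before starting Graph Node; the values shown are examples:

```shell
# Example values: verbose debug-level logs plus GraphQL query timing output.
export GRAPH_LOG=debug
export GRAPH_LOG_QUERY_TIMING=gql
```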
@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Arbeta med undergrafer +### Working with Subgraphs #### Indexerings status API -Tillgänglig som standard på port 8030/graphql, exponerar indexeringstatus-API: en en rad metoder för att kontrollera indexeringstatus för olika subgrafer, kontrollera bevis för indexering, inspektera subgrafegenskaper och mer. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ Det finns tre separata delar av indexeringsprocessen: - Bearbeta händelser i rätt ordning med lämpliga hanterare (detta kan innebära att kedjan anropas för status och att data hämtas från lagret) - Skriva de resulterande data till butiken -Dessa stadier är pipelinerade (det vill säga de kan utföras parallellt), men de är beroende av varandra. När subgrafer är långsamma att indexera beror orsaken på den specifika subgrafgen. +These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Vanliga orsaker till indexeringslångsamhet: @@ -276,24 +276,24 @@ Vanliga orsaker till indexeringslångsamhet: - Leverantören själv faller bakom kedjehuvudet - Långsamhet vid hämtning av nya kvitton från leverantören vid kedjehuvudet -Subgrafindexeringsmetriker kan hjälpa till att diagnostisera grunden till indexeringens långsamhet. 
I vissa fall ligger problemet med subgrafgenen själv, men i andra fall kan förbättrade nätverksleverantörer, minskad databaskonflikt och andra konfigurationsförbättringar markant förbättra indexeringens prestanda.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.

-#### Undergrafer som misslyckats
+#### Failed Subgraphs

-Under indexering kan subgrafer misslyckas om de stöter på data som är oväntad, om någon komponent inte fungerar som förväntat eller om det finns något fel i händelsehanterare eller konfiguration. Det finns två allmänna typer av misslyckande:
+During indexing, Subgraphs might fail if they encounter unexpected data, some component not working as expected, or a bug in the event handlers or configuration. There are two general types of failure:

- Deterministiska fel: detta är fel som inte kommer att lösas med retries
- Icke-deterministiska fel: dessa kan bero på problem med leverantören eller något oväntat Graph Node-fel. När ett icke-deterministiskt fel inträffar kommer Graph Node att försöka igen med de felande hanterarna och backa över tid.

-I vissa fall kan ett misslyckande vara lösbart av indexören (till exempel om felet beror på att det inte finns rätt typ av leverantör, kommer att tillåta indexering att fortsätta om den nödvändiga leverantören läggs till). Men i andra fall krävs en ändring i subgrafkoden.
+In some cases a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Blockera och anropa cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. 
In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. Om en blockcache-inkonsekvens misstänks, som att en tx-kvitto saknar händelse: @@ -304,7 +304,7 @@ Om en blockcache-inkonsekvens misstänks, som att en tx-kvitto saknar händelse: #### Fråga frågor och fel -När en subgraf har indexeras kan indexörer förvänta sig att servera frågor via subgrafens dedikerade frågendpunkt. Om indexören hoppas på att betjäna en betydande mängd frågor rekommenderas en dedikerad frågenod, och vid mycket höga frågevolymer kan indexörer vilja konfigurera replikskivor så att frågor inte påverkar indexeringsprocessen. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. Men även med en dedikerad frågenod och repliker kan vissa frågor ta lång tid att utföra, och i vissa fall öka minnesanvändningen och negativt påverka frågetiden för andra användare. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analyserar frågor -Problematiska frågor dyker oftast upp på ett av två sätt. I vissa fall rapporterar användare själva att en viss fråga är långsam. I det fallet är utmaningen att diagnostisera orsaken till långsamheten - om det är ett generellt problem eller specifikt för den subgraf eller fråga. Och naturligtvis att lösa det om det är möjligt. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. 
And then of course to resolve it, if possible. I andra fall kan utlösaren vara hög minnesanvändning på en frågenod, i vilket fall utmaningen först är att identifiera frågan som orsakar problemet. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .
` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.

-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.

-#### Ta bort undergrafer
+#### Removing Subgraphs

> Detta är ny funktionalitet, som kommer att vara tillgänglig i Graf Node 0.29.x

-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
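As a sketch of the `graphman drop` invocation described above (the config path and deployment hash are placeholders; `graphman` must be pointed at the node's own configuration, and this only runs where graphman is installed):

```bash
# Placeholder values throughout — substitute your own config and deployment.
# A Subgraph name or a database namespace (sgdNNN) is accepted as well.
graphman --config /etc/graph-node/config.toml drop QmExampleDeploymentHash
```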
diff --git a/website/src/pages/sv/indexing/tooling/graphcast.mdx b/website/src/pages/sv/indexing/tooling/graphcast.mdx index 213029e1836b..56b93af13fc2 100644 --- a/website/src/pages/sv/indexing/tooling/graphcast.mdx +++ b/website/src/pages/sv/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ För närvarande avgörs kostnaden för att sända information till andra nätve Graphcast SDK (Utrustning för programvaruutveckling) gör det möjligt för utvecklare att bygga Radios, vilka är applikationer som drivs av gossipeffekt och som indexare kan köra för att tjäna ett visst syfte. Vi avser också att skapa några Radios (eller ge stöd åt andra utvecklare/team som önskar bygga Radios) för följande användningsområden: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Genomföra auktioner och koordinering för warp-synkronisering av delgrafer, delströmmar och Firehose-data från andra indexare. -- Självrapportering om aktiv frågeanalys, inklusive delgrafförfrågningsvolym, avgiftsvolym etc. -- Självrapportering om indexeringanalys, inklusive tid för delgrafindexering, gasavgifter för handler, påträffade indexeringsfel etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Självrapportering om stackinformation inklusive graph-node-version, Postgres-version, Ethereum-klientversion etc. 
### Läs mer diff --git a/website/src/pages/sv/resources/benefits.mdx b/website/src/pages/sv/resources/benefits.mdx index b3c5e957cb54..f227edf6f961 100644 --- a/website/src/pages/sv/resources/benefits.mdx +++ b/website/src/pages/sv/resources/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månatlig kostnad för server\* | $350 per månad | $0 | -| Kostnad för frågor | $0+ | $0 per month | -| Konstruktionstid | $400 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | 100,000 (Free Plan) | -| Kostnad per fråga | $0 | $0 | -| Infrastructure | Centraliserad | Decentraliserad | -| Geografisk redundans | $750+ per extra nod | Inkluderat | -| Drifttid | Varierande | 99.9%+ | -| Total Månadskostnad | $750+ | $0 | +| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | +| :---------------------------: | :-------------------------------------: | :-----------------------------------------------------------: | +| Månatlig kostnad för server\* | $350 per månad | $0 | +| Kostnad för frågor | $0+ | $0 per month | +| Konstruktionstid | $400 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | 100,000 (Free Plan) | +| Kostnad per fråga | $0 | $0 | +| Infrastructure | Centraliserad | Decentraliserad | +| Geografisk redundans | $750+ per extra nod | Inkluderat | +| Drifttid | Varierande | 99.9%+ | +| Total Månadskostnad | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månatlig kostnad för server\* | $350 per månad | $0 | -| Kostnad för frågor | $500 per månad | $120 per month | -| Konstruktionstid | $800 per månad | Ingen, inbyggd i nätverket 
med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | ~3,000,000 | -| Kostnad per fråga | $0 | $0.00004 | -| Infrastructure | Centraliserad | Decentraliserad | -| Kostnader för ingenjörsarbete | $200 per timme | Inkluderat | -| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | -| Drifttid | Varierande | 99.9%+ | -| Total Månadskostnad | $1,650+ | $120 | +| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | +| :---------------------------: | :----------------------------------------: | :-----------------------------------------------------------: | +| Månatlig kostnad för server\* | $350 per månad | $0 | +| Kostnad för frågor | $500 per månad | $120 per month | +| Konstruktionstid | $800 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | ~3,000,000 | +| Kostnad per fråga | $0 | $0.00004 | +| Infrastructure | Centraliserad | Decentraliserad | +| Kostnader för ingenjörsarbete | $200 per timme | Inkluderat | +| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | +| Drifttid | Varierande | 99.9%+ | +| Total Månadskostnad | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månatlig kostnad för server\* | $1100 per månad, per nod | $0 | -| Kostnad för frågor | $4000 | $1,200 per month | -| Antal noder som behövs | 10 | Ej tillämpligt | -| Konstruktionstid | $6,000 eller mer per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | ~30,000,000 | -| Kostnad per fråga | $0 | $0.00004 | -| Infrastructure | Centraliserad | Decentraliserad | -| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | -| Drifttid | Varierande | 99.9%+ | -| Total Månadskostnad | $11,000+ | $1,200 | +| Kostnadsjämförelse | Egen Värd | The Graph 
Nätverk | +| :---------------------------: | :-----------------------------------------: | :-----------------------------------------------------------: | +| Månatlig kostnad för server\* | $1100 per månad, per nod | $0 | +| Kostnad för frågor | $4000 | $1,200 per month | +| Antal noder som behövs | 10 | Ej tillämpligt | +| Konstruktionstid | $6,000 eller mer per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | ~30,000,000 | +| Kostnad per fråga | $0 | $0.00004 | +| Infrastructure | Centraliserad | Decentraliserad | +| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | +| Drifttid | Varierande | 99.9%+ | +| Total Månadskostnad | $11,000+ | $1,200 | \*inklusive kostnader för backup: $50-$100 per månad @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Att kurera signal på en subgraf är en valfri engångskostnad med noll nettokostnad (t.ex. $1k i signal kan kurera på en subgraf och senare dras tillbaka - med potential att tjäna avkastning i processen). 
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/sv/resources/glossary.mdx b/website/src/pages/sv/resources/glossary.mdx index dd930819456b..72ab2ba9333a 100644 --- a/website/src/pages/sv/resources/glossary.mdx +++ b/website/src/pages/sv/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Ordlista - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. 
The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. 
Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. 
There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. 
Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. 
When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Ordlista - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. 
Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. 
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network-related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.

-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.

-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).

diff --git a/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx
index e0f49fc2c71e..ed21fdc33744 100644
--- a/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/sv/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@
title: AssemblyScript Migrationsguide
---

-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 
🎉 -Det kommer att möjliggöra för undergrafutvecklare att använda nyare funktioner i AS-språket och standardbiblioteket. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. ## Funktioner @@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version ` ## Hur uppgraderar du? -1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`: +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: ```yaml ... @@ -52,7 +52,7 @@ dataSources: ... mapping: ... - apiVersion: 0.0.6 + apiVersion: 0.0.9 ... ``` @@ -91,30 +91,30 @@ maybeValue.aMethod(); Men i den nyare versionen, eftersom värdet är nullable, måste du kontrollera, så här: ```typescript -let maybeValue = load() +let maybeValue = load(); if (maybeValue) { - maybeValue.aMethod() // `maybeValue` is not null anymore + maybeValue.aMethod(); // `maybeValue` is not null anymore } ``` Eller gör så här: ```typescript -let maybeValue = load()! // bryts i runtime om värdet är null +let maybeValue = load()!; // bryts i runtime om värdet är null maybeValue.aMethod() ``` -Om du är osäker på vilken du ska välja, rekommenderar vi alltid att använda den säkra versionen. Om värdet inte finns kanske du bara vill göra ett tidigt villkorligt uttalande med en retur i din undergrafshanterare. +If you are unsure which to choose, we recommend always using the safe version. 
If the value doesn't exist you might want to just do an early if statement with a return in your Subgraph handler.

### Variabelskuggning

Before you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work:

```typescript
-let a = 10
-let b = 20
+let a = 10;
+let b = 20;
let a = a + b
```

@@ -132,7 +132,7 @@ Du måste döpa om dina duplicerade variabler om du hade variabelskuggning.

### Jämförelser med nollvärden

-När du gör uppgraderingen av din subgraf kan du ibland få fel som dessa:
+When upgrading your Subgraph, you might sometimes get errors like these:

```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -171,8 +171,8 @@ Exempel:

```typescript
// primitive casting
-let a: usize = 10
-let b: isize = 5
+let a: usize = 10;
+let b: isize = 5;
let c: usize = a + (b as usize)
```

@@ -229,10 +229,10 @@ If you just want to remove nullability, you can keep using the `as` operator (or

```typescript
// ta bort ogiltighet
-let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null
+let previousBalance = AccountBalance.load(balanceId); // AccountBalance | null
if (previousBalance != null) {
-  return previousBalance as AccountBalance // safe remove null
+  return previousBalance as AccountBalance; // safe remove null
}

let newBalance = new AccountBalance(balanceId)

@@ -252,18 +252,18 @@ Vi har också lagt till några fler statiska metoder i vissa typer för att unde

To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this:

```typescript
-let something: string | null = 'data'
+let something: string | null = "data";

-let somethingOrElse = something ? something : 'else'
+let somethingOrElse = something ? 
something : "else"; // or -let somethingOrElse +let somethingOrElse; if (something) { - somethingOrElse = something + somethingOrElse = something; } else { - somethingOrElse = 'else' + somethingOrElse = "else"; } ``` @@ -274,10 +274,10 @@ class Container { data: string | null } -let container = new Container() -container.data = 'data' +let container = new Container(); +container.data = "data"; -let somethingOrElse: string = container.data ? container.data : 'else' // Kompilerar inte +let somethingOrElse: string = container.data ? container.data : "else"; // Kompilerar inte ``` Vilket ger detta fel: @@ -296,12 +296,12 @@ class Container { data: string | null } -let container = new Container() -container.data = 'data' +let container = new Container(); +container.data = "data"; -let data = container.data +let data = container.data; -let somethingOrElse: string = data ? data : 'else' // kompilerar helt okej :) +let somethingOrElse: string = data ? data : "else"; // kompilerar helt okej :) ``` ### Operatörsöverladdning med egenskapsaccess @@ -310,7 +310,7 @@ Om du försöker summera (till exempel) en nullable typ (från en property acces ```typescript class BigInt extends Uint8Array { - @operator('+') + @operator("+") plus(other: BigInt): BigInt { // ... } @@ -320,26 +320,26 @@ class Wrapper { public constructor(public n: BigInt | null) {} } -let x = BigInt.fromI32(2) -let y: BigInt | null = null +let x = BigInt.fromI32(2); +let y: BigInt | null = null; -x + y // ge kompileringsfel om ogiltighet +x + y; // ge kompileringsfel om ogiltighet -let wrapper = new Wrapper(y) +let wrapper = new Wrapper(y); -wrapper.n = wrapper.n + x // ger inte kompileringsfel som det borde +wrapper.n = wrapper.n + x; // ger inte kompileringsfel som det borde ``` -Vi har öppnat en fråga om AssemblyScript-kompilatorn för detta, men om du gör den här typen av operationer i dina subgraf-mappningar bör du ändra dem så att de gör en null-kontroll innan den. 
+We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand.

```typescript
-let wrapper = new Wrapper(y)
+let wrapper = new Wrapper(y);

if (!wrapper.n) {
-  wrapper.n = BigInt.fromI32(0)
+  wrapper.n = BigInt.fromI32(0);
}

-wrapper.n = wrapper.n + x // nu är `n` garanterat ett BigInt
+wrapper.n = wrapper.n + x; // nu är `n` garanterat ett BigInt
```

### Initialisering av värde

@@ -347,17 +347,17 @@ wrapper.n = wrapper.n + x // nu är `n` garanterat ett BigInt

Om du har någon kod som denna:

```typescript
-var value: Type // null
-value.x = 10
-value.y = 'content'
+var value: Type; // null
+value.x = 10;
+value.y = "content"
```

-Det kommer att kompilera men brytas vid körning, det händer eftersom värdet inte har initialiserats, så se till att din subgraf har initialiserat sina värden, så här:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this:

```typescript
-var value = new Type() // initialized
-value.x = 10
-value.y = 'content'
+var value = new Type(); // initialized
+value.x = 10;
+value.y = "content"
```

Även om du har nullable properties i en GraphQL-entitet, som denna:

@@ -372,10 +372,10 @@ type Total @entity {

Och du har en kod som liknar den här:

```typescript
-let total = Total.load('latest')
+let total = Total.load("latest");

if (total === null) {
-  total = new Total('latest')
+  total = new Total("latest")
}

total.amount = total.amount + BigInt.fromI32(1)
@@ -384,11 +384,11 @@

You'll need to make sure to initialize the `total.amount` value, because if you try to access like in the last line for the sum, it will crash. 
So you either initialize it first: ```typescript -let total = Total.load('latest') +let total = Total.load("latest") if (total === null) { - total = new Total('latest') - total.amount = BigInt.fromI32(0) + total = new Total("latest") + total.amount = BigInt.fromI32(0); } total.tokens = total.tokens + BigInt.fromI32(1) @@ -404,10 +404,10 @@ type Total @entity { ``` ```typescript -let total = Total.load('latest') +let total = Total.load("latest"); if (total === null) { - total = new Total('latest') // initierar redan icke-nullställbara egenskaper + total = new Total("latest"); // initierar redan icke-nullställbara egenskaper } total.amount = total.amount + BigInt.fromI32(1) @@ -435,17 +435,17 @@ export class Something { // or export class Something { - value: Thing + value: Thing; constructor(value: Thing) { - this.value = value + this.value = value; } } // or export class Something { - value!: Thing + value!: Thing; } ``` diff --git a/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx index 25d4c50249e1..647bead3ee4f 100644 --- a/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx +++ b/website/src/pages/sv/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -1,5 +1,5 @@ --- -title: Migrationsguide för GraphQL-validering +title: GraphQL Validations Migration Guide --- Snart kommer `graph-node` att stödja 100 % täckning av [GraphQL Valideringsspecifikationen](https://spec.graphql.org/June2018/#sec-Validation). @@ -20,7 +20,7 @@ För att vara i linje med dessa valideringar, följ migrationsguiden. Du kan använda CLI-migrationsverktyget för att hitta eventuella problem i dina GraphQL-operationer och åtgärda dem. Alternativt kan du uppdatera ändpunkten för din GraphQL-klient att använda ändpunkten `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. 
Att testa dina frågor mot denna ändpunkt kommer att hjälpa dig att hitta problemen i dina frågor.

-> Inte alla subgrafer behöver migreras, om du använder [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) eller [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ser de redan till att dina frågor är giltiga.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.

## Migrations-CLI-verktyg

diff --git a/website/src/pages/sv/resources/roles/curating.mdx b/website/src/pages/sv/resources/roles/curating.mdx
index fa6a279e5b1e..0ae08de7bc3a 100644
--- a/website/src/pages/sv/resources/roles/curating.mdx
+++ b/website/src/pages/sv/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Kuratering

-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. 
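Since the paragraph above notes that signaled GRT is a key input when Indexers choose which Subgraphs to index, that selection can be sketched as a simple ranking. The data shape and helper below are hypothetical, not part of The Graph's actual Indexer tooling:

```typescript
// Hypothetical sketch: order candidate Subgraphs by total signaled GRT,
// the key signal described above. Names and shapes are illustrative only.
interface CandidateSubgraph {
  deploymentId: string;
  signalledGrt: number; // total GRT Curators have signaled on it
}

function rankBySignal(candidates: CandidateSubgraph[]): CandidateSubgraph[] {
  // Highest signal first: more signal suggests more expected query fees.
  return [...candidates].sort((a, b) => b.signalledGrt - a.signalledGrt);
}

const ranked = rankBySignal([
  { deploymentId: "deployment-a", signalledGrt: 1_000 },
  { deploymentId: "deployment-b", signalledGrt: 25_000 },
]);
console.log(ranked.map((c) => c.deploymentId)); // highest-signal deployment first
```

Real Indexers weigh many more factors (fee history, indexing cost, their own capacity), so treat this purely as a mental model of the incentive.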
## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.

-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.

-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## Hur man Signaliserar -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -En kurator kan välja att signalera på en specifik subgrafversion, eller så kan de välja att ha sin signal automatiskt migrerad till den nyaste produktionsversionen av den subgrafen. Båda är giltiga strategier och har sina egna för- och nackdelar. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. 
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Att ha din signal automatiskt migrerad till den nyaste produktionsversionen kan vara värdefullt för att säkerställa att du fortsätter att ackumulera frågeavgifter. Varje gång du signalerar åläggs en kuratoravgift på 1%. Du kommer också att betala en kuratoravgift på 0,5% vid varje migration. Subgrafutvecklare uppmanas att inte publicera nya versioner för ofta - de måste betala en kuratoravgift på 0,5% på alla automatiskt migrerade kuratorandelar. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). 
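The fee arithmetic running through this section — a 1% curation tax when signaling, and a 0.5% tax each time shares auto-migrate to a new version — can be sketched with some back-of-the-envelope helpers. This is an illustrative model only, not the protocol's exact share accounting:

```typescript
// Simplified model of the curation taxes described in the text.
// The taxed GRT is burned; these helpers only track the remaining amount.
const SIGNAL_TAX = 0.01; // 1% when signaling on a Subgraph
const AUTO_MIGRATE_TAX = 0.005; // 0.5% on each auto-migration to a new version

function afterSignalTax(grt: number): number {
  return grt * (1 - SIGNAL_TAX);
}

function afterAutoMigrations(grt: number, migrations: number): number {
  // Each newly published version auto-migrates the shares and taxes them again.
  return grt * Math.pow(1 - AUTO_MIGRATE_TAX, migrations);
}

// Signaling 10,000 GRT with auto-migrate, then riding through 3 version upgrades:
const effective = afterAutoMigrations(afterSignalTax(10_000), 3);
console.log(effective.toFixed(2));
```

Under this model, 10,000 GRT of signal held through three upgrades leaves roughly 9,752 GRT effective; the difference is burned, which is why the text advises developers not to publish new versions too frequently.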
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risker 1. Frågemarknaden är i grunden ung på The Graph och det finns en risk att din %APY kan vara lägre än du förväntar dig på grund av tidiga marknadsmekanik. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. En subgraf kan misslyckas på grund av en bugg. En misslyckad subgraf genererar inte frågeavgifter. Som ett resultat måste du vänta tills utvecklaren rättar felet och distribuerar en ny version. - - Om du prenumererar på den nyaste versionen av en subgraf kommer dina andelar automatiskt att migreras till den nya versionen. Detta kommer att medföra en kuratoravgift på 0,5%. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. 
You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Kurations-FAQ ### 1. Vilken % av frågeavgifterna tjänar Kuratorer? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. Hur bestämmer jag vilka subgrafer av hög kvalitet att signalera på? +### 2. How do I decide which Subgraphs are high quality to signal on? 
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. Vad kostar det att uppdatera en subgraf? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. 
Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. Hur ofta kan jag uppdatera min subgraf? +### 4. How often can I update my Subgraph? -Det föreslås att du inte uppdaterar dina subgrafer för ofta. Se frågan ovan för mer information. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Kan jag sälja mina kuratorandelar? diff --git a/website/src/pages/sv/resources/roles/delegating/undelegating.mdx b/website/src/pages/sv/resources/roles/delegating/undelegating.mdx index 9ea2ef752778..0363867230e5 100644 --- a/website/src/pages/sv/resources/roles/delegating/undelegating.mdx +++ b/website/src/pages/sv/resources/roles/delegating/undelegating.mdx @@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. 2. Click on your profile. You can find it on the top right corner of the page. - - Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead. 3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. 4. Click on the Indexer from which you wish to withdraw your tokens. - - Make sure to note the specific Indexer, as you will need to find them again to withdraw. 5.
Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: @@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the ### Step-by-Step 1. Find your delegation transaction on Arbiscan. - - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) 2. Navigate to "Transaction Action" where you can find the staking extension contract: - - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) 3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) @@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the 11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below: - ![Call `getWithdrawableDelegatedTokens` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) ## Ytterligare resurser diff --git a/website/src/pages/sv/resources/subgraph-studio-faq.mdx b/website/src/pages/sv/resources/subgraph-studio-faq.mdx index f2d35d39c1ee..5787f5c2dfeb 100644 --- a/website/src/pages/sv/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/sv/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Vanliga frågor om Subgraf Studio ## 1. Vad är Subgraf Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. Hur skapar jag en API-nyckel? @@ -18,14 +18,14 @@ Yes! 
You can create multiple API Keys to use in different projects. Check out th När du har skapat en API-nyckel kan du i avsnittet Säkerhet definiera vilka domäner som kan ställa frågor till en specifik API-nyckel. -## 5. Kan jag överföra min subgraf till en annan ägare? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Observera att du inte längre kommer att kunna se eller redigera undergrafen i Studio när den har överförts. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. Hur hittar jag fråge-URL: er för undergrafer om jag inte är utvecklaren av den undergraf jag vill använda? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. 
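As a rough sketch of what querying such a Gateway URL looks like from a client: the URL shape below mirrors the pattern shown in Graph Explorer, but `MY_API_KEY` and `SUBGRAPH_ID` are hypothetical placeholders — copy the real values from Subgraph Studio and the Subgraph Details pane.

```typescript
// Build a POST request for a Gateway query URL. The values passed in are
// illustrative placeholders, not real credentials or deployment IDs.
interface QueryRequest {
  url: string;
  body: string;
}

function buildSubgraphQuery(apiKey: string, subgraphId: string, query: string): QueryRequest {
  return {
    // General Gateway URL pattern; substitute your own key and Subgraph ID.
    url: `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`,
    body: JSON.stringify({ query }),
  };
}

const req = buildSubgraphQuery("MY_API_KEY", "SUBGRAPH_ID", "{ tokens(first: 5) { id } }");
// The request could then be sent with, e.g.:
// fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json" }, body: req.body })
```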
-Kom ihåg att du kan skapa en API-nyckel och ställa frågor till alla undergrafer som publicerats i nätverket, även om du själv har byggt en undergraf. Dessa förfrågningar via den nya API-nyckeln är betalda förfrågningar som alla andra i nätverket. +Remember that you can create an API key and query any Subgraph published to the network, even if you built a Subgraph yourself. These queries via the new API key are paid queries, just like any other on the network. diff --git a/website/src/pages/sv/resources/tokenomics.mdx b/website/src/pages/sv/resources/tokenomics.mdx index 3d6c4666a960..120c43db7ee1 100644 --- a/website/src/pages/sv/resources/tokenomics.mdx +++ b/website/src/pages/sv/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Översikt -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Kuratorer - Hitta de bästa subgrafterna för Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4.
Indexers - Grundvalen för blockkedjedata @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. 
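The delegation example above is simple arithmetic, and can be checked directly. This is illustrative only — actual rewards vary with each Indexer's reward cut and over time; the 10% figure is just the example's rate.

```typescript
// Annual reward for a Delegator at a given effective annual rate.
function annualDelegationReward(delegatedGrt: number, effectiveAnnualRate: number): number {
  return delegatedGrt * effectiveAnnualRate;
}

// 15,000 GRT delegated at an effective 10% rate, as in the example above.
const reward = annualDelegationReward(15_000, 0.10); // ≈ 1,500 GRT per year
```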
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Skapa en subgraf +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. 
-### Fråga en befintlig subgraf +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. 
**Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.
![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/sv/sps/introduction.mdx b/website/src/pages/sv/sps/introduction.mdx index 6c9a0b9ece89..30e643fff68a 100644 --- a/website/src/pages/sv/sps/introduction.mdx +++ b/website/src/pages/sv/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduktion --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Översikt -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). 
In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Ytterligare resurser @@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil - [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) - [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) - [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) +- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar) diff --git a/website/src/pages/sv/sps/sps-faq.mdx b/website/src/pages/sv/sps/sps-faq.mdx index 74ae7af82977..e5313465d87c 100644 --- a/website/src/pages/sv/sps/sps-faq.mdx +++ b/website/src/pages/sv/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## Vad är Substreams-drivna subgrafer? +## What are Substreams-powered Subgraphs? 
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## Hur skiljer sig Substreams-drivna subgrafer från subgrafer? +## How are Substreams-powered Subgraphs different from Subgraphs? Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node.
Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## Vilka fördelar har användning av Substreams-drivna subgrafer? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## Vilka fördelar har Substreams?
@@ -35,7 +35,7 @@ Det finns många fördelar med att använda Substreams, inklusive: - Högpresterande indexering: Ordervärden snabbare indexering genom storskaliga kluster av parallella operationer (tänk BigQuery). -- Utdata var som helst: Du kan sänka dina data var som helst du vill: PostgreSQL, MongoDB, Kafka, subgrafer, platta filer, Google Sheets. +- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmerbarhet: Använd kod för att anpassa extrahering, utföra transformationsbaserade aggregeringar och modellera din utdata för flera sänkar. @@ -63,17 +63,17 @@ Det finns många fördelar med att använda Firehose, inklusive: - Använder platta filer: Blockkedjedata extraheras till platta filer, den billigaste och mest optimerade datorkällan som finns tillgänglig. -## Var kan utvecklare få mer information om Substreams-drivna subgrafer och Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## Vad är rollen för Rust-moduler i Substreams? -Rust-moduler är motsvarigheten till AssemblyScript-mappers i subgrafer. De kompileras till WASM på ett liknande sätt, men programmeringsmodellen tillåter parallell körning. De definierar vilken typ av omvandlingar och aggregeringar du vill tillämpa på råblockkedjedata.
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst Vid användning av Substreams sker sammansättningen på omvandlingsnivån, vilket gör att cachade moduler kan återanvändas. -Som exempel kan Alice bygga en DEX-prismodul, Bob kan använda den för att bygga en volymaggregator för vissa intressanta tokens, och Lisa kan kombinera fyra individuella DEX-prismoduler för att skapa en prisoracle. En enda Substreams-begäran kommer att paketera alla dessa individuella moduler, länka dem samman, för att erbjuda en mycket mer förädlad dataström. Den strömmen kan sedan användas för att fylla i en subgraf och frågas av användare. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules, linking them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## Hur kan man bygga och distribuera en Substreams-drivna subgraf? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Var kan jag hitta exempel på Substreams och Substreams-drivna subgrafer? +## Where can I find examples of Substreams and Substreams-powered Subgraphs?
-Du kan besöka [detta Github-repo](https://github.com/pinax-network/awesome-substreams) för att hitta exempel på Substreams och Substreams-drivna subgrafer. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## Vad innebär Substreams och Substreams-drivna subgrafer för The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? Integrationen lovar många fördelar, inklusive extremt högpresterande indexering och ökad sammansättbarhet genom att dra nytta av gemenskapsmoduler och bygga vidare på dem. diff --git a/website/src/pages/sv/sps/triggers.mdx b/website/src/pages/sv/sps/triggers.mdx index d618f8254691..77b382a28280 100644 --- a/website/src/pages/sv/sps/triggers.mdx +++ b/website/src/pages/sv/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. ## Översikt -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. 
This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Ytterligare resurser diff --git a/website/src/pages/sv/sps/tutorial.mdx b/website/src/pages/sv/sps/tutorial.mdx index 12b0127acb81..7d749958f087 100644 --- a/website/src/pages/sv/sps/tutorial.mdx +++ b/website/src/pages/sv/sps/tutorial.mdx @@ -1,9 +1,9 @@ --- -title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. 
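The decode-then-loop shape of `handleTransactions` above can be sketched in plain TypeScript. This is an illustrative stand-in only, not graph-ts: `decodeTransactions` fakes the Protobuf-generated decoder with a JSON payload, and entities are modeled as plain ids, so the decode → loop → create-entity flow is visible end to end.

```typescript
// Illustrative sketch of the trigger handler pattern (plain TypeScript, not
// graph-ts). The Protobuf-generated decoder is faked with JSON.
interface Transaction {
  hash: string;
}

// Stand-in for Protobuf.decode<Transactions>(bytes): the payload is assumed
// to be UTF-8 JSON here instead of real Protobuf wire format.
function decodeTransactions(bytes: Uint8Array): Transaction[] {
  return JSON.parse(new TextDecoder().decode(bytes)) as Transaction[];
}

// Mirrors handleTransactions: one "entity" (here, just an id) per transaction.
function handleTransactions(bytes: Uint8Array): string[] {
  const ids: string[] = [];
  for (const tx of decodeTransactions(bytes)) {
    ids.push(tx.hash); // in a real handler: new Entity(...).save()
  }
  return ids;
}

const payload = new TextEncoder().encode('[{"hash":"0x01"},{"hash":"0x02"}]');
console.log(handleTransactions(payload)); // [ '0x01', '0x02' ]
```

In a real mapping, the decoder and entity classes are generated for you (`npm run protogen` and `graph codegen`); only the loop body is hand-written.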
## Komma igång @@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -73,7 +73,7 @@ dataSources: moduleName: map_spl_transfers # Module defined in the substreams.yaml file: ./my-project-sol-v0.1.0.spkg mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 kind: substreams/graph-entities file: ./src/mappings.ts handler: handleTriggers @@ -81,7 +81,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. -The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract the non-derived transfers associated with the Orca account id to Subgraph entities: ```ts import { Protobuf } from 'as-proto/assembly' @@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations!
You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/sv/subgraphs/_meta-titles.json b/website/src/pages/sv/subgraphs/_meta-titles.json index 0556abfc236c..79dc0c23f596 100644 --- a/website/src/pages/sv/subgraphs/_meta-titles.json +++ b/website/src/pages/sv/subgraphs/_meta-titles.json @@ -1,6 +1,6 @@ { "querying": "Querying", "developing": "Developing", - "cookbook": "Cookbook", - "best-practices": "Best Practices" + "guides": "How-to Guides", + "best-practices": "Bästa praxis" } diff --git a/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx index e40a7b3712e4..07249c97dd2a 100644 --- a/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/avoid-eth-calls.mdx @@ -1,19 +1,19 @@ --- title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls -sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls' +sidebarTitle: Avoiding eth_calls --- ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. 
## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events.
For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional; however, it is not ideal, as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory, and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
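The benefit of declared `eth_calls` comes down to caching: graph-node resolves a declared call ahead of time and serves the handler from memory. A rough model of that behavior in plain TypeScript — the function names and result format are illustrative, not the graph-node API:

```typescript
// Toy model of graph-node's in-memory cache for declared eth_calls.
// Only a cache miss pays the external RPC round-trip.
const callCache = new Map<string, string>();
let rpcRoundTrips = 0;

function ethCall(contract: string, method: string): string {
  const key = `${contract}.${method}`;
  const cached = callCache.get(key);
  if (cached !== undefined) return cached; // handler hits the cache, no RPC
  rpcRoundTrips++; // cache miss: simulate the expensive external call
  const result = `result-of-${key}`; // pretend RPC response
  callCache.set(key, result);
  return result;
}

ethCall("0xPool", "getPoolInfo"); // declared call: resolved once
ethCall("0xPool", "getPoolInfo"); // handler call: served from memory
console.log(rpcRoundTrips); // 1
```

This is why declared calls are cheap relative to ad-hoc `eth_calls` made inside handlers: the latency is paid once per declaration rather than once per invocation.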
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx index db3a49928c89..093eb29255ab 100644 --- a/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/derivedfrom.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom -sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom' +sidebarTitle: Arrays with @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. 
This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,17 +60,17 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). 
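Conceptually, `@derivedFrom` trades a stored array on the parent for a lookup on the child's field at query time. A small model of that lookup in plain TypeScript — the data and types are hypothetical, used only to show why the `Post` side never has to store a growing array:

```typescript
// Model of a @derivedFrom relationship: each Comment stores its `post` id,
// and a Post's comments are derived by lookup rather than stored as an array.
interface Comment {
  id: string;
  post: string; // the "field" named in @derivedFrom(field: "post")
}

const comments: Comment[] = [
  { id: "c1", post: "p1" },
  { id: "c2", post: "p1" },
  { id: "c3", post: "p2" },
];

// Equivalent of querying Post.comments: filter on the child's foreign key.
function commentsForPost(postId: string): string[] {
  return comments.filter((c) => c.post === postId).map((c) => c.id);
}

console.log(commentsForPost("p1")); // [ 'c1', 'c2' ]
```

Because the relationship lives on the `Comment` side, adding a comment never rewrites the `Post` entity, which is what keeps indexing fast as the list grows.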
diff --git a/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx index 2fea7d3f3239..acc7aa19a5ec 100644 --- a/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/grafting-hotfix.mdx @@ -1,26 +1,26 @@ --- title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment -sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing' +sidebarTitle: Grafting and Hotfixing --- ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Översikt -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. 
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. 
**Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. - **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. 
**Failed Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 5000000 mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing 2. **New Grafted Subgraph Manifest (subgraph.yaml)** ```yaml - specVersion: 1.0.0 + specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing startBlock: 6000001 # Block after the last indexed block mapping: kind: ethereum/events - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. 
**Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. 
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Ytterligare resurser - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 6ff60ec9ab34..3a633244e0f2 100644 --- a/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -1,6 +1,6 @@ --- title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs -sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs' +sidebarTitle: Immutable Entities and Bytes as IDs --- ## TLDR @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. 
If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
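The difference between the string-concatenation ID and a `concatI32()`-style ID can be modeled in plain TypeScript. The `concatI32` below is a hand-rolled stand-in for the graph-ts method, shown only to illustrate the shape of the resulting ID: the string form yields text that must be stored and compared character by character, while the byte form appends a fixed 4-byte integer.

```typescript
// Hand-rolled stand-in for graph-ts's Bytes concatenation, used only to
// illustrate why Bytes IDs are compact and fixed-width.
function toHex(bytes: Uint8Array): string {
  return "0x" + Array.from(bytes).map((b) => b.toString(16).padStart(2, "0")).join("");
}

// Append a 32-bit big-endian integer (e.g. a log index) to a byte array.
function concatI32(base: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(base.length + 4);
  out.set(base, 0);
  new DataView(out.buffer).setInt32(base.length, value, false);
  return out;
}

const txHash = new Uint8Array([0xab, 0xcd]); // truncated hash, illustration only

// String-based ID: readable, but stored and indexed as text.
const stringId = toHex(txHash) + "-" + "7";
// Bytes-based ID: always exactly 4 extra bytes, whatever the log index is.
const bytesId = concatI32(txHash, 7);

console.log(stringId); // 0xabcd-7
console.log(toHex(bytesId)); // 0xabcd00000007
```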
diff --git a/website/src/pages/sv/subgraphs/best-practices/pruning.mdx b/website/src/pages/sv/subgraphs/best-practices/pruning.mdx index 1b51dde8894f..2d4f9ad803e0 100644 --- a/website/src/pages/sv/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/pruning.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning -sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints' +sidebarTitle: Pruning with indexerHints --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block; removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: <number of blocks>`: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section.
`prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml -specVersion: 1.0.0 +specVersion: 1.3.0 schema: file: ./schema.graphql indexerHints: @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx b/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx index 63786a945971..3b416d32b2bd 100644 --- a/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/sv/subgraphs/best-practices/timeseries.mdx @@ -1,11 +1,11 @@ --- title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations -sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations' +sidebarTitle: Timeseries and Aggregations --- ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Översikt @@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri ## How to Implement Timeseries and Aggregations +### Prerequisites + +You need `spec version 1.1.0` for this feature. + ### Defining Timeseries Entities A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. 
Key requirements: @@ -51,7 +55,7 @@ Exempel: type Data @entity(timeseries: true) { id: Int8! timestamp: Timestamp! - price: BigDecimal! + amount: BigDecimal! } ``` @@ -68,11 +72,11 @@ Exempel: type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { id: Int8! timestamp: Timestamp! - sum: BigDecimal! @aggregate(fn: "sum", arg: "price") + sum: BigDecimal! @aggregate(fn: "sum", arg: "amount") } ``` -In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. +In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum. ### Querying Aggregated Data @@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users.
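The hourly `sum` in the Stats example amounts to bucketing each data point's timestamp to the start of its hour and summing `amount` per bucket. A minimal sketch of that computation in plain TypeScript — illustrative only, not what graph-node actually executes:

```typescript
// Minimal model of an hourly "sum" aggregation over timeseries points,
// mirroring Stats with @aggregate(fn: "sum", arg: "amount").
interface Point {
  timestamp: number; // seconds since epoch
  amount: number;
}

function hourlySums(points: Point[]): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const p of points) {
    const hour = Math.floor(p.timestamp / 3600) * 3600; // bucket start time
    buckets.set(hour, (buckets.get(hour) ?? 0) + p.amount);
  }
  return buckets;
}

const sums = hourlySums([
  { timestamp: 0, amount: 1.5 },
  { timestamp: 1800, amount: 2.5 }, // same hour as the first point
  { timestamp: 3600, amount: 4.0 }, // next hour
]);
console.log(sums.get(0), sums.get(3600)); // 4 4
```

With the declarative feature, graph-node maintains these buckets for you at each interval boundary, which is why no aggregation code is needed in the mappings.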
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/sv/subgraphs/billing.mdx b/website/src/pages/sv/subgraphs/billing.mdx index d864c1d3d6fb..614d84dd04f3 100644 --- a/website/src/pages/sv/subgraphs/billing.mdx +++ b/website/src/pages/sv/subgraphs/billing.mdx @@ -4,12 +4,14 @@ title: Fakturering ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. - **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +Learn more about pricing [here](https://thegraph.com/studio-pricing/). + ## Query Payments with credit card diff --git a/website/src/pages/sv/subgraphs/cookbook/arweave.mdx b/website/src/pages/sv/subgraphs/cookbook/arweave.mdx index 8a78a4ffa184..4a5591b45c72 100644 --- a/website/src/pages/sv/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Bygga subgrafer på Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! 
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! I den här guiden kommer du att lära dig hur du bygger och distribuerar subgrafer för att indexera Weaver-blockkedjan. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are För att kunna bygga och distribuera Arweave Subgraphs behöver du två paket: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraphs komponenter -Det finns tre komponenter i en subgraf: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,25 +40,25 @@ Definierar datakällorna av intresse och hur de ska behandlas. Arweave är en ny Här definierar du vilken data du vill kunna fråga efter att du har indexerat din subgrafer med GraphQL. Detta liknar faktiskt en modell för ett API, där modellen definierar strukturen för en begäran. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. AssemblyScript Mappings - `mapping.ts` Detta är logiken som avgör hur data ska hämtas och lagras när någon interagerar med datakällorna du lyssnar på. Data översätts och lagras utifrån det schema du har listat. -Under subgrafutveckling finns det två nyckelkommandon: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Definition av subgraf manifestet -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml -specVersion: 0.0.5 +specVersion: 1.3.0 description: Arweave Blocks Indexing schema: file: ./schema.graphql # link to the schema file @@ -70,7 +70,7 @@ dataSources: owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet startBlock: 0 # set this to 0 to start indexing from chain genesis mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/blocks.ts # link to the file with the Assemblyscript mappings entities: @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave datakällor introducerar ett valfritt source.owner fält, som är den publika nyckeln till en Arweave plånbok @@ -99,7 +99,7 @@ Arweave datakällor stöder två typer av hanterare: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript mappningar @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. 
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Fråga efter en Arweave-subgraf -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Exempel på subgrafer -Här är ett exempel på subgraf som referens: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Kan en subgraf indexera Arweave och andra kedjor? +### Can a Subgraph index Arweave and other chains? -Nej, en subgraf kan bara stödja datakällor från en kedja/nätverk. +No, a Subgraph can only support data sources from one chain/network. ### Kan jag indexera de lagrade filerna på Arweave? För närvarande indexerar The Graph bara Arweave som en blockkedja (dess block och transaktioner). -### Kan jag identifiera Bundlr buntar i min subgraf? +### Can I identify Bundlr bundles in my Subgraph? Detta stöds inte för närvarande. @@ -188,7 +188,7 @@ Source.owner kan vara användarens publika nyckel eller kontoadress. ### Vad är det aktuella krypteringsformatet? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). 
You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/sv/subgraphs/cookbook/enums.mdx b/website/src/pages/sv/subgraphs/cookbook/enums.mdx index 2fc0efcc5831..3b90caab564e 100644 --- a/website/src/pages/sv/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. 
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. -To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/sv/subgraphs/cookbook/grafting.mdx b/website/src/pages/sv/subgraphs/cookbook/grafting.mdx index e43fd73014c3..d88057cdac80 100644 --- a/website/src/pages/sv/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Byt ut ett kontrakt och behåll dess historia med ympning --- -I den här guiden kommer du att lära dig hur du bygger och distribuerar nya subgrafer genom att ympa befintliga subgrafer. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## Vad är ympning? -Ympning återanvänder data från en befintlig subgraf och börjar indexera den vid ett senare block. Detta är användbart under utveckling för att snabbt komma förbi enkla fel i mappningarna eller för att tillfälligt få en befintlig subgraf att fungera igen efter att den har misslyckats. Det kan också användas när du lägger till en funktion till en subgraf som tar lång tid att indexera från början. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. 
This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch. -Den ympade subgrafen kan använda ett GraphQL-schema som inte är identiskt med det i bas subgrafen, utan bara är kompatibelt med det. Det måste vara ett giltigt subgraf schema i sig, men kan avvika från bas undergrafens schema på följande sätt: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Den lägger till eller tar bort entitetstyper - Det tar bort attribut från entitetstyper @@ -22,38 +22,38 @@ För mer information kan du kontrollera: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Viktig anmärkning om ympning vid uppgradering till nätverket -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Varför är detta viktigt? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version.
It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Bästa praxis -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. Genom att följa dessa riktlinjer minimerar du riskerna och säkerställer en smidigare migreringsprocess. ## Bygga en befintlig subgraf -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). 
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Definition av subgraf manifestet -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 schema: file: ./schema.graphql dataSources: @@ -66,7 +66,7 @@ dataSources: startBlock: 5955690 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Withdrawal @@ -85,27 +85,27 @@ dataSources: ## Ympnings manifest Definition -Ympning kräver att två nya objekt läggs till i det ursprungliga subgraf manifestet: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. 
The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Distribuera Bas Subgraf -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. När du är klar kontrollerar du att subgrafen indexerar korrekt. Om du kör följande kommando i The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ Den returnerar ungefär så här: } ``` -När du har verifierat att subgrafen indexerar korrekt kan du snabbt uppdatera subgrafen med ympning. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Utplacering av ympning subgraf Transplantatersättningen subgraph.yaml kommer att ha en ny kontraktsadress. Detta kan hända när du uppdaterar din dapp, omdisponerar ett kontrakt, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. 
These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. När du är klar kontrollerar du att subgrafen indexerar korrekt. Om du kör följande kommando i The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ Det bör returnera följande: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452).
The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Ytterligare resurser diff --git a/website/src/pages/sv/subgraphs/cookbook/near.mdx b/website/src/pages/sv/subgraphs/cookbook/near.mdx index 833a4b7c997d..d766a44ad511 100644 --- a/website/src/pages/sv/subgraphs/cookbook/near.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Bygger subgrafer på NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## Vad är NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications.
Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## Vad är NEAR subgrafer? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Blockhanterare: dessa körs på varje nytt block - Kvittohanterare: körs varje gång ett meddelande körs på ett angivet konto @@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Att bygga en NEAR Subgraf -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. 
-> Att bygga en NEAR subgraf är mycket lik att bygga en subgraf som indexerar Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -Det finns tre aspekter av subgraf definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
-Under subgrafutveckling finns det två nyckelkommandon: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Definition av subgraf manifestet -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml -specVersion: 0.0.2 +specVersion: 1.3.0 schema: file: ./src/schema.graphql # link to the schema file dataSources: @@ -61,7 +61,7 @@ dataSources: account: app.good-morning.near # This data source will monitor this account startBlock: 10662188 # Required for NEAR mapping: - apiVersion: 0.0.5 + apiVersion: 0.0.9 language: wasm/assemblyscript blockHandlers: - handler: handleNewBlock # the function name in the mapping file @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. 
On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR datakällor stöder två typer av hanterare: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript mappningar @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. 
This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Utplacera en NEAR Subgraf -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). -As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". 
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command:
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:

```sh
-$ graph create --node <graph-node-url> <subgraph-name> # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```

-Nodkonfigurationen beror på var subgrafen distribueras.
+The node configuration will depend on where the Subgraph is being deployed.

### Subgraf Studion

@@ -204,7 +204,7 @@ graph deploy

graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```

-När din subgraf har distribuerats kommer den att indexeras av Graph Node. Du kan kontrollera dess framsteg genom att fråga själva subgrafen:
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:

```graphql
{
@@ -228,11 +228,11 @@ Vi kommer snart att ge mer information om hur du kör ovanstående komponenter.

## Fråga efter en NEAR subgraf

-The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
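As a sketch of what querying a NEAR Subgraph can look like (the `receipts` entity and its fields here are hypothetical and depend entirely on your own schema):

```graphql
{
  receipts(first: 5) {
    id
    signerId
    receiverId
  }
}
```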
## Exempel på subgrafer -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### Hur fungerar betan? -NEAR stödet är i beta, vilket innebär att det kan bli ändringar i API:t när vi fortsätter att arbeta med att förbättra integrationen. Skicka ett e-postmeddelande till near@thegraph.com så att vi kan hjälpa dig att bygga NEAR subgrafer och hålla dig uppdaterad om den senaste utvecklingen! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Kan en subgraf indexera både NEAR och EVM kedjor? +### Can a Subgraph index both NEAR and EVM chains? -Nej, en subgraf kan bara stödja datakällor från en kedja/nätverk. +No, a Subgraph can only support data sources from one chain/network. -### Kan subgrafer reagera på mer specifika triggers? +### Can Subgraphs react to more specific triggers? För närvarande stöds endast blockerings- och kvittoutlösare. Vi undersöker utlösare för funktionsanrop till ett specificerat konto. Vi är också intresserade av att stödja eventutlösare, när NEAR har inbyggt eventsupport. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Kan NEAR subgrafer göra visningsanrop till NEAR konton under mappningar? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? Detta stöds inte. Vi utvärderar om denna funktionalitet krävs för indexering. -### Kan jag använda data källmallar i min NEAR subgraf? +### Can I use data source templates in my NEAR Subgraph? Detta stöds inte för närvarande. Vi utvärderar om denna funktionalitet krävs för indexering. 
-### Ethereum subgrafer stöder "väntande" och "nuvarande" versioner, hur kan jag distribuera en "väntande" version av en NEAR subgraf?
+### Ethereum Subgraphs support "pending" and "current" versions. How can I deploy a "pending" version of a NEAR Subgraph?

-Väntande funktionalitet stöds ännu inte för NEAR subgrafer. Under tiden kan du distribuera en ny version till en annan "namngiven" subgraf, och när den sedan synkroniseras med kedjehuvudet kan du distribuera om till din primära "namngivna" subgraf, som kommer att använda samma underliggande implementerings-ID, så huvudsubgrafen synkroniseras omedelbart.
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.

-### Min fråga har inte besvarats, var kan jag få mer hjälp med att bygga NEAR subgrafer?
+### My question hasn't been answered. Where can I get more help building NEAR Subgraphs?

-If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise, please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
## Referenser diff --git a/website/src/pages/sv/subgraphs/cookbook/polymarket.mdx b/website/src/pages/sv/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/sv/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. ## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. 
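For instance, a query sketch you might compose in the playground (treat the entity and field names as assumptions; confirm them against the schema linked in the next section):

```graphql
{
  markets(first: 5) {
    id
    question
    outcomes
  }
}
```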
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on

## Polymarket's GraphQL Schema

-The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).

### Polymarket Subgraph Endpoint

@@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra

1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
2. Go to https://thegraph.com/studio/apikeys/ to create an API key

-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.

100k queries per month are free, which is perfect for your side project!

@@ -143,6 +143,6 @@ axios(graphQLRequest)

### Additional resources

-For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).

-To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
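Putting the pieces together, here is a sketch of the request object such a script builds before handing it to `axios`. The gateway URL format and the Polymarket Subgraph ID are taken from the Graph Explorer link above; `{api-key}` is a placeholder for the key created in Subgraph Studio:

```javascript
// Sketch: build a gateway request for the Polymarket Subgraph.
// Replace {api-key} with your own key from Subgraph Studio.
const SUBGRAPH_ID = 'Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'

const graphQLRequest = {
  method: 'post',
  url: `https://gateway.thegraph.com/api/{api-key}/subgraphs/id/${SUBGRAPH_ID}`,
  data: {
    // Any query that is valid against the schema above works here
    query: '{ _meta { block { number } } }',
  },
}

// axios(graphQLRequest).then((response) => console.log(response.data))
console.log(graphQLRequest.url)
```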
diff --git a/website/src/pages/sv/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/sv/subgraphs/cookbook/secure-api-keys-nextjs.mdx index a9e82a6baa72..f90b30ccdd8c 100644 --- a/website/src/pages/sv/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Översikt -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
-### Using client-side rendering to query a subgraph
+### Using client-side rendering to query a Subgraph

![Client-side rendering](/img/api-key-client-side-rendering.png)

diff --git a/website/src/pages/sv/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/sv/subgraphs/cookbook/subgraph-composition-three-sources.mdx
new file mode 100644
index 000000000000..4c3ddf7baedc
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/cookbook/subgraph-composition-three-sources.mdx
@@ -0,0 +1,98 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - This feature requires `specVersion` 1.3.0.
+
+## Översikt
+
+Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates.
+
+## Prerequisites
+
+To deploy **all** Subgraphs locally, you must have the following:
+
+- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally
+- An [IPFS](https://docs.ipfs.tech/) instance running locally
+- [Node.js](https://nodejs.org) and npm
+
+## Komma igång
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. + +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. 
Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Ytterligare resurser + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). +- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). 
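Putting the four steps above together, the composed Subgraph's manifest can be sketched as follows. The structure mirrors a regular manifest, but each data source is of kind `subgraph`; the names, network, and deployment IDs below are placeholders for your own values:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  # One entry per source Subgraph; each address is that source's deployment ID
  - kind: subgraph
    name: BlockTime
    network: mainnet
    source:
      address: 'QmPlaceholderBlockTimeDeploymentId'
      startBlock: 0
  - kind: subgraph
    name: BlockCost
    network: mainnet
    source:
      address: 'QmPlaceholderBlockCostDeploymentId'
      startBlock: 0
  - kind: subgraph
    name: BlockSize
    network: mainnet
    source:
      address: 'QmPlaceholderBlockSizeDeploymentId'
      startBlock: 0
```

As noted above, redeploying a source Subgraph changes its deployment ID, so these addresses must be updated accordingly.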
diff --git a/website/src/pages/sv/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/sv/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..cb8901f3f76d
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Introduktion
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+ +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Komma igång + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph
+
+To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: subgraph
+    name: Factory
+    network: base
+    source:
+      address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz'
+      startBlock: 82522
+```
+
+Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin.
+
+### Step 2. Define Handlers in Dependent Subgraph
+
+Below is an example of defining handlers in the dependent Subgraph:
+
+```typescript
+export function handleInitialize(trigger: EntityTrigger<Initialize>): void {
+  if (trigger.operation === EntityOp.Create) {
+    let entity = trigger.data
+    let poolAddressParam = Address.fromBytes(entity.poolAddress)
+
+    // Update pool sqrt price and tick
+    let pool = Pool.load(poolAddressParam.toHexString()) as Pool
+    pool.sqrtPrice = entity.sqrtPriceX96
+    pool.tick = BigInt.fromI32(entity.tick)
+    pool.save()
+
+    // Update token prices
+    let token0 = Token.load(pool.token0) as Token
+    let token1 = Token.load(pool.token1) as Token
+
+    // Update ETH price in USD
+    let bundle = Bundle.load('1') as Bundle
+    bundle.ethPriceUSD = getEthPriceInUSD()
+    bundle.save()
+
+    updatePoolDayData(entity)
+    updatePoolHourData(entity)
+
+    // Update derived ETH price for tokens
+    token0.derivedETH = findEthPerToken(token0)
+    token1.derivedETH = findEthPerToken(token1)
+    token0.save()
+    token1.save()
+  }
+}
+```
+
+In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger<Initialize>`. The handler updates the pool and token entities based on data from the new `Initialize` entity.
+
+`EntityTrigger` has three fields:
+
+1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`.
+2. `type`: Indicates the entity type.
+3.
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Ytterligare resurser + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/sv/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/sv/subgraphs/cookbook/subgraph-debug-forking.mdx index aee8ecf8791f..75bff8ee89a8 100644 --- a/website/src/pages/sv/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Snabb och enkel subgraf felsökning med gafflar --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! 
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!

## Ok, vad är det?

-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).

-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.

## Vad?! Hur?

-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Snälla, visa mig lite kod! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. Det vanliga sättet att försöka fixa är: 1. Gör en förändring i mappningskällan, som du tror kommer att lösa problemet (även om jag vet att det inte kommer att göra det). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Vänta tills det synkroniseras. 4. 
Om den går sönder igen gå tillbaka till 1, annars: Hurra!

It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync up._

-Using **subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:

0. Spin up a local Graph Node with the **_appropriate fork-base_** set.
1. Gör en ändring i mappningskällan som du tror kommer att lösa problemet.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. Om den går sönder igen, gå tillbaka till 1, annars: Hurra!

Nu kanske du har 2 frågor:

@@ -69,18 +69,18 @@ Nu kanske du har 2 frågor:

Och jag svarar:

-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
2. Gaffling är lätt, du behöver inte svettas:

```bash
$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
```

-Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!

Så här är vad jag gör:

-1.
I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! 
(no potatoes tho)

diff --git a/website/src/pages/sv/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/sv/subgraphs/cookbook/subgraph-uncrashable.mdx
index ce8e87ecfd46..9b0652bf1a85 100644
--- a/website/src/pages/sv/subgraphs/cookbook/subgraph-uncrashable.mdx
+++ b/website/src/pages/sv/subgraphs/cookbook/subgraph-uncrashable.mdx
@@ -2,23 +2,23 @@
title: Säker subgraf kodgenerator
---

-[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent.
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.

## Varför integrera med Subgraf Uncrashable?

-- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity.
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.

-- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities.
Ensure all interactions with entities are completely atomic.

-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

**Key Features**

-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.

- Ramverket innehåller också ett sätt (via konfigurationsfilen) att skapa anpassade, men säkra, sätterfunktioner för grupper av entitetsvariabler. På så sätt är det omöjligt för användaren att ladda/använda en inaktuell grafenhet och det är också omöjligt att glömma att spara eller ställa in en variabel som krävs av funktionen.

-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.

Subgraph Uncrashable kan köras som en valfri flagga med kommandot Graph CLI codegen.
@@ -26,4 +26,4 @@ Subgraph Uncrashable kan köras som en valfri flagga med kommandot Graph CLI cod graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. +Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/sv/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/sv/subgraphs/cookbook/transfer-to-the-graph.mdx index f06ed1722258..de3e762e2d40 100644 --- a/website/src/pages/sv/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/sv/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -1,14 +1,14 @@ --- -title: Tranfer to The Graph +title: Transfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. 
## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. 
Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. -You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. #### Exempel -[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: ![Query URL](/img/cryptopunks-screenshot-transfer.png) -The query URL for this subgraph is: +The query URL for this Subgraph is: ```sh https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK @@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the ### Monitor Subgraph Status -Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/). +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). 
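To sanity-check a Subgraph after upgrading, a GraphQL query can be POSTed directly to the query URL shown above. The command below is an illustrative sketch, not part of the original guide — `<your-api-key>` and `<deployment-id>` are placeholders to replace with your own values, and `_meta` is The Graph's built-in metadata field for checking indexing progress and errors:

```sh
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{ "query": "{ _meta { hasIndexingErrors block { number } } }" }' \
  https://gateway-arbitrum.network.thegraph.com/api/<your-api-key>/subgraphs/id/<deployment-id>
```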
### Ytterligare resurser

-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx b/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx
index 7ed946aee07e..e8476a0d9bdf 100644
--- a/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/sv/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features

## Översikt

-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - fullTextSearch @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. 
Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified ## Icke dödliga fel -Indexeringsfel på redan synkroniserade delgrafer kommer, som standard, att få delgrafen att misslyckas och sluta synkronisera. Delgrafer kan istället konfigureras för att fortsätta synkroniseringen i närvaro av fel, genom att ignorera ändringarna som orsakades av hanteraren som provocerade felet. Det ger delgrafsförfattare tid att korrigera sina delgrafer medan förfrågningar fortsätter att behandlas mot det senaste blocket, även om resultaten kan vara inkonsekventa på grund av felet som orsakade felet. Observera att vissa fel alltid är dödliga. För att vara icke-dödliga måste felet vara känt för att vara deterministiskt. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. 
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Aktivering av icke-dödliga fel kräver att följande funktionsflagga sätts i delgrafens manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum features: - nonFatalErrors ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -Filbaserade datakällor är en ny delgrafsfunktion för att få tillgång till data utanför kedjan under indexering på ett robust, utökat sätt. Filbaserade datakällor stödjer hämtning av filer från IPFS och från Arweave. 
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > Detta lägger också grunden för deterministisk indexering av data utanför kedjan, samt möjligheten att introducera godtycklig data som hämtas via HTTP. @@ -221,7 +221,7 @@ templates: - name: TokenMetadata kind: file/ipfs mapping: - apiVersion: 0.0.7 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mapping.ts handler: handleMetadata @@ -290,7 +290,7 @@ Exempel: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//Denna exempelkod är för en undergraf för kryptosamverkan. Ovanstående ipfs-hash är en katalog med tokenmetadata för alla kryptosamverkande NFT:er. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -300,7 +300,7 @@ export function handleTransfer(event: TransferEvent): void { token.tokenURI = '/' + event.params.tokenId.toString() + '.json' const tokenIpfsHash = ipfshash + token.tokenURI - //Detta skapar en sökväg till metadata för en enskild Crypto coven NFT. Den konkaterar katalogen med "/" + filnamn + ".json" + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" token.ipfsURI = tokenIpfsHash @@ -317,23 +317,23 @@ Detta kommer att skapa en ny filbaserad datakälla som kommer att övervaka Grap This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Grattis, du använder filbaserade datakällor! -#### Distribuera dina delgrafer +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Begränsningar -Filbaserade datakällahanterare och entiteter är isolerade från andra delgrafentiteter, vilket säkerställer att de är deterministiska när de körs och att ingen förorening av kedjebaserade datakällor sker. För att vara specifik: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entiteter skapade av Filbaserade datakällor är oföränderliga och kan inte uppdateras - Filbaserade datakällahanterare kan inte komma åt entiteter från andra filbaserade datakällor - Entiteter associerade med filbaserade datakällor kan inte nås av kedjebaserade hanterare -> Även om denna begränsning inte bör vara problematisk för de flesta användningsfall kan den införa komplexitet för vissa. Var god kontakta oss via Discord om du har problem med att modellera din data baserad på fil i en delgraf! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Dessutom är det inte möjligt att skapa datakällor från en filbaserad datakälla, vare sig det är en datakälla på kedjan eller en annan filbaserad datakälla. Denna begränsning kan komma att hävas i framtiden. 
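The `build` and `deploy` steps mentioned above can be sketched as follows. This is a minimal example under assumptions: `my-subgraph` is a placeholder Subgraph name, and the `--node`/`--ipfs` endpoints are those of a locally run Graph Node, as used earlier in these guides:

```sh
# Compile the mappings and validate the manifest
graph build

# Deploy to a local Graph Node (>=v0.30.0-rc.0 for file data sources)
graph deploy my-subgraph \
  --node http://localhost:8020 \
  --ipfs http://localhost:5001
```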
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. 
@@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. 
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. 
Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.

#### Example Configuration in Subgraph Manifest

Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.

-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:

```yaml
eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:

- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.

-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`:

```yaml
calls:
@@ -535,22 +535,22 @@ calls:

> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).

-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_.
Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Eftersom ympning kopierar data istället för att indexera basdata går det mycket snabbare att få delgrafen till det önskade blocket än att indexera från början, även om den initiala datorkopieringen fortfarande kan ta flera timmar för mycket stora delgrafer. Medan den ympade delgrafen initialiseras kommer Graph Node att logga information om de entitetstyper som redan har kopierats. 
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -Den ympade subgrafen kan använda ett GraphQL-schema som inte är identiskt med det i bas subgrafen, utan bara är kompatibelt med det. Det måste vara ett giltigt subgraf schema i sig, men kan avvika från bas undergrafens schema på följande sätt: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - Den lägger till eller tar bort entitetstyper - Det tar bort attribut från entitetstyper @@ -560,4 +560,4 @@ Den ympade subgrafen kan använda ett GraphQL-schema som inte är identiskt med - Den lägger till eller tar bort gränssnitt - Det ändrar för vilka entitetstyper ett gränssnitt implementeras -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. 
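Putting the grafting pieces together, a manifest sketch that both declares the `grafting` feature and supplies the `graft` block might look like the following — the `Qm...` base ID and block number are the same placeholders used in the snippets above:

```yaml
specVersion: 1.3.0
description: ...
features:
  - grafting
graft:
  base: Qm... # Subgraph ID of base Subgraph
  block: 7345624 # Block number to graft from
dataSources: ...
```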
diff --git a/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx index 259ae147af9f..3ed678be2a9a 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from "../generated/Gravity/Gravity"; @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Kodgenerering -För att göra det enkelt och typsäkert att arbeta med smarta kontrakt, händelser och entiteter kan Graph CLI generera AssemblyScript-typer från subgrafens GraphQL-schema och kontrakts-ABIn som ingår i datakällorna. +In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. 
Detta görs med @@ -80,7 +80,7 @@ Detta görs med graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -99,15 +99,15 @@ import { // The events classes: NewGravatar, UpdatedGravatar, -} from '../generated/Gravity/Gravity' +} from "../generated/Gravity/Gravity"; ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. 
These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with ```javascript -import { Gravatar } from '../generated/schema' +import { Gravatar } from "../generated/schema" ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
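The generated entity classes all follow the same load-or-create contract: `load()` returns `null` for an unknown ID, and `save()` upserts by ID. A minimal plain-TypeScript sketch of that contract, using an in-memory `Map` as a stand-in for the Graph Node store (the `Gravatar` class and `handleUpdatedGravatar` handler here are illustrative, not actual `graph codegen` output):

```typescript
// Illustrative in-memory stand-in for the Graph Node store.
const store = new Map<string, { id: string; displayName: string }>();

class Gravatar {
  displayName = "";
  constructor(public id: string) {}

  // Mirrors the generated static load(): returns null when the ID is unknown.
  static load(id: string): Gravatar | null {
    const row = store.get(id);
    if (!row) return null;
    const g = new Gravatar(row.id);
    g.displayName = row.displayName;
    return g;
  }

  // Mirrors the generated save(): upserts the entity by ID.
  save(): void {
    store.set(this.id, { id: this.id, displayName: this.displayName });
  }
}

function handleUpdatedGravatar(id: string, displayName: string): void {
  let gravatar = Gravatar.load(id);
  if (gravatar == null) {
    gravatar = new Gravatar(id); // first event for this ID: create it
  }
  gravatar.displayName = displayName;
  gravatar.save();
}
```

Calling the handler twice with the same ID leaves a single entity holding the latest value, which is exactly the upsert behavior the mappings rely on.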
diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md index 5d90888ac378..5f964d3cbb78 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/CHANGELOG.md @@ -1,5 +1,11 @@ # @graphprotocol/graph-ts +## 0.38.0 + +### Minor Changes + +- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings + ## 0.37.0 ### Minor Changes diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx index dd9fb343dd68..d001175da07e 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: API för AssemblyScript --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. 
There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versioner -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. -| Version | Versionsanteckningar | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br/>Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br/>Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br/>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object<br/>`ethereum.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Versionsanteckningar | +| :-----: | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br/>Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br/>Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))<br/>`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object<br/>`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Inbyggda typer @@ -163,7 +163,7 @@ _Math_ #### TypedMap ```typescript -import { TypedMap } from '@graphprotocol/graph-ts' +import { TypedMap } from "@graphprotocol/graph-ts"; ``` `TypedMap` can be used to store key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). The `TypedMap` class has the following API: @@ -179,7 +179,7 @@ The `TypedMap` class has the following API: #### Bytes ```typescript -import { Bytes } from '@graphprotocol/graph-ts' +import { Bytes } from "@graphprotocol/graph-ts"; ``` `Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32`, etc. @@ -205,7 +205,7 @@ _Operators_ #### Address ```typescript -import { Address } from '@graphprotocol/graph-ts' +import { Address } from "@graphprotocol/graph-ts"; ``` `Address` extends `Bytes` to represent Ethereum `address` values. @@ -218,12 +218,12 @@ It adds the following method on top of the `Bytes` API: ### Store API ```typescript -import { store } from '@graphprotocol/graph-ts' +import { store } from "@graphprotocol/graph-ts"; ``` The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema.
To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Skapa entiteter @@ -231,24 +231,24 @@ Följande är ett vanligt mönster för att skapa entiteter från Ethereum-händ ```typescript // Importera händelseklassen Transfer som genererats från ERC20 ABI -import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' +import { Transfer as TransferEvent } from "../generated/ERC20/ERC20"; // Importera entitetstypen Transfer som genererats från GraphQL-schemat -import { Transfer } from '../generated/schema' +import { Transfer } from "../generated/schema"; // Händelsehanterare för överföring export function handleTransfer(event: TransferEvent): void { // Skapa en Transfer-entitet, med transaktionshash som enhets-ID - let id = event.transaction.hash - let transfer = new Transfer(id) + let id = event.transaction.hash; + let transfer = new Transfer(id); // Ange egenskaper för entiteten med hjälp av händelseparametrarna - transfer.from = event.params.from - transfer.to = event.params.to - transfer.amount = event.params.amount + transfer.from = event.params.from; + transfer.to = event.params.to; + transfer.amount = event.params.amount; // Spara entiteten till lagret - transfer.save() + transfer.save(); } ``` @@ -263,10 +263,10 @@ Each entity must have a unique ID to avoid collisions with other entities. 
It is Om en entitet redan finns kan den laddas från lagret med följande: ```typescript -let id = event.transaction.hash // eller hur ID konstrueras -let transfer = Transfer.load(id) +let id = event.transaction.hash; // eller hur ID konstrueras +let transfer = Transfer.load(id); if (transfer == null) { - transfer = new Transfer(id) + transfer = new Transfer(id); } // Använd överföringsenheten som tidigare @@ -282,14 +282,14 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript -let id = event.transaction.hash // eller hur ID konstrueras -let transfer = Transfer.loadInBlock(id) +let id = event.transaction.hash; // eller hur ID konstrueras +let transfer = Transfer.loadInBlock(id); if (transfer == null) { - transfer = new Transfer(id) + transfer = new Transfer(id); } // Använd överföringsenheten som tidigare @@ -343,7 +343,7 @@ transfer.amount = ... 
Det är också möjligt att avaktivera egenskaper med en av följande två instruktioner: ```typescript -transfer.from.unset() +transfer.from.unset(); transfer.from = null ``` @@ -353,14 +353,14 @@ Updating array properties is a little more involved, as the getting an array fro ```typescript // Detta kommer inte att fungera -entity.numbers.push(BigInt.fromI32(1)) -entity.save() +entity.numbers.push(BigInt.fromI32(1)); +entity.save(); // Detta kommer att fungera -let numbers = entity.numbers -numbers.push(BigInt.fromI32(1)) -entity.numbers = numbers -entity.save() +let numbers = entity.numbers; +numbers.push(BigInt.fromI32(1)); +entity.numbers = numbers; +entity.save(); ``` #### Ta bort entiteter från lagret @@ -380,11 +380,11 @@ Ethereum API ger tillgång till smarta kontrakt, offentliga tillståndsvariabler #### Stöd för Ethereum-typer -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -Följande exempel illustrerar detta. Med en subgraph-schema som +The following example illustrates this. 
Given a Subgraph schema like ```graphql type Transfer @entity { @@ -398,12 +398,12 @@ type Transfer @entity { and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: ```typescript -let id = event.transaction.hash -let transfer = new Transfer(id) -transfer.from = event.params.from -transfer.to = event.params.to -transfer.amount = event.params.amount -transfer.save() +let id = event.transaction.hash; +let transfer = new Transfer(id); +transfer.from = event.params.from; +transfer.to = event.params.to; +transfer.amount = event.params.amount; +transfer.save(); ``` #### Händelser och Block/Transaktionsdata @@ -483,22 +483,25 @@ class Log { #### Åtkomst till Smart Contract-tillstånd -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. En vanlig mönster är att komma åt kontraktet från vilket en händelse härstammar. 
Detta uppnås med följande kod: ```typescript // Importera den genererade kontraktsklassen och den genererade klassen för överföringshändelser -import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract' +import { + ERC20Contract, + Transfer as TransferEvent, +} from "../generated/ERC20Contract/ERC20Contract"; // Importera den genererade entitetsklassen -import { Transfer } from '../generated/schema' +import { Transfer } from "../generated/schema"; export function handleTransfer(event: TransferEvent) { // Bind kontraktet till den adress som skickade händelsen - let contract = ERC20Contract.bind(event.address) + let contract = ERC20Contract.bind(event.address); // Åtkomst till tillståndsvariabler och funktioner genom att anropa dem - let erc20Symbol = contract.symbol() + let erc20Symbol = contract.symbol(); } ``` @@ -506,7 +509,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Andra kontrakt som är en del av subgraphen kan importeras från den genererade koden och bindas till en giltig adress. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. #### Hantering av återkallade anrop @@ -515,12 +518,12 @@ If the read-only methods of your contract may revert, then you should handle tha - For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: ```typescript -let gravitera = gravitera.bind(event.address) -let callResult = gravitera_gravatarToOwner(gravatar) +let gravity = Gravity.bind(event.address); +let callResult = gravity.try_gravatarToOwner(gravatar); if (callResult.reverted) { - log.info('getGravatar reverted', []) + log.info("getGravatar reverted", []); } else { - let owner = callResult.value + let owner = callResult.value; } ``` @@ -579,10 +582,10 @@ let isContract = ethereum.hasCode(eoa).inner // returns false ### API för loggning ```typescript -import { log } from '@graphprotocol/graph-ts' +import { log } from "@graphprotocol/graph-ts"; ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments. The `log` API includes the following functions: @@ -590,12 +593,16 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
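That positional substitution can be sketched in plain TypeScript (an illustration of the documented `{}` semantics, not the actual graph-ts implementation; leaving surplus placeholders untouched is an assumption of this sketch):

```typescript
// Sketch of positional `{}` substitution: the i-th `{}` in the format
// string is replaced by args[i]. Placeholders beyond the end of the
// args array are left as-is (assumption of this sketch).
function formatLog(fmt: string, args: string[]): string {
  let i = 0;
  return fmt.replace(/\{\}/g, (match) => (i < args.length ? args[i++] : match));
}
```

This also makes the one-value-per-placeholder rule concrete: passing three values with a single `{}` only surfaces the first value.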
```typescript -log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) +log.info("Message to be displayed: {}, {}, {}", [ + value.toString(), + anotherValue.toString(), + "already a string", +]); ``` #### Loggning av ett eller flera värden @@ -618,11 +625,11 @@ export function handleSomeEvent(event: SomeEvent): void { I exemplet nedan loggas endast det första värdet i argument arrayen, trots att arrayen innehåller tre värden. ```typescript -let myArray = ['A', 'B', 'C'] +let myArray = ["A", "B", "C"]; export function handleSomeEvent(event: SomeEvent): void { // Visar : "Mitt värde är: A" (Även om tre värden skickas till `log.info`) - log.info('Mitt värde är: {}', myArray) + log.info("Mitt värde är: {}", myArray); } ``` @@ -631,11 +638,14 @@ export function handleSomeEvent(event: SomeEvent): void { Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. 
```typescript -let myArray = ['A', 'B', 'C'] +let myArray = ["A", "B", "C"]; export function handleSomeEvent(event: SomeEvent): void { // Visar: "Mitt första värde är: A, andra värdet är: B, tredje värdet är: C" - log.info('My first value is: {}, second value is: {}, third value is: {}', myArray) + log.info( + "My first value is: {}, second value is: {}, third value is: {}", + myArray + ); } ``` @@ -646,7 +656,7 @@ För att visa ett specifikt värde i arrayen måste det indexeras och tillhandah ```typescript export function handleSomeEvent(event: SomeEvent): void { // Visar : "Mitt tredje värde är C" - log.info('My third value is: {}', [myArray[2]]) + log.info("My third value is: {}", [myArray[2]]); } ``` @@ -655,21 +665,21 @@ export function handleSomeEvent(event: SomeEvent): void { I exemplet nedan loggas blocknummer, blockhash och transaktionshash från en händelse: ```typescript -import { log } from '@graphprotocol/graph-ts' +import { log } from "@graphprotocol/graph-ts"; export function handleSomeEvent(event: SomeEvent): void { - log.debug('Block number: {}, block hash: {}, transaction hash: {}', [ + log.debug("Block number: {}, block hash: {}, transaction hash: {}", [ event.block.number.toString(), // "47596000" event.block.hash.toHexString(), // "0x..." event.transaction.hash.toHexString(), // "0x..." - ]) + ]); } ``` ### IPFS API ```typescript -import { ipfs } from '@graphprotocol/graph-ts' +import { ipfs } from "@graphprotocol/graph-ts" ``` Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. 
@@ -678,13 +688,13 @@ För att läsa en fil från IPFS med en given IPFS-hash eller sökväg görs fö ```typescript // Placera detta i en händelsehanterare i mappningen -let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D' -let data = ipfs.cat(hash) +let hash = "QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D"; +let data = ipfs.cat(hash); // Sökvägar som `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile` // som inkluderar filer i kataloger stöds också -let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' -let data = ipfs.cat(path) +let path = "QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile"; +let data = ipfs.cat(path); ``` **Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. @@ -692,41 +702,41 @@ let data = ipfs.cat(path) It is also possible to process larger files in a streaming fashion with `ipfs.map`. 
The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript -import { JSONValue, Value } from '@graphprotocol/graph-ts' +import { JSONValue, Value } from "@graphprotocol/graph-ts"; export function processItem(value: JSONValue, userData: Value): void { // Se JSONValue-dokumentationen för mer information om hur man hanterar // med JSON-värden - let obj = value.toObject() - let id = obj.get('id') - let title = obj.get('title') + let obj = value.toObject(); + let id = obj.get("id"); + let title = obj.get("title"); if (!id || !title) { - return + return; } // Callbacks kan också skapa enheter - let newItem = new Item(id) - newItem.title = title.toString() - newitem.parent = userData.toString() // Ange parent till "parentId" - newitem.save() + let newItem = new Item(id); + newItem.title = title.toString(); + newItem.parent = userData.toString(); // Ange parent till "parentId" + newItem.save(); } // Placera detta i en händelsehanterare i mappningen -ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) +ipfs.map("Qm...", "processItem", Value.fromString("parentId"), ["json"]); // Alternativt kan du använda `ipfs.mapJSON`. -ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) +ipfs.mapJSON("Qm...", "processItem", Value.fromString("parentId")); ``` The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`.
If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API ```typescript -import { crypto } from '@graphprotocol/graph-ts' +import { crypto } from "@graphprotocol/graph-ts"; ``` The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: @@ -736,7 +746,7 @@ The `crypto` API makes a cryptographic functions available for use in mappings. ### JSON API ```typescript -import { json, JSONValueKind } from '@graphprotocol/graph-ts' +import { json, JSONValueKind } from "@graphprotocol/graph-ts" ``` JSON data can be parsed using the `json` API: @@ -836,7 +846,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -887,4 +897,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. 
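The typed `context` entries described above behave like a tagged key-value store with type-checked getters. A hedged plain-TypeScript sketch of that behavior (the `ContextSketch` class is illustrative only; the real `DataSourceContext` class lives in graph-ts):

```typescript
// Illustrative tagged-value store mimicking manifest `context` entries.
type ContextValue =
  | { type: "Bool"; data: boolean }
  | { type: "String"; data: string }
  | { type: "Int"; data: number };

class ContextSketch {
  private entries = new Map<string, ContextValue>();

  set(key: string, value: ContextValue): void {
    this.entries.set(key, value);
  }

  // Typed getter: fails loudly on a missing key or a type mismatch
  // rather than silently returning a default value.
  getString(key: string): string {
    const v = this.entries.get(key);
    if (!v || v.type !== "String") {
      throw new Error(`no String value for "${key}"`);
    }
    return v.data;
  }
}
```

The tag on each value is what lets a mapping ask for `getString("network")` and get either the declared string or an explicit error, never a silently coerced value.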
diff --git a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx index b1f7b27f220a..dd4d5e876a6a 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Vanliga problem med AssemblyScript --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). 
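Because closure scope is not inherited, AssemblyScript mapping code typically threads state through function parameters instead of capturing it from an enclosing scope. A plain-TypeScript sketch of that capture-free style (TypeScript itself permits capture; the point here is the structure that also compiles under AssemblyScript's restrictions):

```typescript
// Capture-free style: every input arrives as a parameter, so nothing
// depends on closure scope being available.
function isAboveThreshold(value: number, threshold: number): boolean {
  return value >= threshold;
}

function countAtLeast(values: number[], threshold: number): number {
  let count = 0;
  for (let i = 0; i < values.length; i++) {
    // A nested closure over `threshold` would be the idiomatic TS/JS
    // choice, but passing it explicitly keeps the code portable.
    if (isAboveThreshold(values[i], threshold)) {
      count++;
    }
  }
  return count;
}
```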
diff --git a/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx index 8905ec3abf61..21e3401cd8e9 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Installera Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Översikt -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Komma igång @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Skapa en Subgraf ### Från ett befintligt avtal -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `<SUBGRAPH_SLUG>` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### Från ett exempel på en undergraf -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is ABI-filerna måste matcha ditt/dina kontrakt. Det finns några olika sätt att få ABI-filer: - Om du bygger ditt eget projekt har du förmodligen tillgång till dina senaste ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. - -## SpecVersion Releases - -| Version | Versionsanteckningar | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. 
| +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx index 426092a76eb4..fd4d6fe36903 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Översikt -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. 
As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two The following scalars are supported in the GraphQL API: -| Typ | Beskrivning | -| --- | --- | -| `Bytes` | Bytematris, representerad som en hexadecimal sträng. Vanligt används för Ethereum-hashar och adresser. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Typ | Beskrivning | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Bytematris, representerad som en hexadecimal sträng. Vanligt används för Ethereum-hashar och adresser. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | ### Enums @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -För en-till-många-relationer bör relationen alltid lagras på 'en'-sidan, och 'många'-sidan bör alltid härledas. 
Att lagra relationen på detta sätt, istället för att lagra en array av entiteter på 'många'-sidan, kommer att resultera i dramatiskt bättre prestanda både för indexering och för frågning av subgraphen. Generellt sett bör lagring av arrayer av entiteter undvikas så mycket som är praktiskt möjligt.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical.

#### Exempel

@@ -160,7 +160,7 @@ type TokenBalance @entity {
 }
```

-Here is an example of how to write a mapping for a subgraph with reverse lookups:
+Here is an example of how to write a mapping for a Subgraph with reverse lookups:

```typescript
let token = new Token(event.address) // Create Token
@@ -231,7 +231,7 @@ query usersWithOrganizations {
 }
```

-Detta mer avancerade sätt att lagra många-till-många-relationer kommer att leda till att mindre data lagras för subgrafen, och därför till en subgraf som ofta är dramatiskt snabbare att indexera och att fråga.
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and query.
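The join-entity pattern behind the `usersWithOrganizations` query above can be sketched in `schema.graphql` roughly as follows. The entity and field names are illustrative, not the docs' exact example:

```graphql
type User @entity {
  id: Bytes!
  name: String!
  # Virtual field derived from UserOrganization.user; never set in mappings
  organizations: [UserOrganization!]! @derivedFrom(field: "user")
}

type Organization @entity {
  id: Bytes!
  name: String!
  members: [UserOrganization!]! @derivedFrom(field: "organization")
}

# One entity per membership; its id can be built from the two sides,
# e.g. user.id.concat(organization.id)
type UserOrganization @entity {
  id: Bytes!
  user: User!
  organization: Organization!
}
```

Because both array-valued fields are `@derivedFrom`, only the small `UserOrganization` rows are ever written, which is what makes this layout cheap to index and query.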
### Lägga till kommentarer i schemat @@ -259,7 +259,12 @@ type _Schema_ name: "bandSearch" language: en algorithm: rank - include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + include: [ + { + entity: "Band" + fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] + } + ] ) type Band @entity { @@ -287,7 +292,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Stödda språk @@ -295,24 +300,24 @@ Att välja ett annat språk kommer att ha en definitiv, om än ibland subtil, ef Stödda språkordböcker: -| Code | Ordbok | -| ----- | ------------ | -| enkel | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | Portugisiska | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| Code | Ordbok | +| ------ | ------------ | +| enkel | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portugisiska | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | ### Rankningsalgoritmer diff --git a/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx index 9f06ce8fcd1d..3c7846394f04 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,20 +4,32 @@ 
title: Starting Your Subgraph ## Översikt -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. 
[Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Versionsanteckningar | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
| diff --git a/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx index e9bac4f876b1..53ee94fe9c8f 100644 --- a/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/sv/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Översikt -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,12 +24,12 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml -specVersion: 0.0.4 +specVersion: 1.3.0 description: Gravatar for Ethereum repository: https://github.com/graphprotocol/graph-tooling schema: @@ -54,7 +54,7 @@ dataSources: data: 'bar' mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). De viktiga posterna att uppdatera för manifestet är: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. 
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.

- `features`: a list of all used [feature](#experimental-features) names.

-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.

-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.

- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.

- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.

-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`.
These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. 
An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -131,7 +131,7 @@ dataSources: abi: Gravity mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript entities: - Gravatar @@ -149,11 +149,11 @@ dataSources: ## Anropsbehandlare -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. 
This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.

Anropsbehandlare utlöses endast i ett av två fall: när den specificerade funktionen anropas av ett konto som inte är kontraktet självt eller när den är markerad som extern i Solidity och anropas som en del av en annan funktion i samma kontrakt.

-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API.
If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.

### Definiera en Anropsbehandlare

@@ -169,7 +169,7 @@ dataSources:
       abi: Gravity
     mapping:
       kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       entities:
         - Gravatar
@@ -186,18 +186,18 @@ The `function` is the normalized function signature to filter calls by. The `han

### Kartläggningsfunktion

-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:

```typescript
-import { CreateGravatarCall } from '../generated/Gravity/Gravity'
-import { Transaction } from '../generated/schema'
+import { CreateGravatarCall } from "../generated/Gravity/Gravity";
+import { Transaction } from "../generated/schema";

export function handleCreateGravatar(call: CreateGravatarCall): void {
-  let id = call.transaction.hash
-  let transaction = new Transaction(id)
-  transaction.displayName = call.inputs._displayName
-  transaction.imageUrl = call.inputs._imageUrl
-  transaction.save()
+  let id = call.transaction.hash;
+  let transaction = new Transaction(id);
+  transaction.displayName = call.inputs._displayName;
+  transaction.imageUrl = call.inputs._imageUrl;
+  transaction.save();
}
```

@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a

## Blockbehandlare

-Förutom
att prenumerera på kontrakts händelser eller funktionsanrop kan en subgraf vilja uppdatera sina data när nya block läggs till i kedjan. För att uppnå detta kan en subgraf köra en funktion efter varje block eller efter block som matchar en fördefinierad filter.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.

### Stödda filter

@@ -218,7 +218,7 @@ filter:

_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._

-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.

Avsaknaden av ett filter för en blockhanterare kommer att säkerställa att hanteraren kallas för varje block. En datakälla kan endast innehålla en blockhanterare för varje filttyp.

@@ -232,7 +232,7 @@ dataSources:
       abi: Gravity
     mapping:
       kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
       language: wasm/assemblyscript
       entities:
         - Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
     every: 10
```

-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
This configuration allows the Subgraph to perform specific operations at regular block intervals. #### En Gång Filter @@ -276,27 +276,27 @@ blockHandlers: kind: once ``` -Den definierade hanteraren med filtret once kommer att anropas endast en gång innan alla andra hanterare körs. Denna konfiguration gör det möjligt för subgrafen att använda hanteraren som en initialiseringshanterare, som utför specifika uppgifter i början av indexeringen. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. ```ts export function handleOnce(block: ethereum.Block): void { - let data = new InitialData(Bytes.fromUTF8('initial')) - data.data = 'Setup data here' - data.save() + let data = new InitialData(Bytes.fromUTF8("initial")); + data.data = "Setup data here"; + data.save(); } ``` ### Kartläggningsfunktion -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript -import { ethereum } from '@graphprotocol/graph-ts' +import { ethereum } from "@graphprotocol/graph-ts"; export function handleBlock(block: ethereum.Block): void { - let id = block.hash - let entity = new Block(id) - entity.save() + let id = block.hash; + let entity = new Block(id); + entity.save(); } ``` @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. 
By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -360,7 +360,7 @@ dataSources: abi: Factory mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -390,7 +390,7 @@ templates: abi: Exchange mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/exchange.ts entities: @@ -414,12 +414,12 @@ templates: In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. ```typescript -import { Exchange } from '../generated/templates' +import { Exchange } from "../generated/templates"; export function handleNewExchange(event: NewExchange): void { // Start indexing the exchange; `event.params.exchange` is the // address of the new exchange contract - Exchange.create(event.params.exchange) + Exchange.create(event.params.exchange); } ``` @@ -432,29 +432,29 @@ export function handleNewExchange(event: NewExchange): void { Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. 
That information can be passed into the instantiated data source, like so:

```typescript
-import { Exchange } from '../generated/templates'
+import { Exchange } from "../generated/templates";

export function handleNewExchange(event: NewExchange): void {
-  let context = new DataSourceContext()
-  context.setString('tradingPair', event.params.tradingPair)
-  Exchange.createWithContext(event.params.exchange, context)
+  let context = new DataSourceContext();
+  context.setString("tradingPair", event.params.tradingPair);
+  Exchange.createWithContext(event.params.exchange, context);
}
```

Inside a mapping of the `Exchange` template, the context can then be accessed:

```typescript
-import { dataSource } from '@graphprotocol/graph-ts'
+import { dataSource } from "@graphprotocol/graph-ts";

-let context = dataSource.context()
-let tradingPair = context.getString('tradingPair')
+let context = dataSource.context();
+let tradingPair = context.getString("tradingPair");
```

There are setters and getters like `setString` and `getString` for all value types.

## Startblock

-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml dataSources: @@ -467,7 +467,7 @@ dataSources: startBlock: 6627917 mapping: kind: ethereum/events - apiVersion: 0.0.6 + apiVersion: 0.0.9 language: wasm/assemblyscript file: ./src/mappings/factory.ts entities: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. 
History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. 
Below are examples of how to configure both options in your Subgraph's settings:

To retain a specific amount of historical data:

@@ -532,3 +532,18 @@ To preserve the complete history of entity states:

 indexerHints:
   prune: never
```
+
+## SpecVersion Releases
+
+| Version | Release notes |
+| :-----: | ------------- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features.
|
diff --git a/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx
index 49aea6a7f4da..83b346d47707 100644
--- a/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/sv/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@ title: Enhetsprovningsramverk
 ---

-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.

## Benefits of Using Matchstick

- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.

## Getting Started

@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra

### Using Matchstick

-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).

### CLI options

@@ -113,7 +113,7 @@ graph test path/to/file.test.ts

```sh
-c, --coverage Run the tests in coverage mode
--d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph)
+-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph)
-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information
-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes)
@@ -145,17 +145,17 @@ libsFolder: path/to/libs
manifestPath: path/to/subgraph.yaml
```

-### Demo undergraf
+### Demo Subgraph

You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)

### Video tutorials

-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)

## Tests structure

-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_

### describe()

@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im

And there we have created our first test!
👏

-För att köra våra tester behöver du helt enkelt köra följande i din subgrafs rotmapp:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:

`graph test Gravity`

@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri

Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.

-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:

`.test.ts` file:

@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'

-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'

test('ipfs.cat', () => {

@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
-      apiVersion: 0.0.6
+      apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {

## Test Coverage

-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in its very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.

@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as

## Additional Resources

-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).

## Feedback

diff --git a/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx
index 8be847bc8fab..b45b0701bfdd 100644
--- a/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/sv/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---

-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Distribuera undergrafen till flera nätverk +## Deploying the Subgraph to multiple networks -I vissa fall vill du distribuera samma undergraf till flera nätverk utan att duplicera all dess kod. Den största utmaningen med detta är att kontraktsadresserna på dessa nätverk är olika. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -20,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... 
@@ -96,7 +97,7 @@ yarn build --network sepolia
yarn build --network sepolia --network-file path/to/config
```

-The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file should now look like this:

```yaml
# ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config

One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of the manifest with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).

-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:

```json
{
@@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional
}
```

-To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands:

```sh
# Mainnet:
@@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e

**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
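To make the templating idea above concrete, here is a minimal TypeScript sketch of the substitution step. The `renderManifest` helper and the inline template are hypothetical illustrations only — they are not part of `graph-cli`, Mustache, or Handlebars; in practice you would run the Mustache tooling itself against your manifest template.

```typescript
// Hypothetical helper illustrating Mustache-style substitution:
// replace {{placeholders}} in a manifest template with per-network values.
type NetworkConfig = Record<string, string>;

function renderManifest(template: string, config: NetworkConfig): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    // Leave unknown placeholders untouched so missing config is easy to spot.
    key in config ? config[key] : match
  );
}

// Example per-network config (addresses here are made up for illustration).
const template = "network: {{network}}\nsource:\n  address: '{{address}}'";
const mainnet: NetworkConfig = {
  network: "mainnet",
  address: "0x1234567890123456789012345678901234567890",
};

console.log(renderManifest(template, mainnet));
```

The same template rendered with a `sepolia` config file would produce the second manifest, which is exactly what the two `prepare:` scripts in the example above automate.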
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.

-## Subgraph Studio subgraf arkivpolitik
+## Subgraph Studio Subgraph archive policy

-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:

- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days

-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.

-Varje subgraf som påverkas av denna policy har en möjlighet att ta tillbaka versionen i fråga.
+Every Subgraph affected by this policy has the option to bring the version in question back.

-## Kontroll av undergrafens hälsa
+## Checking Subgraph health

-Om en subgraf synkroniseras framgångsrikt är det ett gott tecken på att den kommer att fortsätta att fungera bra för alltid.
Nya triggers i nätverket kan dock göra att din subgraf stöter på ett otestat feltillstånd eller så kan den börja halka efter på grund av prestandaproblem eller problem med nodoperatörerna. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. 
In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx index cf6d67e5bb9d..dc1facd6d5cb 100644 --- a/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/sv/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
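As a sketch of how a monitoring script might interpret the fields returned by the status query above — `chainHeadBlock`, `latestBlock`, `synced`, `health`, and `fatalError` — consider the following TypeScript helper. The response shape is assumed from the query shown in the docs; the `describeStatus` function itself is illustrative and not part of any Graph tooling.

```typescript
// Assumed shape of one entry from `indexingStatuses`, based on the query above.
interface ChainStatus {
  chainHeadBlock: { number: string };
  latestBlock: { number: string };
}

interface IndexingStatus {
  synced: boolean;
  health: "healthy" | "failed";
  fatalError: { message: string } | null;
  chains: ChainStatus[];
}

// Turn a status payload into a one-line human-readable summary.
function describeStatus(status: IndexingStatus): string {
  if (status.health === "failed") {
    return `failed: ${status.fatalError?.message ?? "unknown error"}`;
  }
  const chain = status.chains[0];
  // Block numbers arrive as strings in GraphQL responses; compare numerically.
  const behind =
    Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number);
  if (!status.synced) return `syncing, ${behind} blocks behind`;
  return behind > 0
    ? `synced but lagging by ${behind} blocks`
    : "healthy and caught up";
}
```

A script could run the GraphQL query against the index-node endpoint, pass each result through `describeStatus`, and alert when the summary is anything other than "healthy and caught up".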
## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Skapa och hantera API nycklar för specifika undergrafer +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. 
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs

### How to Create a Subgraph in Subgraph Studio

@@ -57,31 +57,25 @@

### Subgraph Compatibility with The Graph Network

-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- Får inte använda någon av följande egenskaper:
- ipfs.cat & ipfs.map
- Icke dödliga fel
- Ympning
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.

## Initialize Your Subgraph

-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:

```bash
graph init 
```

-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio, see image below:

![Subgraph Studio - Slug](/img/doc-subgraph-slug.png)

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Graph Auth

-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.

Then, use the following command to authenticate from the CLI:

@@ -91,11 +85,11 @@ graph auth 

## Deploying a Subgraph

-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.

-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.

-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:

```bash
graph deploy 
```

After running this command, the CLI will ask for a version label.

@@ -108,30 +102,30 @@

## Testing Your Subgraph

-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.

-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. 
In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).

## Automatic Archiving of Subgraph Versions

-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.

-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

![Subgraph Studio - Unarchive](/img/Unarchive.png)

diff --git a/website/src/pages/sv/subgraphs/developing/developer-faq.mdx b/website/src/pages/sv/subgraphs/developing/developer-faq.mdx
index 347f3caa9805..36942bf1dce7 100644
--- a/website/src/pages/sv/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/sv/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o

## Subgraph Related

-### 1. Vad är en subgraf?
+### 1. What is a Subgraph?
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. 
However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Kan jag ändra det GitHub-konto som är kopplat till min subgraf? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -Du måste distribuera om subgrafen, men om subgrafens ID (IPFS-hash) inte ändras behöver den inte synkroniseras från början. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. 
How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Inom en subgraf behandlas händelser alltid i den ordning de visas i blocken, oavsett om det är över flera kontrakt eller inte. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. 
Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. @@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -94,7 +94,7 @@ You can find the list of the supported networks [here](/supported-networks/). Yes. You can do this by importing `graph-ts` as per the example below: ```javascript -import { dataSource } from '@graphprotocol/graph-ts' +import { dataSource } from "@graphprotocol/graph-ts" dataSource.network() dataSource.address() @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. 
What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? -Ja! Prova följande kommando och ersätt "organization/subgraphName" med organisationen under vilken den är publicerad och namnet på din subgraf: +Yes! Try the following command, substituting "organization/subgraphName" with the organization it is published under and the name of your Subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. 
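Question 21 above points to a CLI command, but the same information is also exposed through GraphQL itself: every Subgraph served by Graph Node provides a `_meta` field on its query endpoint. A minimal sketch of such a query:

```graphql
{
  _meta {
    block {
      number
      hash
    }
    hasIndexingErrors
  }
}
```

`block.number` is the latest block the Subgraph has processed, and `hasIndexingErrors` flags whether indexing has failed along the way.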
## Miscellaneous diff --git a/website/src/pages/sv/subgraphs/developing/introduction.mdx b/website/src/pages/sv/subgraphs/developing/introduction.mdx index bf5f1bb0f311..c4e9fbd9c78a 100644 --- a/website/src/pages/sv/subgraphs/developing/introduction.mdx +++ b/website/src/pages/sv/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. 
It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx index ae778febe161..b8c2330ca49d 100644 --- a/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,30 +2,30 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. 
- - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Kuratorer kommer inte längre kunna signalera på subgrafet. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. 
diff --git a/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx index 0fc6632cbc40..e80bde3fa6d2 100644 --- a/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. 
Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 24079d30b9b4..e13f4a7f9f7c 100644 --- a/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/sv/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -1,10 +1,11 @@ --- title: Publicera en Subgraph på Det Decentraliserade Nätverket +sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). 
-All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Uppdatera metadata för en publicerad subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
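Taken together, the CLI publishing steps above look roughly like the following session; the exact prompts and flag behavior depend on your `graph-cli` version (0.73.0 or later), so treat this as a sketch rather than exact output:

```
# generate types from the schema and ABIs, then compile the Subgraph
graph codegen && graph build

# open the browser-based publish flow (wallet connection, metadata, network choice)
graph publish
```

Run `graph publish --help` to see the flags your installed version supports.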
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -61,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. > -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. 
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. 
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/sv/subgraphs/developing/subgraphs.mdx b/website/src/pages/sv/subgraphs/developing/subgraphs.mdx index a6fa5ca3a4f6..9ad42542beda 100644 --- a/website/src/pages/sv/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/sv/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgrafer ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). 
## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). -## Livscykel för undergrafer +## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. 
[Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. 
+- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. +- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. 
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume, and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph if they deem the Subgraph likely to drive query volume.

### Querying & Application Development

Subgraphs on The Graph Network receive 100,000 free queries per month, after which developers can [pay for queries with GRT or a credit card](/subgraphs/billing/).

-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).

### Updating Subgraphs

-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.

-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs

-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/sv/subgraphs/explorer.mdx b/website/src/pages/sv/subgraphs/explorer.mdx
index 9dfc11588323..87b670a3247d 100644
--- a/website/src/pages/sv/subgraphs/explorer.mdx
+++ b/website/src/pages/sv/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Graph Explorer
---

-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).

## Overview

-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:

-- Signalera/Sluta signalera på subgraffar
+- Signal/Un-signal on Subgraphs
- View more details such as charts, the current deployment ID, and other metadata
-- Växla versioner för att utforska tidigare iterationer av subgraffen
-- Fråga subgraffar via GraphQL
-- Testa subgraffar i lekplatsen
-- Visa indexerare som indexerar på en viss subgraff
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Subgraph statistics (allocations, Curators, etc.)
-- Visa enheten som publicerade subgraffen
+- View the entity that published the Subgraph

![Explorer Image 3](/img/Explorer-Signal-Unsignal.png)

@@ -53,7 +53,7 @@ On this page, you can see the following:

- Indexers who collected the most query fees
- Indexers with the highest estimated APR

-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.

### Participants Page

@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

![Explorer Image 4](/img/Indexer-Pane.png)

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.

-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.

**Specifics**

@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s

- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.

@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici

#### 2. Curators

-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve.
In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.

-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.

In the Curator table listed below, you can see:

@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ

A few key details to note:

-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that had been open for many days).

![Explorer Image 8](/img/Network-Stats.png)

@@ -178,15 +178,15 @@ In this section, you can view the following:

### Subgraphs Tab

-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.

-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

![Explorer Image 11](/img/Subgraphs-Overview.png)

### Indexing Tab

-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.

In this section, you will also find information about your net Indexer rewards and net query fees.
You will see the following metrics:

@@ -223,13 +223,13 @@ Kom ihåg att denna tabell kan rullas horisontellt, så om du rullar hela vägen

### Curation Tab

-I Kureringstabellen hittar du alla subgraffar du signalerar på (vilket gör det möjligt för dig att ta emot frågeavgifter). Signalering gör att kuratorer kan informera indexerare om vilka subgraffar som är värdefulla och pålitliga, vilket signalerar att de bör indexerats.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, signaling that they should be indexed.

Within this tab, you'll find an overview of:

-- Alla subgraffar du signalerar på med signaldetaljer
-- Andelar totalt per subgraff
-- Frågebelöningar per subgraff
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated date details

![Explorer Image 14](/img/Curation-Stats.png)
diff --git a/website/src/pages/sv/subgraphs/guides/arweave.mdx b/website/src/pages/sv/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..4a5591b45c72
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach out to us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently, and this is the main difference between Arweave and IPFS: IPFS lacks this permanence, and files stored on Arweave cannot be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol into a number of different programming languages. For more information, you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions); it is not indexing the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph Components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+This defines the data sources of interest and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph with GraphQL. This is similar to a model for an API, where the model defines the structure of a request.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3.
AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data is translated and stored based on the schema you have defined.
+
+During Subgraph development, there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+  file: ./schema.graphql # link to the schema file
+dataSources:
+  - kind: arweave
+    name: arweave-blocks
+    network: arweave-mainnet # The Graph only supports Arweave Mainnet
+    source:
+      owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+      startBlock: 0 # set this to 0 to start indexing from chain genesis
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+      entities:
+        - Block
+        - Transaction
+      blockHandlers:
+        - handler: handleBlock # the function name in the mapping file
+      transactionHandlers:
+        - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional `source.owner` field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`.
+
+> The `source.owner` can be the owner's address or their public key.
+>
+> Transactions are the building blocks of the Arweave permaweb, and they are objects created by end users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
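To make the handler shape above concrete, here is a purely illustrative sketch in plain TypeScript. Real mappings are written in AssemblyScript against `graph-ts`; the `ArweaveBlock`/`BlockEntity` types and the in-memory `store` below are hypothetical stand-ins for the entity store, not part of the actual API.

```typescript
// Hypothetical, simplified shapes — the real types live in graph-ts.
interface ArweaveBlock {
  height: number
  indepHash: string
  timestamp: number
}

interface BlockEntity {
  id: string
  height: number
  timestamp: number
}

// Stand-in for Graph Node's entity store.
const store = new Map<string, BlockEntity>()

function handleBlock(block: ArweaveBlock): void {
  // Key the entity by the block's independent hash, mirroring how a real
  // mapping would choose a stable entity ID.
  store.set(block.indepHash, {
    id: block.indepHash,
    height: block.height,
    timestamp: block.timestamp,
  })
}

handleBlock({ height: 100, indepHash: 'abc', timestamp: 1700000000 })
console.log(store.get('abc')?.height) // 100
```

The real `handleBlock` would call `Block.save()` instead of writing to a map, but the control flow — receive a block, derive an ID, persist an entity — is the same.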
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph only indexes Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The `source.owner` can be the user's public key or account address.
+
+### What is the current encryption format?
+
+Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
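As a quick illustration of the hex-vs-base64url distinction, the snippet below uses Node's `Buffer` API (an assumption of this sketch — mapping code itself cannot use `Buffer`) to render the same bytes both ways:

```typescript
// The same five bytes ("hello") render very differently per encoding,
// which is why a Subgraph's stored hex Bytes won't visually match the
// base64url IDs shown in Arweave block explorers.
const bytes = Uint8Array.from([0x68, 0x65, 0x6c, 0x6c, 0x6f])

const asHex = Buffer.from(bytes).toString('hex')
const asBase64Url = Buffer.from(bytes).toString('base64url')

console.log(asHex)       // '68656c6c6f' — how raw Bytes surface in a Subgraph
console.log(asBase64Url) // 'aGVsbG8' — how explorers typically display IDs
```

The `bytesToBase64` helper below performs the equivalent conversion inside AssemblyScript mappings, where `Buffer` is not available.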
+ +The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: + +``` +const base64Alphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/" +]; + +const base64UrlAlphabet = [ + "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", + "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", + "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", + "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", + "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_" +]; + +function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string { + let alphabet = urlSafe? base64UrlAlphabet : base64Alphabet; + + let result = '', i: i32, l = bytes.length; + for (i = 2; i < l; i += 3) { + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)]; + result += alphabet[bytes[i] & 0x3F]; + } + if (i === l + 1) { // 1 octet yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[(bytes[i - 2] & 0x03) << 4]; + if (!urlSafe) { + result += "=="; + } + } + if (!urlSafe && i === l) { // 2 octets yet to write + result += alphabet[bytes[i - 2] >> 2]; + result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)]; + result += alphabet[(bytes[i - 1] & 0x0F) << 2]; + if (!urlSafe) { + result += "="; + } + } + return result; +} +``` diff --git a/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx new file mode 100644 index 000000000000..7da81474c9ad --- /dev/null +++ 
b/website/src/pages/sv/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for Subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have been added, run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2.
Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+    ├── contract/ # Folder for individual contract files
+    ├── abi.json # Contract ABI
+    └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for Subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support Subgraph development with ease.
diff --git a/website/src/pages/sv/subgraphs/guides/enums.mdx b/website/src/pages/sv/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..3b90caab564e
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a data type that allows you to define a fixed set of allowed values.
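In TypeScript terms, the same guarantee can be sketched with a string enum; the `TokenStatus` members here mirror the schema example that follows, and the `isValidStatus` helper is an illustrative name, not part of any library:

```typescript
// A string enum admits exactly the predefined members — nothing else.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

function isValidStatus(value: string): boolean {
  // Object.values on a string enum yields exactly the allowed strings.
  return (Object.values(TokenStatus) as string[]).includes(value)
}

console.log(isValidStatus('SecondOwner')) // true
console.log(isValidStatus('Orgnalowner')) // false — the typo is rejected
```

GraphQL schema enums enforce the same constraint at the store level: an entity field typed with an enum can only ever hold one of the declared values.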
+
+### Example of Enums in Your Schema
+
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+
+You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
+
+Here's what an enum definition might look like in your schema, based on the example above:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.
+
+To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).
+
+## Benefits of Using Enums
+
+- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
+- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
+- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.
+
+### Without Enums
+
+If you choose to define the type as a string instead of using an Enum, your code might look like this:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenId: BigInt!
+  owner: Bytes! # Owner of the token
+  tokenStatus: String! # String field to track token status
+  timestamp: BigInt!
+}
+```
+
+In this schema, `TokenStatus` is a simple string with no specific, allowed values.
+
+#### Why is this a problem?
+
+- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned.
This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with the specific values `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+  OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+  OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+  SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+  LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+  # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
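A hedged sketch of that logging step, in plain TypeScript rather than AssemblyScript: the `Sale` record and `recordSale` helper are illustrative names invented here, not part of the CryptoCoven mappings.

```typescript
// Marketplace mirrors the schema enum above; only its members can be
// passed to recordSale, so a misspelled marketplace is a compile error.
enum Marketplace {
  OpenSeaV1 = 'OpenSeaV1',
  OpenSeaV2 = 'OpenSeaV2',
  SeaPort = 'SeaPort',
  LooksRare = 'LooksRare',
}

interface Sale {
  tokenId: number
  marketplace: Marketplace
}

const sales: Sale[] = []

function recordSale(tokenId: number, marketplace: Marketplace): void {
  // The type system guarantees marketplace is one of the enum members.
  sales.push({ tokenId, marketplace })
}

recordSale(1, Marketplace.SeaPort)
console.log(sales[0].marketplace) // 'SeaPort'
```

In a real mapping, the equivalent step sets the entity's enum field to the enum value's string representation before saving.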
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+  }
+  // ...and other marketplaces
+  return 'Unknown' // Fallback so the function returns a value for any unhandled marketplace
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+
+```gql
+{
+  marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Result 2
+
+The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "Unknown",
+        "transactionCount": "222"
+      }
+    ]
+  }
+}
+```
+
+#### Query 3: Marketplace Interactions with High Transaction Counts
+
+This query does the following:
+
+- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
+- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.
+
+```gql
+{
+  marketplaceInteractions(
+    first: 4
+    orderBy: transactionCount
+    orderDirection: desc
+    where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
+  ) {
+    marketplace
+    transactionCount
+  }
+}
+```
+
+#### Result 3
+
+Expected output includes the marketplaces that meet the criteria, each represented by an enum value:
+
+```gql
+{
+  "data": {
+    "marketplaceInteractions": [
+      {
+        "marketplace": "NFTX",
+        "transactionCount": "201"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "148"
+      },
+      {
+        "marketplace": "CryptoCoven",
+        "transactionCount": "117"
+      },
+      {
+        "marketplace": "OpenSeaV1",
+        "transactionCount": "111"
+      }
+    ]
+  }
+}
+```
+
+## Additional Resources
+
+For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/sv/subgraphs/guides/grafting.mdx b/website/src/pages/sv/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..d88057cdac80
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
+
+### Best Practices
+
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
+
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+
+By following these guidelines, you minimize risks and ensure a smoother migration process.
+
+## Building an Existing Subgraph
+
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
+
+- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers.
See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum
+    name: Lock
+    network: sepolia
+    source:
+      address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+      abi: Lock
+      startBlock: 5955690
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Withdrawal
+      abis:
+        - name: Lock
+          file: ./abis/Lock.json
+      eventHandlers:
+        - event: Withdrawal(uint256,uint256)
+          handler: handleWithdrawal
+      file: ./src/lock.ts
+```
+
+- The `Lock` data source is the ABI and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+  - grafting # feature name
+graft:
+  base: Qm... # Subgraph ID of base Subgraph
+  block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2.
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This can happen when you update your dapp, redeploy a contract, and so on.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground:
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
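Conceptually, the behavior can be sketched in a few lines of JavaScript. This is an illustration of graft semantics only, not Graph Node internals; the entity shape and helper name are made up:

```javascript
// Grafting copies the base Subgraph's data up to and including the graft
// block, then indexes the new data source from that block onward.
function graft(baseEntities, graftBlock, newEntities) {
  const copied = baseEntities.filter((e) => e.block <= graftBlock)
  const fresh = newEntities.filter((e) => e.block > graftBlock)
  return [...copied, ...fresh]
}

const base = [
  { id: 'event1', block: 5955700 }, // Withdrawal from the old contract
  { id: 'event2', block: 5955800 }, // Withdrawal from the old contract
]
const newContract = [{ id: 'event3', block: 5956100 }] // new contract's event

console.log(graft(base, 5956000, newContract).map((e) => e.id))
// [ 'event1', 'event2', 'event3' ]
```

This is why choosing the graft `block` after the last relevant event of the old contract preserves the full history.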
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/sv/subgraphs/guides/near.mdx b/website/src/pages/sv/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..d766a44ad511
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
+
+- Block handlers: these run on every new block
+- Receipt handlers: run every time a message is executed at a specified account
+
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+
+> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying Receipts" at some point.
+
+## Building a NEAR Subgraph
+
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
+
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
+
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
+
+There are three aspects of Subgraph definition:
+
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
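To illustrate the JSON parsing just mentioned: NEAR contracts often emit logs as stringified JSON (commonly prefixed with `EVENT_JSON:` under the NEP-297 convention). The sketch below shows the idea in plain JavaScript; real mappings would use `json.fromString(...)` in AssemblyScript, and the helper name here is made up:

```javascript
// Parse a NEAR log line that may carry a stringified JSON payload.
function parseNearLog(log) {
  const prefix = 'EVENT_JSON:'
  const payload = log.startsWith(prefix) ? log.slice(prefix.length) : log
  try {
    return JSON.parse(payload)
  } catch {
    return null // plain-text log, nothing to extract
  }
}

const event = parseNearLog('EVENT_JSON:{"standard":"nep171","event":"nft_mint"}')
console.log(event.event) // 'nft_mint'
console.log(parseNearLog('plain text log')) // null
```

A receipt handler would apply this kind of parsing to each entry in `outcome.logs` before creating entities.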
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+  prefixes:
+    - app
+    - good
+  suffixes:
+    - morning.near
+    - morning.testnet
+```
+
+NEAR data sources support two types of handlers:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Always zero when version < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only block and receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+  suffixes:
+    - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/sv/subgraphs/guides/polymarket.mdx b/website/src/pages/sv/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+![Polymarket Playground](/img/Polymarket-playground.png)
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example sends a query to the Polymarket endpoint and prints the returned data.
+
+### Sample Code (Node.js)
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..f90b30ccdd8c
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
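The distinction can be sketched as follows. This is illustrative only (`getServerApiKey` is not a Next.js API); the real mechanism is simply that environment variables without the `NEXT_PUBLIC_` prefix are never inlined into the client bundle:

```javascript
// Server-only read of the key: when this runs inside a Server Component,
// the value is used on the server and never ships to the browser.
function getServerApiKey(env) {
  const key = env.API_KEY // no NEXT_PUBLIC_ prefix, so it stays server-side
  if (!key) {
    throw new Error('API_KEY is not set; add it to .env.local')
  }
  return key
}

console.log(getServerApiKey({ API_KEY: 'secret-key' })) // 'secret-key'
```

In a real app you would pass `process.env` rather than a literal object; the failure branch is what you would hit if `.env.local` were missing.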
+ +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. + +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h3>Server Component</h3>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..805a904c7ba9
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduktion
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with **specVersion 1.3.0 or later** (use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: all Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only immutable entities can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **data sources from the same chain**
+- **Nested composition is not yet supported**:
Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot themselves use aggregations directly
+- Developers cannot compose an onchain data source with a Subgraph data source (i.e., you can’t use regular event, call, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Komma igång
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g., gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, improving both development and maintenance efficiency.
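In manifest terms, a composed Subgraph declares each source Subgraph as a `kind: subgraph` data source, with handlers triggered by the source's entities rather than by onchain events. The sketch below is a rough outline only — the exact field names and values are assumptions based on the example repository and the graph-node v0.37.0 release notes, so verify them there before use:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # the data source is another Subgraph, not an onchain contract
    name: BlockTime
    network: mainnet
    source:
      address: 'QmSourceDeploymentID' # placeholder: deployment ID of a source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/block-stats.ts
      entities:
        - BlockStats
      handlers:
        - handler: handleBlockTime # entity trigger: runs when the source stores a Block
          entity: Block
```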
+
+## Ytterligare resurser
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..75bff8ee89a8
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Snabb och enkel subgraf felsökning med gafflar
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
+
+## Ok, vad är det?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync up to block _X_.
+
+## Vad?! Hur?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_, in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Snälla, visa mig lite kod!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id.toHex().toString())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let gravatar = Gravatar.load(event.params.id.toI32().toString())
+  if (gravatar == null) {
+    log.critical('Gravatar not found!', [])
+    return
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mapping source that you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync up.
+4. If it breaks again, go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3.
Wait for it to sync up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mapping source that you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+Now, you may have 2 questions:
+
+1. fork-base what???
+2. Forking whom?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers.
While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: + +```bash +$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 +``` + +4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..9b0652bf1a85 --- /dev/null +++ b/website/src/pages/sv/subgraphs/guides/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Säker subgraf kodgenerator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Varför integrera med Subgraf Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. 
Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity, and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue and ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..de3e762e2d40
--- /dev/null
+++ b/website/src/pages/sv/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+ +## Benefits of Switching to The Graph + +- Use the same Subgraph that your apps already use with zero-downtime migration. +- Increase reliability from a global network supported by 100+ Indexers. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. + +## Upgrade Your Subgraph to The Graph in 3 Easy Steps + +1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment) +2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio) +3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network) + +## 1. Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. 
If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy <SUBGRAPH_SLUG> --ipfs-hash <your-subgraph-ipfs-hash>
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+![publish button](/img/publish-sub-transfer.png)
+
+### Query Your Subgraph
+
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query to the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Exempel
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+![Query URL](/img/cryptopunks-screenshot-transfer.png)
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+![API keys](/img/Api-keys-screenshot.png)
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Ytterligare resurser
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/sv/subgraphs/querying/best-practices.mdx b/website/src/pages/sv/subgraphs/querying/best-practices.mdx index 906948273d5f..715ae5a81b6a 100644 --- a/website/src/pages/sv/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/sv/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Bästa praxis för förfrågningar The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Hantering av subgrafer över olika blockkedjor: Frågehantering från flera subgrafer i en enda fråga +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fullt typad resultat @@ -79,7 +79,7 @@ However, as mentioned in ["Querying from an Application"](/subgraphs/querying/fr Here's how to query The Graph with `graph-client`: ```tsx -import { execute } from '../.graphclient' +import { execute } from "../.graphclient"; const query = ` query GetToken($id: ID!) { @@ -88,13 +88,13 @@ query GetToken($id: ID!) 
{ owner } } -` +`; const variables = { id: '1' } async function main() { - const result = await execute(query, variables) + const result = await execute(query, variables); // `result` är fullständigt typad! - console.log(result) + console.log(result); } main() @@ -111,15 +111,15 @@ More GraphQL client alternatives are covered in ["Querying from an Application"] A common (bad) practice is to dynamically build query strings as follows: ```tsx -const id = params.id -const fields = ['id', 'owner'] +const id = params.id; +const fields = ["id", "owner"]; const query = ` query GetToken { token(id: ${id}) { - ${fields.join('\n')} + ${fields.join("\n")} } } -` +`; // Execute query... ``` @@ -134,9 +134,9 @@ While the above snippet produces a valid GraphQL query, **it has many drawbacks* For this reason, it is recommended to always write queries as static strings: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from "your-favorite-graphql-client"; -const id = params.id +const id = params.id; const query = ` query GetToken($id: ID!) { token(id: $id) { @@ -144,7 +144,7 @@ query GetToken($id: ID!) { owner } } -` +`; const result = await execute(query, { variables: { @@ -167,9 +167,9 @@ You might want to include the `owner` field only on a particular condition. For this, you can leverage the `@include(if:...)` directive as follows: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from "your-favorite-graphql-client"; -const id = params.id +const id = params.id; const query = ` query GetToken($id: ID!, $includeOwner: Boolean) { token(id: $id) { @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/sv/subgraphs/querying/from-an-application.mdx b/website/src/pages/sv/subgraphs/querying/from-an-application.mdx index ee0bb7a2fabe..0784e371cab0 100644 --- a/website/src/pages/sv/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/sv/subgraphs/querying/from-an-application.mdx @@ -1,5 +1,6 @@ --- title: Att göra förfrågningar från en Applikation +sidebarTitle: Querying from an App --- Learn how to query The Graph from your application. @@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. 
It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. ## Using Popular GraphQL Clients @@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Hantering av subgrafer över olika blockkedjor: Frågehantering från flera subgrafer i en enda fråga +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fullt typad resultat @@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Steg 1 @@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Steg 1 @@ -257,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Steg 1 diff --git a/website/src/pages/sv/subgraphs/querying/graph-client/README.md b/website/src/pages/sv/subgraphs/querying/graph-client/README.md index 416cadc13c6f..ae01284970a6 100644 --- a/website/src/pages/sv/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/sv/subgraphs/querying/graph-client/README.md @@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for | Status | Feature | Notes | | :----: | 
---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| ✅ | Multiple indexers | based on fetch strategies | -| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | -| ✅ | Build time validations & optimizations | | -| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | -| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | -| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | -| ✅ | Local (client-side) Mutations | | -| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | -| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit | -| ✅ | Integration with `@apollo/client` | | -| ✅ | Integration with `urql` | | -| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | -| ✅ | [`@live` queries](./live.md) | Based on polling | +| ✅ | Multiple indexers | based on fetch strategies | +| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue | +| ✅ | Build time validations & optimizations | | +| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) | +| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source | +| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client | +| ✅ | Local (client-side) Mutations | | +| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) | +| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing 
multiple requests in a single call to fetch more than the indexer limit | +| ✅ | Integration with `@apollo/client` | | +| ✅ | Integration with `urql` | | +| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` | +| ✅ | [`@live` queries](./live.md) | Based on polling | > You can find an [extended architecture design here](./architecture.md) -## Getting Started +## Komma igång You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client: @@ -138,7 +138,7 @@ graphclient serve-dev And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳 -#### Examples +#### Exempel You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples: @@ -308,8 +308,8 @@ sources:
`highestValue` - - This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. + +This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated. This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources. diff --git a/website/src/pages/sv/subgraphs/querying/graph-client/live.md b/website/src/pages/sv/subgraphs/querying/graph-client/live.md index e6f726cb4352..00053b724be0 100644 --- a/website/src/pages/sv/subgraphs/querying/graph-client/live.md +++ b/website/src/pages/sv/subgraphs/querying/graph-client/live.md @@ -2,7 +2,7 @@ Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data. -## Getting Started +## Komma igång Start by adding the following configuration to your `.graphclientrc.yml` file: diff --git a/website/src/pages/sv/subgraphs/querying/graphql-api.mdx b/website/src/pages/sv/subgraphs/querying/graphql-api.mdx index e4c1fbcb94b3..17995ee09698 100644 --- a/website/src/pages/sv/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/sv/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). 
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -Detta kan vara användbart om du bara vill hämta enheter som har ändrats, till exempel sedan den senaste gången du pollade. Eller alternativt kan det vara användbart för att undersöka eller felsöka hur enheter förändras i din undergraf (om det kombineras med ett blockfilter kan du isolera endast enheter som ändrades i ett visst block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltextsökförfrågningar -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. 
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. Fulltextsökoperatorer: -| Symbol | Operatör | Beskrivning | -| --- | --- | --- | -| `&` | `And` | För att kombinera flera söktermer till ett filter för entiteter som inkluderar alla de angivna termerna | -| | | `Or` | Förfrågningar med flera söktermer separerade av ellipsen kommer att returnera alla entiteter med en matchning från någon av de angivna termerna | -| `<->` | `Follow by` | Ange avståndet mellan två ord. | -| `:*` | `Prefix` | Använd prefixsöktermen för att hitta ord vars prefix matchar (2 tecken krävs.) | +| Symbol | Operatör | Beskrivning | +| ------ | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | För att kombinera flera söktermer till ett filter för entiteter som inkluderar alla de angivna termerna | +| `\|` | `Or` | Förfrågningar med flera söktermer separerade av ellipsen kommer att returnera alla entiteter med en matchning från någon av de angivna termerna | +| `<->` | `Follow by` | Ange avståndet mellan två ord. | +| `:*` | `Prefix` | Använd prefixsöktermen för att hitta ord vars prefix matchar (2 tecken krävs.) | #### Exempel @@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
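As an aside on the entity types discussed in these docs — a minimal, hypothetical sketch of how an entity defined in `schema.graphql` maps to the auto-generated `entity` and `entities` query fields (the `Token` type and its fields are illustrative, not taken from these pages):

```graphql
# Hypothetical entity definition in schema.graphql
type Token @entity {
  id: ID!
  name: String!
}

# Graph Node would then generate singular and plural fields
# on the root Query type, queryable like this:
{
  token(id: "0x1") {
    name
  }
  tokens(first: 5, orderBy: name) {
    id
    name
  }
}
```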
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Metadata för undergrafer -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -Om ett block anges är metadata från det blocket, om inte används det senast indexerade blocket. Om det anges måste blocket vara efter undergrafens startblock och mindre än eller lika med det senast indexerade blocket. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. 
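As a sketch building on the `_meta` example in these docs, the same metadata query can be pinned to a specific block. The block number here is illustrative; per the constraint described, it must be after the Subgraph's start block and at most the latest indexed block:

```graphql
{
  _meta(block: { number: 123456 }) {
    deployment
    hasIndexingErrors
    block {
      number
      hash
      timestamp
    }
  }
}
```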
@@ -427,6 +427,6 @@ Om ett block anges är metadata från det blocket, om inte används det senast i - hash: blockets hash - nummer: blockets nummer -- timestamp: blockets timestamp, om tillgänglig (detta är för närvarande endast tillgängligt för undergrafer som indexerar EVM-nätverk) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/sv/subgraphs/querying/introduction.mdx b/website/src/pages/sv/subgraphs/querying/introduction.mdx index 5434f06414fb..7b3c151bdbbd 100644 --- a/website/src/pages/sv/subgraphs/querying/introduction.mdx +++ b/website/src/pages/sv/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Översikt -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. +When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. 
![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx index 3c3ad4ba152e..594527795da0 100644 --- a/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/sv/subgraphs/querying/managing-api-keys.mdx @@ -1,14 +1,14 @@ --- -title: Hantera dina API-nycklar +title: Managing API keys --- ## Översikt -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. 
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Mängd GRT spenderad 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can: - Visa och hantera domännamn som har auktoriserats att använda din API-nyckel - - Koppla subgrafer som kan frågas med din API-nyckel + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/sv/subgraphs/querying/python.mdx b/website/src/pages/sv/subgraphs/querying/python.mdx index 213b45f144b3..3a987546c454 100644 --- a/website/src/pages/sv/subgraphs/querying/python.mdx +++ b/website/src/pages/sv/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. 
The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/sv/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. 
However, this results in the need to update the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/sv/subgraphs/quick-start.mdx b/website/src/pages/sv/subgraphs/quick-start.mdx index b959329363d9..f3fba67ef0d7 100644 --- a/website/src/pages/sv/subgraphs/quick-start.mdx +++ b/website/src/pages/sv/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Snabbstart --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Installera Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,42 +51,42 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. 
-When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. - **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. 
-Se följande skärmdump för ett exempel för vad du kan förvänta dig när du initierar din subgraf: +See the following screenshot for an example of what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network.
A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. -När din subgraf är skriven, kör följande kommandon: +Once your Subgraph is written, run the following commands: ```` ```sh @@ -94,7 +94,7 @@ graph codegen && graph build ``` ```` -Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. +Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio. ![Deploy key](/img/subgraph-studio-deploy-key.jpg) @@ -109,37 +109,37 @@ graph deploy The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. -### 6. Review your subgraph +### 6. Review your Subgraph -If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: +If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: - Run a sample query. -- Analyze your subgraph in the dashboard to check information. -- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: +- Analyze your Subgraph in the dashboard to check information. 
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this: ![Subgraph logs](/img/subgraph-logs-image.png) -### 7. Publish your subgraph to The Graph Network +### 7. Publish your Subgraph to The Graph Network -When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: +When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following: -- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. -- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). -- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it. +- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network. +- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/). +- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it. -> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph. +> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph. #### Publishing with Subgraph Studio -To publish your subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -157,32 +157,32 @@ graph publish ``` ```` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). 
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/sv/substreams/developing/dev-container.mdx b/website/src/pages/sv/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/sv/substreams/developing/dev-container.mdx +++ b/website/src/pages/sv/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. 
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/sv/substreams/developing/sinks.mdx b/website/src/pages/sv/substreams/developing/sinks.mdx index 5ff37a31d943..36acd969b476 100644 --- a/website/src/pages/sv/substreams/developing/sinks.mdx +++ b/website/src/pages/sv/substreams/developing/sinks.mdx @@ -1,5 +1,5 @@ --- -title: Official Sinks +title: Sink your Substreams --- Choose a sink that meets your project's needs. @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. 
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks @@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de ### Official -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | -| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | -| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | -| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) | -| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | -| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | -| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | -| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- | +| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) | +| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) | +| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) | +| JS SDK | O | StreamingFast | 
[substreams-js](https://github.com/substreams-js/substreams-js) | +| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) | +| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) | +| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) | ### Community -| Name | Support | Maintainer | Source Code | -| --- | --- | --- | --- | -| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | -| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | -| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | -| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | +| Name | Support | Maintainer | Source Code | +| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- | +| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) | +| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) | +| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) | +| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) | - O = Official Support (by one of the main Substreams providers) - C = Community Support diff --git a/website/src/pages/sv/substreams/developing/solana/account-changes.mdx 
b/website/src/pages/sv/substreams/developing/solana/account-changes.mdx
index 7e45ea961e5e..37c0b7d5abcb 100644
--- a/website/src/pages/sv/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/sv/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu

> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.

-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes).

> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
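The per-block behavior described above (only the latest update per account, with deletions flagged) can be sketched in a few lines of JavaScript. This is a conceptual illustration only; the field names are assumptions, not the actual `sf.solana.type.v1` Protobuf schema:

```javascript
// Illustrative only: mimics the per-block rule described above, where the
// account stream carries at most one (latest) update per account.
// Field names (`account`, `data`, `deleted`) are assumptions, not the real schema.
function latestPerAccount(updates) {
  const latest = new Map()
  for (const update of updates) {
    // Later updates in the same block overwrite earlier ones.
    latest.set(update.account, update)
  }
  // Deleted accounts still appear, flagged with `deleted: true`.
  return [...latest.values()]
}

const block = [
  { account: 'A', data: 'v1', deleted: false },
  { account: 'A', data: 'v2', deleted: false },
  { account: 'B', data: null, deleted: true },
]
console.log(latestPerAccount(block))
// [ { account: 'A', data: 'v2', deleted: false }, { account: 'B', data: null, deleted: true } ]
```
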
diff --git a/website/src/pages/sv/substreams/developing/solana/transactions.mdx b/website/src/pages/sv/substreams/developing/solana/transactions.mdx index b6f8cbc3b345..dcd19e9de276 100644 --- a/website/src/pages/sv/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/sv/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraf 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/sv/substreams/introduction.mdx b/website/src/pages/sv/substreams/introduction.mdx index 1c263c32d747..c12627982ad6 100644 --- a/website/src/pages/sv/substreams/introduction.mdx +++ b/website/src/pages/sv/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. 
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/sv/substreams/publishing.mdx b/website/src/pages/sv/substreams/publishing.mdx index 0d0bb4856073..21989ed9b73b 100644 --- a/website/src/pages/sv/substreams/publishing.mdx +++ b/website/src/pages/sv/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package @@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data ![confirm](/img/4_confirm.png) -That's it! You have succesfully published a package in the Substreams registry. +That's it! You have successfully published a package in the Substreams registry. 
![success](/img/5_success.png) diff --git a/website/src/pages/sv/supported-networks.mdx b/website/src/pages/sv/supported-networks.mdx index 36a51d6fdd9d..01776006c980 100644 --- a/website/src/pages/sv/supported-networks.mdx +++ b/website/src/pages/sv/supported-networks.mdx @@ -1,22 +1,28 @@ --- title: Nätverk som stöds hideTableOfContents: true +hideContentHeader: true --- -import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps' -import { SupportedNetworksTable } from '@/supportedNetworks' +import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks' +import { Heading } from '@/components' +import { useI18n } from '@/i18n' -export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) +export const getStaticProps = getSupportedNetworksStaticProps + + + {useI18n().t('index.supportedNetworks.title')} + - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). ## Running Graph Node locally If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. 
Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration. -Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support. +Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support. diff --git a/website/src/pages/sv/token-api/_meta-titles.json b/website/src/pages/sv/token-api/_meta-titles.json new file mode 100644 index 000000000000..7ed31e0af95d --- /dev/null +++ b/website/src/pages/sv/token-api/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "mcp": "MCP", + "evm": "EVM Endpoints", + "monitoring": "Monitoring Endpoints", + "faq": "FAQ" +} diff --git a/website/src/pages/sv/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/sv/token-api/evm/get-balances-evm-by-address.mdx new file mode 100644 index 000000000000..3386fd078059 --- /dev/null +++ b/website/src/pages/sv/token-api/evm/get-balances-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Balances by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getBalancesEvmByAddress +--- + +The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain. 
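As a rough sketch, a request to this endpoint can be assembled as below. It follows the authentication pattern from the Token API quick start; the address is an example value, and the `fetch` call is omitted so the snippet stays offline:

```javascript
// Sketch of a request to the EVM Balances endpoint. The optional `network_id`
// query parameter is described in the Token API FAQ; the JWT comes from
// The Graph Market. This only builds the request, it does not send it.
function buildBalancesRequest(address, { networkId, token } = {}) {
  const url = new URL(`https://token-api.thegraph.com/balances/evm/${address}`)
  if (networkId) url.searchParams.set('network_id', networkId)
  return {
    url: url.toString(),
    options: {
      method: 'GET',
      headers: { Accept: 'application/json', Authorization: `Bearer ${token}` },
    },
  }
}

const { url } = buildBalancesRequest('0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', {
  networkId: 'mainnet',
  token: 'YOUR_JWT',
})
console.log(url)
// https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208?network_id=mainnet
```
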
diff --git a/website/src/pages/sv/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/sv/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/sv/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/sv/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/sv/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/sv/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV prices by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/sv/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/sv/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/sv/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+  type: openApi
+  apiId: tokenApi
+  operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
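Since token amounts are returned as strings (see the Token API FAQ), the `decimals` field from this endpoint is what turns a raw amount into a human-readable value. A minimal sketch, assuming the raw amount is a base-10 string:

```javascript
// Convert a raw string amount (as returned by the API, to avoid precision
// loss) into a human-readable decimal string, using the token's `decimals`.
// Sketch only; response field names may differ from these argument names.
function formatAmount(rawAmount, decimals) {
  const value = BigInt(rawAmount)
  const base = 10n ** BigInt(decimals)
  const whole = value / base
  // Zero-pad the fractional part, then strip trailing zeros.
  const fraction = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '')
  return fraction ? `${whole}.${fraction}` : whole.toString()
}

console.log(formatAmount('1234500000000000000', 18)) // 1.2345
console.log(formatAmount('5000000', 6)) // 5
```
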
diff --git a/website/src/pages/sv/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/sv/token-api/evm/get-transfers-evm-by-address.mdx new file mode 100644 index 000000000000..604c185588ea --- /dev/null +++ b/website/src/pages/sv/token-api/evm/get-transfers-evm-by-address.mdx @@ -0,0 +1,9 @@ +--- +title: Token Transfers by Wallet Address +template: + type: openApi + apiId: tokenApi + operationId: getTransfersEvmByAddress +--- + +The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time. diff --git a/website/src/pages/sv/token-api/faq.mdx b/website/src/pages/sv/token-api/faq.mdx new file mode 100644 index 000000000000..8a5f3bbd358a --- /dev/null +++ b/website/src/pages/sv/token-api/faq.mdx @@ -0,0 +1,109 @@ +--- +title: Token API FAQ +--- + +Get fast answers to easily integrate and scale with The Graph's high-performance Token API. + +## Allmän + +### What blockchains does the Token API support? + +Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One. + +### Why isn't my API key from The Graph Market working? + +Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key. + +### How current is the data provided by the API relative to the blockchain? + +The API provides data up to the latest finalized block. + +### How do I authenticate requests to the Token API? + +Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported. + +### Does the Token API provide a client SDK? 
+ +While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional blockchains in the future? + +Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to offer data closer to the chain head? + +Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol). + +### Are there plans to support additional use cases such as NFTs? + +The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol). + +## MCP / LLM / AI Topics + +### Is there a time limit for LLM queries? + +Yes. The maximum time limit for LLM queries is 10 seconds. + +### Is there a known list of LLMs that work with the API? + +Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server. + +Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter). + +### Where can I find the MCP client? + +You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client). + +## Advanced Topics + +### I'm getting 403/401 errors. What's wrong? + +Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market. 
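The common causes above can be caught with a quick structural check before sending a request. This is a heuristic sketch only; it inspects the header's shape, not the token's validity:

```javascript
// Flag the usual 401/403 mistakes: missing header, missing "Bearer " prefix,
// or pasting the API key instead of the JWT generated from it.
function checkAuthHeader(header) {
  const problems = []
  if (!header) {
    problems.push('Authorization header is missing entirely')
  } else {
    if (!header.startsWith('Bearer ')) problems.push('missing the "Bearer " prefix')
    const token = header.replace(/^Bearer /, '')
    // JWTs are three base64url segments separated by dots; a bare API key is not.
    if (token.split('.').length !== 3) problems.push('token does not look like a JWT (API key pasted instead?)')
  }
  return problems
}

console.log(checkAuthHeader('Bearer my-api-key'))
// [ 'token does not look like a JWT (API key pasted instead?)' ]
console.log(checkAuthHeader('Bearer aaa.bbb.ccc')) // []
```
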
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+ +All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`). + +### Why are token amounts returned as strings? + +Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values. + +### What format should addresses be in? + +The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address. + +### Do I need special headers besides authentication? + +While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`). + +### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this? + +For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`. + +### Is the Token API part of The Graph's GraphQL service? + +No, the Token API is a separate RESTful service. 
Unlike traditional subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints. + +### Do I need to use MCP or tools like Claude, Cline, or Cursor? + +No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required. diff --git a/website/src/pages/sv/token-api/mcp/claude.mdx b/website/src/pages/sv/token-api/mcp/claude.mdx new file mode 100644 index 000000000000..bc3dbe28ecb3 --- /dev/null +++ b/website/src/pages/sv/token-api/mcp/claude.mdx @@ -0,0 +1,58 @@ +--- +title: Using Claude Desktop to Access the Token API via MCP +sidebarTitle: Claude Desktop +--- + +## Prerequisites + +- [Claude Desktop](https://claude.ai/download) installed. +- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/). +- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path. +- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version. + +![Screenshot of Claude Desktop's settings panel showing the MCP server configuration option.](/img/claude-preview-token-api.png) + +## Konfiguration + +Create or edit your `claude_desktop_config.json` file. 
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `.config/Claude/claude_desktop_config.json`
+
+```json label="claude_desktop_config.json"
+{
+  "mcpServers": {
+    "token-api": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+![Error dialog in Claude Desktop showing 'ENOENT' system error, indicating the npx/bunx command wasn't found in the system path.](/img/claude-ENOENT.png)
+
+Try to use the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+![Connection error notification in Claude Desktop displaying 'Server disconnected' message.](/img/claude-server-disconnect.png)
+
+Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
+
+> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details.
diff --git a/website/src/pages/sv/token-api/mcp/cline.mdx b/website/src/pages/sv/token-api/mcp/cline.mdx
new file mode 100644
index 000000000000..15c9980df7a6
--- /dev/null
+++ b/website/src/pages/sv/token-api/mcp/cline.mdx
@@ -0,0 +1,52 @@
+---
+title: Using Cline to Access the Token API via MCP
+sidebarTitle: Cline
+---
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Cline's MCP server configuration interface displaying the JSON settings file with mcp-pinax server details visible.](/img/cline-preview-token-api.png)
+
+## Konfiguration
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+```json label="cline_mcp_settings.json"
+{
+  "mcpServers": {
+    "mcp-pinax": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+![Cline error dialog showing 'ENOENT' system alert.](/img/cline-error.png)
+
+Try to use the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+![Cline connection error notification displaying server disconnection warning.](/img/cline-missing-variables.png)
+
+Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/sv/token-api/mcp/cursor.mdx b/website/src/pages/sv/token-api/mcp/cursor.mdx
new file mode 100644
index 000000000000..1364cca2cca5
--- /dev/null
+++ b/website/src/pages/sv/token-api/mcp/cursor.mdx
@@ -0,0 +1,50 @@
+---
+title: Using Cursor to Access the Token API via MCP
+sidebarTitle: Cursor
+---
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+![Screenshot of Cursor's MCP configuration panel.](/img/cursor-preview-token-api.png)
+
+## Konfiguration
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+```json label="mcp.json"
+{
+  "mcpServers": {
+    "mcp-pinax": {
+      "command": "npx",
+      "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+      "env": {
+        "ACCESS_TOKEN": ""
+      }
+    }
+  }
+}
+```
+
+## Troubleshooting
+
+![Cursor IDE error notification that reads, "Failed to create client"](/img/cursor-error.png)
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+Try to use the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+Double-check your API key; otherwise, check in your browser that `https://token-api.thegraph.com/sse` is reachable.
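The misconfigurations covered in these troubleshooting sections can often be caught with a small structural check of the config file before restarting the client. A sketch, mirroring the key names used in this guide:

```javascript
// Catch the usual MCP config mistakes before restarting the client:
// missing "mcpServers" entries, no "command", or an empty ACCESS_TOKEN.
// Heuristic only; it checks structure, not whether the token is valid.
function validateMcpConfig(config) {
  const errors = []
  const servers = config.mcpServers
  if (!servers || Object.keys(servers).length === 0) {
    errors.push('no entries under "mcpServers"')
    return errors
  }
  for (const [name, server] of Object.entries(servers)) {
    if (!server.command) errors.push(`${name}: missing "command" (use a full path if you hit ENOENT)`)
    if (!server.env || !server.env.ACCESS_TOKEN) errors.push(`${name}: missing env.ACCESS_TOKEN`)
  }
  return errors
}

const config = {
  mcpServers: {
    'mcp-pinax': {
      command: 'npx',
      args: ['@pinax/mcp', '--sse-url', 'https://token-api.thegraph.com/sse'],
      env: { ACCESS_TOKEN: '' },
    },
  },
}
console.log(validateMcpConfig(config))
// [ 'mcp-pinax: missing env.ACCESS_TOKEN' ]
```
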
diff --git a/website/src/pages/sv/token-api/monitoring/get-health.mdx b/website/src/pages/sv/token-api/monitoring/get-health.mdx new file mode 100644 index 000000000000..57a827b3343b --- /dev/null +++ b/website/src/pages/sv/token-api/monitoring/get-health.mdx @@ -0,0 +1,7 @@ +--- +title: Get health status of the API +template: + type: openApi + apiId: tokenApi + operationId: getHealth +--- diff --git a/website/src/pages/sv/token-api/monitoring/get-networks.mdx b/website/src/pages/sv/token-api/monitoring/get-networks.mdx new file mode 100644 index 000000000000..0ea3c485ddb9 --- /dev/null +++ b/website/src/pages/sv/token-api/monitoring/get-networks.mdx @@ -0,0 +1,7 @@ +--- +title: Get supported networks of the API +template: + type: openApi + apiId: tokenApi + operationId: getNetworks +--- diff --git a/website/src/pages/sv/token-api/monitoring/get-version.mdx b/website/src/pages/sv/token-api/monitoring/get-version.mdx new file mode 100644 index 000000000000..0be6b7e92d04 --- /dev/null +++ b/website/src/pages/sv/token-api/monitoring/get-version.mdx @@ -0,0 +1,7 @@ +--- +title: Get the version of the API +template: + type: openApi + apiId: tokenApi + operationId: getVersion +--- diff --git a/website/src/pages/sv/token-api/quick-start.mdx b/website/src/pages/sv/token-api/quick-start.mdx new file mode 100644 index 000000000000..db512ba0d7f8 --- /dev/null +++ b/website/src/pages/sv/token-api/quick-start.mdx @@ -0,0 +1,79 @@ +--- +title: Token API Quick Start +sidebarTitle: Snabbstart +--- + +![The Graph Token API Quick Start banner](/img/token-api-quickstart-banner.jpg) + +> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol). + +The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application. 
+ +The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude. + +## Prerequisites + +Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu. + +## Authentication + +All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `. + +```json +{ + "headers": { + "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA" + } +} +``` + +## Using JavaScript + +Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example: + +```js label="index.js" +const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208' +const options = { + method: 'GET', + headers: { + Accept: 'application/json', + Authorization: 'Bearer ', + }, +} + +fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options) + .then((response) => response.json()) + .then((response) => console.log(response)) + .catch((err) => console.error(err)) +``` + +Make sure to replace `` with the JWT Token generated from your API key. + +## Using cURL (Command Line) + +To make an API request using **cURL**, open your command line and run the following command. 
+
+```bash
+curl --request GET \
+  --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \
+  --header 'Accept: application/json' \
+  --header 'Authorization: Bearer '
+```
+
+Make sure to replace `` with the JWT Token generated from your API key.
+
+> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+
+## Troubleshooting
+
+If the API call fails, try printing out the full response object for additional error details. For example:
+
+```js label="index.js"
+fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options)
+  .then((response) => {
+    console.log('Status Code:', response.status)
+    return response.json()
+  })
+  .then((data) => console.log(data))
+  .catch((err) => console.error('Error:', err))
+```
diff --git a/website/src/pages/sw/about.mdx b/website/src/pages/sw/about.mdx
new file mode 100644
index 000000000000..833b097673d2
--- /dev/null
+++ b/website/src/pages/sw/about.mdx
@@ -0,0 +1,67 @@
+---
+title: About The Graph
+---
+
+## What is The Graph?
+
+The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier.
+
+## Understanding the Basics
+
+Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFT initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain.
+
+### Challenges Without The Graph
+
+In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply.
+ +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/resources/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. + +## The Graph Provides a Solution + +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. + +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. 
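To make "queried with a standard GraphQL API" concrete, here is a sketch of a client-side query against a deployed Subgraph. The gateway URL placeholders and the `tokens` entity below are illustrative assumptions; substitute the query URL and schema of a real Subgraph:

```js
// Sketch: querying a Subgraph's GraphQL endpoint with fetch().
// The URL placeholders and the `tokens` entity are hypothetical;
// use the query URL and schema of a real deployed Subgraph.
const endpoint = 'https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>'

const query = `{
  tokens(first: 5) {
    id
    owner
  }
}`

if (endpoint.includes('<')) {
  // Placeholders not yet replaced; skip the network call.
  console.log('Replace <api-key> and <subgraph-id> in `endpoint` before running.')
} else {
  fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
    .then((response) => response.json())
    .then((result) => console.log(result.data))
    .catch((err) => console.error(err))
}
```

GraphQL queries are sent as a POST request with a JSON body containing the query string, which is why no query-specific URL parameters are needed.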
+
+### How The Graph Functions
+
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.
+
+#### Specifics
+
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
+
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+
+- When creating a Subgraph, you need to write a Subgraph manifest.
+
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
+
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.
+
+![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png)
+
+The flow follows these steps:
+
+1. A dapp adds data to Ethereum through a transaction on a smart contract.
+2. The smart contract emits one or more events while processing the transaction.
+3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
+5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/).
The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. + +## Next Steps + +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. + +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/sw/archived/_meta-titles.json b/website/src/pages/sw/archived/_meta-titles.json new file mode 100644 index 000000000000..9501304a4305 --- /dev/null +++ b/website/src/pages/sw/archived/_meta-titles.json @@ -0,0 +1,3 @@ +{ + "arbitrum": "Scaling with Arbitrum" +} diff --git a/website/src/pages/sw/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/sw/archived/arbitrum/arbitrum-faq.mdx new file mode 100644 index 000000000000..d121f5a2d0f3 --- /dev/null +++ b/website/src/pages/sw/archived/arbitrum/arbitrum-faq.mdx @@ -0,0 +1,80 @@ +--- +title: Arbitrum FAQ +--- + +Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. + +## Why did The Graph implement an L2 Solution? + +By scaling The Graph on L2, network participants can now benefit from: + +- Upwards of 26x savings on gas fees + +- Faster transaction speed + +- Security inherited from Ethereum + +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. 
Curators can add or remove signal on a larger number of Subgraphs, actions previously considered cost-prohibitive to perform frequently due to gas.
+
+The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion.
+
+## What do I need to do to use The Graph on L2?
+
+The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One.
+
+Consequently, to pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this:
+
+- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges:
+
+  - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
+  - [TransferTo](https://transferto.xyz/swap)
+
+- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap.
+
+- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange.
+
+Once you have GRT on Arbitrum, you can add it to your billing balance.
+
+To take advantage of using The Graph on L2, use this dropdown switcher to toggle between chains.
+
+![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png)
+
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
+
+Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support.
+
+All indexing rewards are now entirely on Arbitrum.
+
+## Were there any risks associated with scaling the network to L2?
+ +All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). + +Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). + +## Are existing Subgraphs on Ethereum working? + +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. + +## Does GRT have a new smart contract deployed on Arbitrum? + +Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. + +## Billing on Arbitrum FAQs + +## What do I need to do about the GRT in my billing balance? + +Nothing! Your GRT has been securely migrated to Arbitrum and is being used to pay for queries as you read this. + +## How do I know my funds have migrated securely to Arbitrum? + +All GRT billing balances have already been successfully migrated to Arbitrum. You can view the billing contract on Arbitrum [here](https://arbiscan.io/address/0x1B07D3344188908Fb6DEcEac381f3eE63C48477a). + +## How do I know the Arbitrum bridge is secure? + +The bridge has been [heavily audited](https://code4rena.com/contests/2022-10-the-graph-l2-bridge-contest) to ensure safety and security for all users. + +## What do I need to do if I'm adding fresh GRT from my Ethereum mainnet wallet? + +Adding GRT to your Arbitrum billing balance can be done with a one-click experience in [Subgraph Studio](https://thegraph.com/studio/). 
You'll be able to easily bridge your GRT to Arbitrum and fill your API keys in one transaction. + +Visit the [Billing page](/subgraphs/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. diff --git a/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-faq.mdx new file mode 100644 index 000000000000..7edde3d0cbcd --- /dev/null +++ b/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -0,0 +1,414 @@ +--- +title: L2 Transfer Tools FAQ +--- + +## General + +### What are L2 Transfer Tools? + +The Graph has made it 26x cheaper for contributors to participate in the network by deploying the protocol to Arbitrum One. The L2 Transfer Tools were created by core devs to make it easy to move to L2. + +For each network participant, a set of L2 Transfer Tools are available to make the experience seamless when moving to L2, avoiding thawing periods or having to manually withdraw and bridge GRT. + +These tools will require you to follow a specific set of steps depending on what your role is within The Graph and what you are transferring to L2. + +### Can I use the same wallet I use on Ethereum mainnet? + +If you are using an [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) wallet you can use the same address. If your Ethereum mainnet wallet is a contract (e.g. a multisig) then you must specify an [Arbitrum wallet address](/archived/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) where your transfer will be sent. Please check the address carefully as any transfers to an incorrect address can result in permanent loss. If you'd like to use a multisig on L2, make sure you deploy a multisig contract on Arbitrum One. + +Wallets on EVM blockchains like Ethereum and Arbitrum are a pair of keys (public and private), that you create without any need to interact with the blockchain. 
So any wallet that was created for Ethereum will also work on Arbitrum without having to do anything else. + +The exception is with smart contract wallets like multisigs: these are smart contracts that are deployed separately on each chain, and get their address when they are deployed. If a multisig was deployed to Ethereum, it won't exist with the same address on Arbitrum. A new multisig must be created first on Arbitrum, and may get a different address. + +### What happens if I don’t finish my transfer in 7 days? + +The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). + +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). + +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. 
The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
+
+### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly?
+
+If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One.
+
+If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it. Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire.
+
+## Subgraph Transfer
+
+### How do I transfer my Subgraph?
+
+
+
+To transfer your Subgraph, you will need to complete the following steps:
+
+1. Initiate the transfer on Ethereum mainnet
+
+2. Wait 20 minutes for confirmation
+
+3. Confirm Subgraph transfer on Arbitrum\*
+
+4. Finish publishing Subgraph on Arbitrum
+
+5. Update Query URL (recommended)
+
+\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum.
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+
+### Where should I initiate my transfer from?
+
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer.
+
+### How long do I need to wait until my Subgraph is transferred?
+
+The transfer takes approximately 20 minutes. The Arbitrum bridge is working in the background to complete the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again.
+
+### Will my Subgraph still be discoverable after I transfer it to L2?
+
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
+
+### Does my Subgraph need to be published to transfer it?
+
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.
+
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?
+
+After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated.
+
+### After I transfer, do I also need to re-publish on Arbitrum?
+
+After the 20-minute transfer window, you will need to confirm the transfer with a transaction in the UI to finish the transfer, but the transfer tool will guide you through this. Your L1 endpoint will continue to be supported during the transfer window and a grace period after. It is encouraged that you update your endpoint when convenient for you.
+
+### Will my endpoint experience downtime while re-publishing?
+
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
+
+### Is publishing and versioning the same on L2 as Ethereum mainnet?
+
+Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available, which points to the latest updated version of the Subgraph.
+
+### Will my Subgraph's curation move with my Subgraph?
+
+If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph.
+
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph.
+
+### Can I move my Subgraph back to Ethereum mainnet after I transfer?
+
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated.
If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One.
+
+### Why do I need bridged ETH to complete my transfer?
+
+Gas fees on Arbitrum One are paid using bridged ETH (i.e. ETH that has been bridged to Arbitrum One). However, the gas fees are significantly lower when compared to Ethereum mainnet.
+
+## Delegation
+
+### How do I transfer my delegation?
+
+
+
+To transfer your delegation, you will need to complete the following steps:
+
+1. Initiate delegation transfer on Ethereum mainnet
+2. Wait 20 minutes for confirmation
+3. Confirm delegation transfer on Arbitrum
+
+**Note:** You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+
+### What happens to my rewards if I initiate a transfer with an open allocation on Ethereum mainnet?
+
+If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer.
+
+### What happens if the Indexer I currently delegate to isn't on Arbitrum One?
+ +The L2 transfer tool will only be enabled if the Indexer you have delegated to has transferred their own stake to Arbitrum. + +### Do Delegators have the option to delegate to another Indexer? + +If you wish to delegate to another Indexer, you can transfer to the same Indexer on Arbitrum, then undelegate and wait for the thawing period. After this, you can select another active Indexer to delegate to. + +### What if I can't find the Indexer I'm delegating to on L2? + +The L2 transfer tool will automatically detect the Indexer you previously delegated to. + +### Will I be able to mix and match or 'spread' my delegation across new or several Indexers instead of the prior Indexer? + +The L2 transfer tool will always move your delegation to the same Indexer you delegated to previously. Once you have moved to L2, you can undelegate, wait for the thawing period, and decide if you'd like to split up your delegation. + +### Am I subject to the cooldown period or can I withdraw immediately after using the L2 delegation transfer tool? + +The transfer tool allows you to immediately move to L2. If you would like to undelegate you will have to wait for the thawing period. However, if an Indexer has transferred all of their stake to L2, you can withdraw on Ethereum mainnet immediately. + +### Can my rewards be negatively impacted if I do not transfer my delegation? + +It is anticipated that all network participation will move to Arbitrum One in the future. + +### How long does it take to complete the transfer of my delegation to L2? + +A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. 
If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). + +### Can I transfer my delegation if I'm using a GRT vesting contract/token lock wallet? + +Yes! The process is a bit different because vesting contracts can't forward the ETH needed to pay for the L2 gas, so you need to deposit it beforehand. If your vesting contract is not fully vested, you will also have to first initialize a counterpart vesting contract on L2 and will only be able to transfer the delegation to this L2 vesting contract. The UI on Explorer can guide you through this process when you've connected to Explorer using the vesting lock wallet. + +### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet? + +No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers. + +If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge. + +### Is there any delegation tax? + +No. Received tokens on L2 are delegated to the specified Indexer on behalf of the specified Delegator without charging a delegation tax. + +### Will my unrealized rewards be transferred when I transfer my delegation? + +​Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). 
If you've been delegating for a while, this is likely only a small fraction of rewards. + +At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2. ​ + +### Is moving delegations to L2 mandatory? Is there a deadline? + +​Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ + +### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1? + +​Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators. Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2. + +Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1. ​ + +### I don't see a button to transfer my delegation. Why is that? + +​Your Indexer has probably not used the L2 transfer tools to transfer stake yet. + +If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address. ​ + +### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that? + +​It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake. The L1 smart contracts will therefore not know about the Indexer's L2 address. 
If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address. ​ + +### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet? + +​No. If your delegation is thawing, you have to wait the 28 days and withdraw it. + +The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2. + +## Curation Signal + +### How do I transfer my curation? + +To transfer your curation, you will need to complete the following steps: + +1. Initiate signal transfer on Ethereum mainnet + +2. Specify an L2 Curator address\* + +3. Wait 20 minutes for confirmation + +\*If necessary - i.e. you are using a contract address. + +### How will I know if the Subgraph I curated has moved to L2? + +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. + +### What if I do not wish to move my curation to L2? + +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. + +### How do I know my curation successfully transferred? + +Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. + +### Can I transfer my curation on more than one Subgraph at a time? + +There is no bulk transfer option at this time. + +## Indexer Stake + +### How do I transfer my stake to Arbitrum? + +> Disclaimer: If you are currently unstaking any portion of your GRT on your Indexer, you will not be able to use L2 Transfer Tools. + + + +To transfer your stake, you will need to complete the following steps: + +1. Initiate stake transfer on Ethereum mainnet + +2. 
Wait 20 minutes for confirmation + +3. Confirm stake transfer on Arbitrum + +\*Note that you must confirm the transfer within 7 days, otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). + +### Will all of my stake transfer? + +You can choose how much of your stake to transfer. If you choose to transfer all of your stake at once, you will need to close any open allocations first. + +If you plan on transferring parts of your stake over multiple transactions, you must always specify the same beneficiary address. + +Note: You must meet the minimum stake requirements on L2 the first time you use the transfer tool. Indexers must send the minimum 100k GRT (when calling this function the first time). If leaving a portion of stake on L1, it must also be over the 100k GRT minimum and be sufficient (together with your delegations) to cover your open allocations. + +### How much time do I have to confirm my stake transfer to Arbitrum? + +You must confirm your transaction to complete the stake transfer on Arbitrum. This step must be completed within 7 days or stake could be lost. + +### What if I have open allocations? + +If you are not sending all of your stake, the L2 transfer tool will validate that at least the minimum 100k GRT remains in Ethereum mainnet and your remaining stake and delegation is enough to cover any open allocations. You may need to close open allocations if your GRT balance does not cover the minimums + open allocations. + +### Using the transfer tools, is it necessary to wait 28 days to unstake on Ethereum mainnet before transferring? + +No, you can transfer your stake to L2 immediately; there's no need to unstake and wait before using the transfer tool. 
The 28-day wait only applies if you'd like to withdraw the stake back to your wallet, on Ethereum mainnet or L2. + +### How long will it take to transfer my stake? + +It will take approximately 20 minutes for the L2 transfer tool to complete transferring your stake. + +### Do I have to index on Arbitrum before I transfer my stake? + +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. + +### Can Delegators move their delegation before I move my indexing stake? + +No, in order for Delegators to transfer their delegated GRT to Arbitrum, the Indexer they are delegating to must be active on L2. + +### Can I transfer my stake if I'm using a GRT vesting contract / token lock wallet? + +Yes! The process is a bit different, because vesting contracts can't forward the ETH needed to pay for the L2 gas, so you need to deposit it beforehand. If your vesting contract is not fully vested, you will also have to first initialize a counterpart vesting contract on L2 and will only be able to transfer the stake to this L2 vesting contract. The UI on Explorer can guide you through this process when you've connected to Explorer using the vesting lock wallet. + +### I already have stake on L2. Do I still need to send 100k GRT when I use the transfer tools the first time? + +Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time. + +### Can I transfer my stake to L2 if I am in the process of unstaking GRT? + +No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being unstaked are "locked" and will prevent any transfer of stake to L2. + +## Vesting Contract Transfer + +### How do I transfer my vesting contract? 
+ +To transfer your vesting, you will need to complete the following steps: + +1. Initiate the vesting transfer on Ethereum mainnet + +2. Wait 20 minutes for confirmation + +3. Confirm vesting transfer on Arbitrum + +### How do I transfer my vesting contract if I am only partially vested? + + + +1. Deposit some ETH into the transfer tool contract (UI can help estimate a reasonable amount) + +2. Send some locked GRT through the transfer tool contract to L2 to initialize the L2 vesting lock. This will also set your L2 beneficiary address. + +3. Send your stake/delegation to L2 through the "locked" transfer tool functions in the L1 Staking contract. + +4. Withdraw any remaining ETH from the transfer tool contract + +### How do I transfer my vesting contract if I am fully vested? + + + +For those that are fully vested, the process is similar: + +1. Deposit some ETH into the transfer tool contract (UI can help estimate a reasonable amount) + +2. Set your L2 address with a call to the transfer tool contract + +3. Send your stake/delegation to L2 through the "locked" transfer tool functions in the L1 Staking contract. + +4. Withdraw any remaining ETH from the transfer tool contract + +### Can I transfer my vesting contract to Arbitrum? + +You can transfer your vesting contract's GRT balance to a vesting contract in L2. This is a prerequisite for transferring stake or delegation from your vesting contract to L2. The vesting contract must hold a nonzero amount of GRT (you can transfer a small amount like 1 GRT to it if needed). + +When you transfer GRT from your L1 vesting contract to L2, you can choose the amount to send and you can do this as many times as you like. The L2 vesting contract will be initialized the first time you transfer GRT. + +The transfers are done using a Transfer Tool that will be visible on your Explorer profile when you connect with the vesting contract account. 
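The balance-transfer mechanics described above (a nonzero first transfer initializes the L2 vesting contract; any number of follow-up transfers simply add to its balance) can be sketched as a toy state model. This is purely illustrative: the class and method names are assumptions made for the sketch, not the actual transfer tool or contract interface.

```python
# Toy model of L1 -> L2 vesting-contract GRT transfers (illustrative only;
# NOT the real transfer tool or staking contract interface).
class L2VestingTransferModel:
    def __init__(self) -> None:
        self.l2_initialized = False  # does the counterpart L2 vesting contract exist yet?
        self.l2_balance_grt = 0      # GRT received by the L2 vesting contract

    def transfer(self, amount_grt: int) -> None:
        # The Arbitrum GRT bridge used by the transfer tools rejects zero-amount transfers.
        if amount_grt <= 0:
            raise ValueError("must transfer a nonzero amount of GRT")
        # The first transfer also initializes the L2 vesting contract.
        self.l2_initialized = True
        self.l2_balance_grt += amount_grt

model = L2VestingTransferModel()
model.transfer(1)    # a small first transfer is enough to initialize the L2 contract
model.transfer(500)  # later transfers can be any size, as many times as you like
```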
+ +Please note that you will not be able to release/withdraw GRT from the L2 vesting contract until the end of your vesting timeline when your contract is fully vested. If you need to release GRT before then, you can transfer the GRT back to the L1 vesting contract using another transfer tool that is available for that purpose. + +If you haven't transferred any vesting contract balance to L2, and your vesting contract is fully vested, you should not transfer your vesting contract to L2. Instead, you can use the transfer tools to set an L2 wallet address, and directly transfer your stake or delegation to this regular wallet on L2. + +### I'm using my vesting contract to stake on mainnet. Can I transfer my stake to Arbitrum? + +Yes, but if your contract is still vesting, you can only transfer the stake so that it is owned by your L2 vesting contract. You must first initialize this L2 contract by transferring some GRT balance using the vesting contract transfer tool on Explorer. If your contract is fully vested, you can transfer your stake to any address in L2, but you must set it beforehand and deposit some ETH for the L2 transfer tool to pay for L2 gas. + +### I'm using my vesting contract to delegate on mainnet. Can I transfer my delegations to Arbitrum? + +Yes, but if your contract is still vesting, you can only transfer the delegation so that it is owned by your L2 vesting contract. You must first initialize this L2 contract by transferring some GRT balance using the vesting contract transfer tool on Explorer. If your contract is fully vested, you can transfer your delegation to any address in L2, but you must set it beforehand and deposit some ETH for the L2 transfer tool to pay for L2 gas. + +### Can I specify a different beneficiary for my vesting contract on L2? + +Yes, the first time you transfer a balance and set up your L2 vesting contract, you can specify an L2 beneficiary. 
Make sure this beneficiary is a wallet that can perform transactions on Arbitrum One, i.e. it must be an EOA or a multisig deployed to Arbitrum One. + +If your contract is fully vested, you will not set up a vesting contract on L2; instead, you will set an L2 wallet address and this will be the receiving wallet for your stake or delegation on Arbitrum. + +### My contract is fully vested. Can I transfer my stake or delegation to another address that is not an L2 vesting contract? + +Yes. If you haven't transferred any vesting contract balance to L2, and your vesting contract is fully vested, you should not transfer your vesting contract to L2. Instead, you can use the transfer tools to set an L2 wallet address, and directly transfer your stake or delegation to this regular wallet on L2. + +This allows you to transfer your stake or delegation to any L2 address. + +### My vesting contract is still vesting. How do I transfer my vesting contract balance to L2? + +These steps only apply if your contract is still vesting, or if you've used this process before when your contract was still vesting. + +To transfer your vesting contract to L2, you will send any GRT balance to L2 using the transfer tools, which will initialize your L2 vesting contract: + +1. Deposit some ETH into the transfer tool contract (this will be used to pay for L2 gas) + +2. Revoke protocol access to the vesting contract (needed for the next step) + +3. Give protocol access to the vesting contract (will allow your contract to interact with the transfer tool) + +4. Specify an L2 beneficiary address\* and initiate the balance transfer on Ethereum mainnet + +5. Wait 20 minutes for confirmation + +6. Confirm the balance transfer on L2 + +\*If necessary - i.e. you are using a contract address. + +You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. 
In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). + +### My vesting contract shows 0 GRT, so I cannot transfer it. Why is this and how do I fix it? + +To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT. + +If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange). + +### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2. What do I do? + +If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there. + +When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile. + +### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically? + +No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately. 
+ +You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract. + +### Can I move my vesting contract back to L1? + +There is no need to do so because your vesting contract is still in L1. When you use the transfer tools, you just create a new contract in L2 that is connected with your L1 vesting contract, and you can send GRT back and forth between the two. + +### Why do I need to move my vesting contract to begin with? + +You need to set up an L2 vesting contract so that this account can own your stake or delegation on L2. Otherwise, there'd be no way for you to transfer the stake/delegation to L2 without "escaping" the vesting contract. + +### What happens if I try to cash out my contract when it is only partially vested? Is this possible? + +This is not a possibility. You can move funds back to L1 and withdraw them there. + +### What if I don't want to move my vesting contract to L2? + +You can keep staking/delegating on L1. Over time, you may want to consider moving to L2 to enable rewards there as the protocol scales on Arbitrum. Note that these transfer tools are for vesting contracts that are allowed to stake and delegate in the protocol. If your contract does not allow staking or delegating, or is revocable, then there is no transfer tool available. You will still be able to withdraw your GRT from L1 when available. diff --git a/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-guide.mdx new file mode 100644 index 000000000000..4a34da9bad0e --- /dev/null +++ b/website/src/pages/sw/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -0,0 +1,165 @@ +--- +title: L2 Transfer Tools Guide +--- + +The Graph has made it easy to move to L2 on Arbitrum One. For each protocol participant, there is a set of L2 Transfer Tools to make transferring to L2 seamless for all network participants. 
These tools will require you to follow a specific set of steps depending on what you are transferring. + +Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. + +## How to transfer your Subgraph to Arbitrum (L2) + + + +## Benefits of transferring your Subgraphs + +The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. + +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. + +## Understanding what happens with signal, your L1 Subgraph and query URLs + +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. 
+ +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. + +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. + +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. + +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. + +## Choosing your L2 wallet + +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. + +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. + +If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. 
a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. + +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. + +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** + +## Preparing for the transfer: bridging some ETH + +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. + +You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved. 
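As a rough sanity check before starting, you can confirm that the wallet you will use on Arbitrum holds at least that small threshold. A minimal sketch (the 0.01 ETH figure comes from the suggestion above; the function name and wei conversion are our own illustration):

```python
WEI_PER_ETH = 10**18  # 1 ETH = 10^18 wei

def has_enough_eth_for_gas(balance_wei: int, min_eth: float = 0.01) -> bool:
    """Check whether an Arbitrum wallet balance (in wei) meets a small gas threshold."""
    return balance_wei >= int(min_eth * WEI_PER_ETH)

# 0.02 ETH comfortably clears the suggested 0.01 ETH threshold; 0.001 ETH does not
print(has_enough_eth_for_gas(2 * 10**16))  # True  (0.02 ETH)
print(has_enough_eth_for_gas(10**15))      # False (0.001 ETH)
```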
+ +## Finding the Subgraph Transfer Tool + +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: + +![transfer tool](/img/L2-transfer-tool1.png) + +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: + +![Transferring to L2](/img/transferToL2.png) + +Clicking on the Transfer to L2 button will open the transfer tool where you can start the transfer process. + +## Step 1: Starting the transfer + +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). + +Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). + +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). + +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. 
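The 7-day window above is a hard deadline measured from when the transfer is started on mainnet. A small sketch of that bookkeeping (the 7-day window is the only fact taken from the text; the function and dates are illustrative):

```python
from datetime import datetime, timedelta

RETRY_WINDOW = timedelta(days=7)  # retryable tickets expire 7 days after the transfer starts

def still_redeemable(started_at: datetime, now: datetime) -> bool:
    """Can a pending L1 -> L2 retryable ticket still be executed?"""
    return now <= started_at + RETRY_WINDOW

start = datetime(2024, 1, 1)
print(still_redeemable(start, datetime(2024, 1, 5)))  # True: day 4, a retry is still possible
print(still_redeemable(start, datetime(2024, 1, 9)))  # False: day 8, the ticket has expired
```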
+ +![Start the transfer to L2](/img/startTransferL2.png) + +## Step 2: Waiting for the Subgraph to get to L2 + +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). + +Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. + +![Wait screen](/img/screenshotOfWaitScreenL2.png) + +## Step 3: Confirming the transfer + +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. + +If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. + +![Confirm the transfer to L2](/img/confirmTransferToL2.png) + +## Step 4: Finishing the transfer on L2 + +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." + +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) + +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) + +This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. + +## Step 5: Updating the query URL + +Your Subgraph has been successfully transferred to Arbitrum! 
To query the Subgraph, the new URL will be: + +`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` + +Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. + +## How to transfer your curation to Arbitrum (L2) + +## Understanding what happens to curation on Subgraph transfers to L2 + +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. + +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. + +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. + +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. 
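The proportional claim described above can be written down directly: each Curator's share of the burned-signal GRT is their share count divided by the total shares. A minimal sketch (the function name and numbers are illustrative, not protocol values):

```python
def curator_claim_grt(curator_shares: int, total_shares: int, total_grt: float) -> float:
    """GRT a Curator can withdraw (or transfer to L2), proportional to their share count."""
    return total_grt * curator_shares / total_shares

# e.g. holding 250 of 1,000 shares of a Subgraph whose burned signal yielded 10,000 GRT
print(curator_claim_grt(250, 1_000, 10_000))  # 2500.0
```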
+ +## Choosing your L2 wallet + +If you decide to transfer your curated GRT to L2, you can choose a different wallet that will own the curation signal on L2. + +If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same Curator address as in L1. + +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 receiving wallet address. + +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum, as otherwise the curation will be lost and cannot be recovered.** + +## Sending curation to L2: Step 1 + +Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended to have some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. + +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. + +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. 
+ +![Transfer signal](/img/transferSignalL2TransferTools.png) + +After opening the Transfer Tool, you may be prompted to add some ETH to your wallet if you don't have any. Then you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Signal will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer. + +If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. + +## Sending curation to L2: Step 2 + +Starting the transfer: + +![Send signal to L2](/img/sendingCurationToL2Step2First.png) + +After you start the transfer, the message that sends your L1 curation to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). + +Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. + +![Sending curation signal to L2](/img/sendingCurationToL2Step2Second.png) + +## Sending curation to L2: Step 3 + +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the curation on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your curation to L2 will be pending and require a retry within 7 days. 
+ +If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. + +![Send signal to L2](/img/L2TransferToolsFinalCurationImage.png) + +## Withdrawing your curation on L1 + +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/sw/archived/sunrise.mdx b/website/src/pages/sw/archived/sunrise.mdx new file mode 100644 index 000000000000..71262f22e7d8 --- /dev/null +++ b/website/src/pages/sw/archived/sunrise.mdx @@ -0,0 +1,80 @@ +--- +title: Post-Sunrise + Upgrading to The Graph Network FAQ +sidebarTitle: Post-Sunrise Upgrade FAQ +--- + +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. + +## What was the Sunrise of Decentralized Data? + +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. + +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. + +### What happened to the hosted service? + +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. + +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. + +### Was Subgraph Studio impacted by this upgrade? + +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. 
+ +### Why were Subgraphs published to Arbitrum? Did it start indexing a different network? + +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/). + +## About the Upgrade Indexer + +> The upgrade Indexer is currently active. + +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. + +### What does the upgrade Indexer do? + +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. + +### Why is Edge & Node running the upgrade Indexer? + +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. + +### What does the upgrade Indexer mean for existing Indexers? + +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. 
As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. + +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. + +### What does this mean for Delegators? + +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. + +### Did the upgrade Indexer compete with existing Indexers for rewards? + +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. + +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. + +### How does this affect Subgraph developers? + +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. + +### How does the upgrade Indexer benefit data consumers? + +The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. + +### How does the upgrade Indexer price queries? + +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. + +### When will the upgrade Indexer stop supporting a Subgraph? + +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. 
+ +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. + +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/sw/contracts.json b/website/src/pages/sw/contracts.json new file mode 100644 index 000000000000..134799f3dd0f --- /dev/null +++ b/website/src/pages/sw/contracts.json @@ -0,0 +1,4 @@ +{ + "contract": "Contract", + "address": "Address" +} diff --git a/website/src/pages/sw/contracts.mdx b/website/src/pages/sw/contracts.mdx new file mode 100644 index 000000000000..3938844149c1 --- /dev/null +++ b/website/src/pages/sw/contracts.mdx @@ -0,0 +1,29 @@ +--- +title: Protocol Contracts +--- + +import { ProtocolContractsTable } from '@/contracts' + +Below are the deployed contracts which power The Graph Network. Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more. + +## Arbitrum + +This is the principal deployment of The Graph Network. + + + +## Mainnet + +This was the original deployment of The Graph Network. [Learn more](/archived/arbitrum/arbitrum-faq/) about The Graph's scaling with Arbitrum. + + + +## Arbitrum Sepolia + +This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets. 
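+ +As context for the three-Indexer threshold above, here is a sketch of how one might check how many Indexers are actively allocated to a given Subgraph deployment by querying The Graph's network subgraph. This is an illustration only: the `"Qm..."` deployment ID is a placeholder, and the field names are drawn from the network subgraph schema and may differ in detail.

```graphql
# Sketch: list active allocations (one per Indexer) on a Subgraph deployment.
# "Qm..." is a placeholder deployment ID; replace it with a real one.
{
  subgraphDeployment(id: "Qm...") {
    indexerAllocations(where: { status: Active }) {
      indexer {
        id
      }
    }
  }
}
```

If the result contains several distinct Indexers besides the upgrade Indexer, the Subgraph no longer depends on it.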
+ + + +## Sepolia + + diff --git a/website/src/pages/sw/docsearch.json b/website/src/pages/sw/docsearch.json new file mode 100644 index 000000000000..8cfff967936d --- /dev/null +++ b/website/src/pages/sw/docsearch.json @@ -0,0 +1,42 @@ +{ + "button": { + "buttonText": "Search", + "buttonAriaLabel": "Search" + }, + "modal": { + "searchBox": { + "resetButtonTitle": "Clear the query", + "resetButtonAriaLabel": "Clear the query", + "cancelButtonText": "Cancel", + "cancelButtonAriaLabel": "Cancel" + }, + "startScreen": { + "recentSearchesTitle": "Recent", + "noRecentSearchesText": "No recent searches", + "saveRecentSearchButtonTitle": "Save this search", + "removeRecentSearchButtonTitle": "Remove this search from history", + "favoriteSearchesTitle": "Favorite", + "removeFavoriteSearchButtonTitle": "Remove this search from favorites" + }, + "errorScreen": { + "titleText": "Unable to fetch results", + "helpText": "You might want to check your network connection." + }, + "footer": { + "selectText": "to select", + "selectKeyAriaLabel": "Enter key", + "navigateText": "to navigate", + "navigateUpKeyAriaLabel": "Arrow up", + "navigateDownKeyAriaLabel": "Arrow down", + "closeText": "to close", + "closeKeyAriaLabel": "Escape key", + "searchByText": "Search by" + }, + "noResultsScreen": { + "noResultsText": "No results for", + "suggestedQueryText": "Try searching for", + "reportMissingResultsText": "Believe this query should return results?", + "reportMissingResultsLinkText": "Let us know." 
+ } + } +} diff --git a/website/src/pages/sw/global.json b/website/src/pages/sw/global.json new file mode 100644 index 000000000000..f0bd80d9715b --- /dev/null +++ b/website/src/pages/sw/global.json @@ -0,0 +1,35 @@ +{ + "navigation": { + "title": "Main navigation", + "show": "Show navigation", + "hide": "Hide navigation", + "subgraphs": "Subgraphs", + "substreams": "Substreams", + "sps": "Substreams-Powered Subgraphs", + "indexing": "Indexing", + "resources": "Resources", + "archived": "Archived" + }, + "page": { + "lastUpdated": "Last updated", + "readingTime": { + "title": "Reading time", + "minutes": "minutes" + }, + "previous": "Previous page", + "next": "Next page", + "edit": "Edit on GitHub", + "onThisPage": "On this page", + "tableOfContents": "Table of contents", + "linkToThisSection": "Link to this section" + }, + "content": { + "note": "Note", + "video": "Video" + }, + "notFound": { + "title": "Oops! This page was lost in space...", + "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.", + "back": "Go Home" + } +} diff --git a/website/src/pages/sw/index.json b/website/src/pages/sw/index.json new file mode 100644 index 000000000000..787097b1fbc4 --- /dev/null +++ b/website/src/pages/sw/index.json @@ -0,0 +1,99 @@ +{ + "title": "Home", + "hero": { + "title": "The Graph Docs", + "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.", + "cta1": "How The Graph works", + "cta2": "Build your first subgraph" + }, + "products": { + "title": "The Graph’s Products", + "description": "Choose a solution that fits your needs—interact with blockchain data your way.", + "subgraphs": { + "title": "Subgraphs", + "description": "Extract, process, and query blockchain data with open APIs.", + "cta": "Develop a subgraph" + }, + "substreams": { + "title": "Substreams", + "description": "Fetch and consume blockchain data with parallel execution.", + "cta": "Develop 
with Substreams" + }, + "sps": { + "title": "Substreams-Powered Subgraphs", + "description": "Boost your subgraph’s efficiency and scalability by using Substreams.", + "cta": "Set up a Substreams-powered subgraph" + }, + "graphNode": { + "title": "Graph Node", + "description": "Index blockchain data and serve it via GraphQL queries.", + "cta": "Set up a local Graph Node" + }, + "firehose": { + "title": "Firehose", + "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.", + "cta": "Get started with Firehose" + } + }, + "supportedNetworks": { + "title": "Supported Networks", + "description": { + "base": "The Graph supports {0}. To add a new network, {1}", + "networks": "networks", + "completeThisForm": "complete this form" + } + }, + "guides": { + "title": "Guides", + "description": "", + "explorer": { + "title": "Find Data in Graph Explorer", + "description": "Leverage hundreds of public subgraphs for existing blockchain data." + }, + "publishASubgraph": { + "title": "Publish a Subgraph", + "description": "Add your subgraph to the decentralized network." + }, + "publishSubstreams": { + "title": "Publish Substreams", + "description": "Launch your Substreams package to the Substreams Registry." + }, + "queryingBestPractices": { + "title": "Querying Best Practices", + "description": "Optimize your subgraph queries for faster, better results." + }, + "timeseries": { + "title": "Optimized Timeseries & Aggregations", + "description": "Streamline your subgraph for efficiency." + }, + "apiKeyManagement": { + "title": "API Key Management", + "description": "Easily create, manage, and secure API keys for your subgraphs." + }, + "transferToTheGraph": { + "title": "Transfer to The Graph", + "description": "Seamlessly upgrade your subgraph from any platform." 
+ } + }, + "videos": { + "title": "Video Tutorials", + "watchOnYouTube": "Watch on YouTube", + "theGraphExplained": { + "title": "The Graph Explained In 1 Minute", + "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video." + }, + "whatIsDelegating": { + "title": "What is Delegating?", + "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating." + }, + "howToIndexSolana": { + "title": "How to Index Solana with a Substreams-powered Subgraph", + "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph." + } + }, + "time": { + "reading": "Reading time", + "duration": "Duration", + "minutes": "min" + } +} diff --git a/website/src/pages/sw/indexing/_meta-titles.json b/website/src/pages/sw/indexing/_meta-titles.json new file mode 100644 index 000000000000..42f4de188fd4 --- /dev/null +++ b/website/src/pages/sw/indexing/_meta-titles.json @@ -0,0 +1,3 @@ +{ + "tooling": "Indexer Tooling" +} diff --git a/website/src/pages/sw/indexing/chain-integration-overview.mdx b/website/src/pages/sw/indexing/chain-integration-overview.mdx new file mode 100644 index 000000000000..33619b03c483 --- /dev/null +++ b/website/src/pages/sw/indexing/chain-integration-overview.mdx @@ -0,0 +1,49 @@ +--- +title: Chain Integration Process Overview +--- + +A transparent and governance-based integration process was designed for blockchain teams seeking [integration with The Graph protocol](https://forum.thegraph.com/t/gip-0057-chain-integration-process/4468). It is a 3-phase process, as summarised below. + +## Stage 1. 
Technical Integration + +- Please visit [New Chain Integration](/indexing/new-chain-integration/) for information on `graph-node` support for new chains. +- Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. + +## Stage 2. Integration Validation + +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Graph Indexers test the integration on The Graph's testnet. +- Core developers and Indexers monitor stability, performance, and data determinism. + +## Stage 3. Mainnet Integration + +- Teams propose mainnet integration by submitting a Graph Improvement Proposal (GIP) and initiating a pull request (PR) on the [feature support matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (more details on the link). +- The Graph Council reviews the request and approves mainnet support, providing a successful Stage 2 and positive community feedback. + +--- + +If the process looks daunting, don't worry! The Graph Foundation is committed to supporting integrators by fostering collaboration, offering essential information, and guiding them through various stages, including navigating governance processes such as Graph Improvement Proposals (GIPs) and pull requests. 
If you have questions, please reach out to [info@thegraph.foundation](mailto:info@thegraph.foundation) or through Discord (either Pedro, The Graph Foundation member, IndexerDAO, or other core developers). + +Ready to shape the future of The Graph Network? [Start your proposal](https://github.com/graphprotocol/graph-improvement-proposals/blob/main/gips/0057-chain-integration-process.md) now and be a part of the web3 revolution! + +--- + +## Frequently Asked Questions + +### 1. How does this relate to the [World of Data Services GIP](https://forum.thegraph.com/t/gip-0042-a-world-of-data-services/3761)? + +This process is related to the Subgraph Data Service, applicable only to new Subgraph `Data Sources`. + +### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? + +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. + +### 3. How much time will the process of reaching full protocol support take? + +The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. + +Protocol support for indexing rewards depends on the stakeholders' bandwidth to proceed with testing, feedback gathering, and handling contributions to the core codebase, if applicable. 
This is directly tied to the integration's maturity and how responsive the integration team is (who may or may not be the team behind the RPC/Firehose implementation). The Foundation is here to support integrators throughout the whole process. + +### 4. How will priorities be handled? + +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/src/pages/sw/indexing/new-chain-integration.mdx new file mode 100644 index 000000000000..c401fa57b348 --- /dev/null +++ b/website/src/pages/sw/indexing/new-chain-integration.mdx @@ -0,0 +1,70 @@ +--- +title: New Chain Integration +--- + +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: + +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based on Firehose with native `graph-node` support, allowing for parallelized transforms. + +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. + +## Integration Strategies + +### 1. EVM JSON-RPC + +If the blockchain is EVM-equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain.
+ +#### Testing an EVM JSON-RPC + +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: + +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(limited tracing and optionally required for Graph Node)_ + +### 2. Firehose Integration + +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. + +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. + +#### Integration for Non-EVM chains + +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). 
New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. + +#### Specific Instrumentation for EVM (`geth`) chains + +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. + +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) + +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. + +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
+ +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_call`s are not considered good practice for developers.) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. + +1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) + +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose-compliant URL. + + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. + +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ + +## Substreams-powered Subgraphs + +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
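+ +For reference, the docker-compose edit described in step 2 of the Graph Node configuration above might look like the following. This is a sketch: `mynetwork` and the RPC URL are placeholders, and only the relevant environment entries are shown.

```yaml
# Excerpt of graph-node's docker/docker-compose.yml environment section.
# The variable name must stay `ethereum`; only the "<network-name>:<url>" value changes.
environment:
  ethereum: 'mynetwork:http://host.docker.internal:8545'
  ipfs: 'ipfs:5001'
```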
diff --git a/website/src/pages/sw/indexing/overview.mdx new file mode 100644 index 000000000000..0b9b31f5d22d --- /dev/null +++ b/website/src/pages/sw/indexing/overview.mdx @@ -0,0 +1,817 @@ +--- +title: Indexing Overview +sidebarTitle: Overview +--- + +Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn query fees that are rebated according to an exponential rebate function. + +GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, who contribute to the network. + +Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing. + +## FAQ + +### What is the minimum stake required to be an Indexer on the network? + +The minimum stake for an Indexer is currently set to 100K GRT. + +### What are the revenue streams for an Indexer? + +**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment, and the corresponding response contains a proof of query result validity. + +**Indexing rewards** - Generated via a 3% annual protocol-wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network. + +### How are indexing rewards distributed? + +Indexing rewards come from protocol inflation, which is set to 3% annual issuance.
They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** + +Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. + +### What is a proof of indexing (POI)? + +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. + +### When are indexing rewards distributed? + +Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h). + +### Can pending indexing rewards be monitored? 
+ +The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation. + +Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: + +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: + +```graphql +query indexerAllocations { + indexer(id: "") { + allocations { + activeForIndexer { + allocations { + id + } + } + } + } +} +``` + +Use Etherscan to call `getRewards()`: + +- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract) +- To call `getRewards()`: + - Expand the **9. getRewards** dropdown. + - Enter the **allocationID** in the input. + - Click the **Query** button. + +### What are disputes and where can I view them? + +Indexers' queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have a 7-epoch dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fishermen are any network participants that open disputes. + +Disputes have **three** possible outcomes, as does the deposit of the Fishermen. + +- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed.
+- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed. +- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT. + +Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab. + +### What are query fee rebates and when are they distributed? + +Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect. + +Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function. + +### What is query fee cut and indexing reward cut? + +The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters. + +- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators. + +- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. 
If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. + +### How do Indexers know which Subgraphs to index? + +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: + +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up. + +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. + +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. + +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. + +### What are the hardware requirements? + +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. +- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. + +| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) | +| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: | +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | + +### What are some basic security precautions an Indexer should take? + +- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions. + +- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed. + +## Infrastructure + +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. + +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. + +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API.
This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. + +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. + +- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. + +- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations. + +- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server. + +Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes. + +### Ports overview + +> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC and the Indexer management endpoints detailed below. + +#### Graph Node + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - | +| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - | +| 8040 | Prometheus metrics | /metrics | \--metrics-port | - | + +#### Indexer Service + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | \--metrics-port | - | + +#### Indexer Agent + +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- | +| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | + +### Setup server infrastructure using Terraform on Google Cloud + +> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba. + +#### Install prerequisites + +- Google Cloud SDK +- Kubectl command line tool +- Terraform + +#### Create a Google Cloud Project + +- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer). + +- Navigate to the `./terraform` directory, this is where all commands should be executed. + +```sh +cd terraform +``` + +- Authenticate with Google Cloud and create a new project. + +```sh +gcloud auth login +project= +gcloud projects create --enable-cloud-apis $project +``` + +- Use the Google Cloud Console's billing page to enable billing for the new project. + +- Create a Google Cloud configuration. + +```sh +proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project") +gcloud config configurations create $project +gcloud config set project "$proj_id" +gcloud config set compute/region us-central1 +gcloud config set compute/zone us-central1-a +``` + +- Enable required Google Cloud APIs. + +```sh +gcloud services enable compute.googleapis.com +gcloud services enable container.googleapis.com +gcloud services enable servicenetworking.googleapis.com +gcloud services enable sqladmin.googleapis.com +``` + +- Create a service account. 
+ +```sh +svc_name= +gcloud iam service-accounts create $svc_name \ + --description="Service account for Terraform" \ + --display-name="$svc_name" +gcloud iam service-accounts list +# Get the email of the service account from the list +svc=$(gcloud iam service-accounts list --format='get(email)' +--filter="displayName=$svc_name") +gcloud iam service-accounts keys create .gcloud-credentials.json \ + --iam-account="$svc" +gcloud projects add-iam-policy-binding $proj_id \ + --member serviceAccount:$svc \ + --role roles/editor +``` + +- Enable peering between database and Kubernetes cluster that will be created in the next step. + +```sh +gcloud compute addresses create google-managed-services-default \ + --prefix-length=20 \ + --purpose=VPC_PEERING \ + --network default \ + --global \ + --description 'IP Range for peer networks.' +gcloud services vpc-peerings connect \ + --network=default \ + --ranges=google-managed-services-default +``` + +- Create minimal terraform configuration file (update as needed). + +```sh +indexer= +cat > terraform.tfvars < \ + -f Dockerfile.indexer-service \ + -t indexer-service:latest \ +# Indexer agent +docker build \ + --build-arg NPM_TOKEN= \ + -f Dockerfile.indexer-agent \ + -t indexer-agent:latest \ +``` + +- Run the components + +```sh +docker run -p 7600:7600 -it indexer-service:latest ... +docker run -p 18000:8000 -it indexer-agent:latest ... +``` + +**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/). 
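Once both containers are up, it can be useful to confirm the two endpoints mentioned in the note above actually respond. The sketch below is a minimal reachability probe, assuming the default port mappings from the `docker run` commands (7600 for the Indexer service, 18000 for the agent's management API); it is an illustration, not part of the indexer tooling.

```python
import urllib.request
import urllib.error

def probe(name: str, url: str) -> str:
    """Return a one-line status for an HTTP endpoint, without raising."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return f"{name}: HTTP {resp.status}"
    except (urllib.error.URLError, OSError) as exc:
        return f"{name}: unreachable ({exc.__class__.__name__})"

# Ports as mapped in the `docker run` commands above.
for line in (
    probe("indexer-service", "http://localhost:7600/"),
    probe("indexer-agent management API", "http://localhost:18000/"),
):
    print(line)
```

A non-2xx or "unreachable" result for either endpoint usually means the container failed to start or the port mapping differs from the example.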
+ +#### Using K8s and Terraform + +See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section + +#### Usage + +> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`). + +#### Indexer agent + +```sh +graph-indexer-agent start \ + --ethereum \ + --ethereum-network mainnet \ + --mnemonic \ + --indexer-address \ + --graph-node-query-endpoint http://localhost:8000/ \ + --graph-node-status-endpoint http://localhost:8030/graphql \ + --graph-node-admin-endpoint http://localhost:8020/ \ + --public-indexer-url http://localhost:7600/ \ + --indexer-geo-coordinates \ + --index-node-ids default \ + --indexer-management-port 18000 \ + --metrics-port 7040 \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ + --default-allocation-amount 100 \ + --register true \ + --inject-dai true \ + --postgres-host localhost \ + --postgres-port 5432 \ + --postgres-username \ + --postgres-password \ + --postgres-database indexer \ + --allocation-management auto \ + | pino-pretty +``` + +#### Indexer service + +```sh +SERVER_HOST=localhost \ +SERVER_PORT=5432 \ +SERVER_DB_NAME=is_staging \ +SERVER_DB_USER= \ +SERVER_DB_PASSWORD= \ +graph-indexer-service start \ + --ethereum \ + --ethereum-network mainnet \ + --mnemonic \ + --indexer-address \ + --port 7600 \ + --metrics-port 7300 \ + --graph-node-query-endpoint http://localhost:8000/ \ + --graph-node-status-endpoint http://localhost:8030/graphql \ + --postgres-host localhost \ + --postgres-port 5432 \ + --postgres-username \ + --postgres-password \ + --postgres-database is_staging \ + --network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \ + | pino-pretty +``` + +#### 
Indexer CLI + +The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`. + +```sh +graph indexer connect http://localhost:18000 +graph indexer status +``` + +#### Indexer management using Indexer CLI + +The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. + +#### Usage + +The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here. + +- `graph indexer connect ` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/ 8000:8000`) + +- `graph indexer rules get [options] [ ...]` - Get one or more indexing rules using `all` as the `` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. 
This is how they are applied in the Indexer agent. + +- `graph indexer rules set [options] ...` - Set one or more indexing rules. + +- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed. + +- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index. + +- `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. + +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. + +- `graph indexer action queue allocate ` - Queue allocation action + +- `graph indexer action queue reallocate ` - Queue reallocate action + +- `graph indexer action queue unallocate ` - Queue unallocate action + +- `graph indexer actions cancel [ ...]` - Cancel all action in the queue if id is unspecified, otherwise cancel array of id with space as separator + +- `graph indexer actions approve [ ...]` - Approve multiple actions for execution + +- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately + +All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument. + +#### Indexing rules + +Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. 
When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing. + +For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`. + +Data model: + +```graphql +type IndexingRule { + identifier: string + identifierType: IdentifierType + decisionBasis: IndexingDecisionBasis! + allocationAmount: number | null + allocationLifetime: number | null + autoRenewal: boolean + parallelAllocations: number | null + maxAllocationPercentage: number | null + minSignal: string | null + maxSignal: string | null + minStake: string | null + minAverageQueryFees: string | null + custom: string | null + requireSupported: boolean | null + } + +IdentifierType { + deployment + subgraph + group +} + +IndexingDecisionBasis { + rules + never + always + offchain +} +``` + +Example usage of indexing rule: + +``` +graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK + +graph indexer rules set QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK decisionBasis always allocationAmount 123321 allocationLifetime 14 autoRenewal false requireSupported false + +graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK + +graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK +``` + +#### Actions queue CLI + +The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue. 
+ +The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like: + +- Action added to the queue by the 3rd party optimizer tool or indexer-cli user +- Indexer can use the `indexer-cli` to view all queued actions +- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input. +- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`. +- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode. +- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken. 
+ +Data model: + +```graphql +Type ActionInput { + status: ActionStatus + type: ActionType + deploymentID: string | null + allocationID: string | null + amount: string | null + poi: string | null + force: boolean | null + source: string + reason: string | null + priority: number | null +} + +ActionStatus { + queued + approved + pending + success + failed + canceled +} + +ActionType { + allocate + unallocate + reallocate + collect +} +``` + +Example usage from source: + +```bash +graph indexer actions get all + +graph indexer actions get --status queued + +graph indexer actions queue allocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 5000 + +graph indexer actions queue reallocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 0x4a58d33e27d3acbaecc92c15101fbc82f47c2ae5 55000 + +graph indexer actions queue unallocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 0x4a58d33e27d3acbaecc92c15101fbc82f47c2ae + +graph indexer actions cancel + +graph indexer actions approve 1 3 5 + +graph indexer actions execute approve +``` + +Note that supported action types for allocation management have different input requirements: + +- `Allocate` - allocate stake to a specific Subgraph deployment + + - required action params: + - deploymentID + - amount + +- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere + + - required action params: + - allocationID + - deploymentID + - optional action params: + - poi + - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment + + - required action params: + - allocationID + - deploymentID + - amount + - optional action params: + - poi + - force (forces using the provided POI even if it doesn’t match what the graph-node provides) + +#### Cost models + +Cost models provide dynamic pricing for queries based on market and query attributes. 
The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. + +#### Agora + +The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query. + +A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression. + +Example cost model: + +``` +# This statement captures the skip value, +# uses a boolean expression in the predicate to match specific queries that use `skip` +# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global +query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD; + +# This default will match any GraphQL expression. +# It uses a Global substituted into the expression to calculate cost +default => 0.1 * $SYSTEM_LOAD; +``` + +Example query costing using the above model: + +| Query | Price | +| ---------------------------------------------------------------------------- | ------- | +| { pairs(skip: 5000) { id } } | 0.5 GRT | +| { tokens { symbol } } | 0.1 GRT | +| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | + +#### Applying the cost model + +Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. 
The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them. + +```sh +indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }' +indexer cost set model my_model.agora +``` + +## Interacting with the network + +### Stake in the protocol + +The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions. + +> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools). + +Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network. + +#### Approve tokens + +1. Open the [Remix app](https://remix.ethereum.org/) in a browser + +2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json). + +3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. + +4. Under environment select `Injected Web3` and under `Account` select your Indexer address. + +5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply. + +6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei). + +#### Stake tokens + +1. Open the [Remix app](https://remix.ethereum.org/) in a browser + +2. 
In the `File Explorer` create a file named **Staking.abi** with the staking ABI. + +3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface. + +4. Under environment select `Injected Web3` and under `Account` select your Indexer address. + +5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply. + +6. Call `stake()` to stake GRT in the protocol. + +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. + +8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set the `cooldownBlocks` period to 500 blocks. + +``` +setDelegationParameters(950000, 600000, 500) +``` + +### Setting delegation parameters + +The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity. 
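As a sanity check on the parts-per-million arithmetic, the sketch below reproduces the `setDelegationParameters(950000, 600000, 500)` example from the previous section. It is a toy calculation for verifying cut percentages, not code from the staking contract.

```python
PPM = 1_000_000  # delegation parameters are expressed in parts per million

def split(total_grt: float, cut_ppm: int) -> tuple[float, float]:
    """Return (indexer_share, delegator_share) for a given cut in PPM."""
    indexer = total_grt * cut_ppm / PPM
    return indexer, total_grt - indexer

# queryFeeCut = 950000 -> 95% of query rebates to the Indexer, 5% to Delegators
assert split(100.0, 950_000) == (95.0, 5.0)
# indexingRewardCut = 600000 -> 60% of indexing rewards to the Indexer, 40% to Delegators
assert split(100.0, 600_000) == (60.0, 40.0)
```

Working in PPM rather than percentages avoids rounding ambiguity onchain; converting back to a percentage is simply `cut_ppm / 10_000`.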
+ +### How to set delegation parameters + +To set the delegation parameters using Graph Explorer interface, follow these steps: + +1. Navigate to [Graph Explorer](https://thegraph.com/explorer/). +2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One. +3. Connect the wallet you have as a signer. +4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage. +5. Submit the transaction to the network. + +> Note: This transaction will need to be confirmed by the multisig wallet signers. + +### The life of an allocation + +After being created by an Indexer a healthy allocation goes through two states. + +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. + +- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). 
When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). + +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/sw/indexing/supported-network-requirements.mdx b/website/src/pages/sw/indexing/supported-network-requirements.mdx new file mode 100644 index 000000000000..ce9919503666 --- /dev/null +++ b/website/src/pages/sw/indexing/supported-network-requirements.mdx @@ -0,0 +1,18 @@ +--- +title: Supported Network Requirements +--- + +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/src/pages/sw/indexing/tap.mdx b/website/src/pages/sw/indexing/tap.mdx new file mode 100644 index 000000000000..e81b7af5421c --- /dev/null +++ b/website/src/pages/sw/indexing/tap.mdx @@ -0,0 +1,193 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to onchain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. 
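To make the receipt-to-RAV relationship concrete, the sketch below models a RAV as a running aggregate of receipt values. This is a deliberate simplification of `tap_core` for illustration; the field names and shapes are assumptions, not the crate's API.

```python
# A RAV is modeled as (total_value, last_timestamp): the aggregate of all
# receipts it covers. Toy model only - not tap_core's actual data structures.
def aggregate(rav: tuple[int, int], receipts: list[tuple[int, int]]) -> tuple[int, int]:
    """Fold newer (timestamp, value) receipts into an existing RAV."""
    total, last_ts = rav
    for ts, value in sorted(receipts):
        if ts <= last_ts:
            raise ValueError("receipt is already covered by this RAV")
        total += value
        last_ts = ts
    return total, last_ts

rav = aggregate((0, 0), [(1, 100), (2, 250)])  # first aggregation request
rav = aggregate(rav, [(3, 50)])                # newer receipts grow the RAV's value
assert rav == (400, 3)
```

The timestamp check mirrors why a RAV can only grow: receipts at or before the RAV's covered timestamp are rejected, so updating a RAV with newer receipts always yields a strictly larger aggregate.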
+ +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. + +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Mainnet (42161) | Arbitrum Sepolia (421614) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | +| AllocationIDTracker | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | +| Escrow | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP 
updates. You can query it on The Graph Network or host it yourself on your `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+The required software version can be found [here](https://github.com/graphprotocol/indexer/blob/main/docs/networks/arbitrum-one.md#latest-releases).
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Pass the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run only _one_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. **Configure Indexer Service and TAP Agent**
+
+   Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`.
+
+   Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml).
+
+For minimal configuration, use the following template:
+
+```toml
+# You will have to change *all* the values below to match your setup.
+#
+# Some of the config below are global Graph Network values, which you can find here:
+#
+# Pro tip: if you need to load some values from the environment into this config, you
+# can override them with environment variables. For example, the following can be replaced
+# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`:
+#
+# [database]
+# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0"
+
+[indexer]
+indexer_address = "0x1111111111111111111111111111111111111111"
+operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane"
+
+[database]
+# The URL of the Postgres database used for the indexer components. The same database
+# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create
+# the necessary tables.
+postgres_url = "postgres://postgres@postgres:5432/postgres"
+
+[graph_node]
+# URL to your graph-node's query endpoint
+query_url = ""
+# URL to your graph-node's status endpoint
+status_url = ""
+
+[subgraphs.network]
+# Query URL for the Graph Network Subgraph.
+query_url = ""
+# Optional, deployment to look for in the local `graph-node`, if locally indexed.
+# Locally indexing the Subgraph is recommended.
+# NOTE: Use `query_url` or `deployment_id` only
+deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+
+[subgraphs.escrow]
+# Query URL for the Escrow Subgraph.
+query_url = ""
+# Optional, deployment to look for in the local `graph-node`, if locally indexed.
+# Locally indexing the Subgraph is recommended.
+# NOTE: Use `query_url` or `deployment_id` only
+deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+
+[blockchain]
+# The chain ID of the network that the graph network is running on
+chain_id = 1337
+# Contract address of TAP's receipt aggregate voucher (RAV) verifier.
+receipts_verifier_address = "0x2222222222222222222222222222222222222222"
+
+########################################
+# Specific configurations to tap-agent #
+########################################
+[tap]
+# This is the amount of fees you are willing to risk at any given time. For ex.
+# if the sender stops supplying RAVs for long enough and the fees exceed this
+# amount, the indexer-service will stop accepting queries from the sender
+# until the fees are aggregated.
+# NOTE: Use strings for decimal values to prevent rounding errors
+# e.g:
+# max_amount_willing_to_lose_grt = "0.1"
+max_amount_willing_to_lose_grt = 20
+
+[tap.sender_aggregator_endpoints]
+# Key-value map of all senders and their aggregator endpoints.
+# The one below is the E&N mainnet gateway sender, for example.
+0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com"
+```
+
+Notes:
+
+- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/indexing/tap/#gateway).
+- Values for `blockchain.receipts_verifier_address` must be set according to the [Blockchain addresses section](/indexing/tap/#contracts), using the appropriate chain ID.
+
+**Log Level**
+
+- You can set the log level by using the `RUST_LOG` environment variable.
+- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`.
+
+## Monitoring
+
+### Metrics
+
+All components expose port 7300, which can be scraped by Prometheus.
+
+### Grafana Dashboard
+
+You can download the [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import it.
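The metrics endpoint above can be wired straight into Prometheus. As a minimal sketch (the job names and target host names are assumptions for a typical containerized setup, and the default `/metrics` path is assumed), a scrape configuration might look like:

```yaml
# prometheus.yml fragment: scrape both components on their shared metrics port 7300.
scrape_configs:
  - job_name: "indexer-service"
    static_configs:
      - targets: ["indexer-service:7300"]
  - job_name: "tap-agent"
    static_configs:
      - targets: ["tap-agent:7300"]
```

With this in place, the Grafana dashboard linked above can use Prometheus as its data source.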
+
+### Launchpad
+
+Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/main/charts/graph-network-indexer).
diff --git a/website/src/pages/sw/indexing/tooling/firehose.mdx b/website/src/pages/sw/indexing/tooling/firehose.mdx
new file mode 100644
index 000000000000..0f0fdebbafd0
--- /dev/null
+++ b/website/src/pages/sw/indexing/tooling/firehose.mdx
@@ -0,0 +1,24 @@
+---
+title: Firehose
+---
+
+![Firehose Logo](/img/firehose-logo.png)
+
+Firehose is a new technology developed by StreamingFast working with The Graph Foundation. The product provides **previously unseen capabilities and speeds for indexing blockchain data** using a files-based and streaming-first approach.
+
+Firehose has been merged into Go Ethereum/geth with the adoption of the [Live Tracer in the v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0).
+
+Firehose extracts, transforms and saves blockchain data in a highly performant file-based strategy. Blockchain developers can then access data extracted by Firehose through binary data streams. Firehose is intended to stand as a replacement for The Graph’s original blockchain data extraction layer.
+
+## Firehose Documentation
+
+The Firehose documentation is currently maintained by the StreamingFast team [on the StreamingFast website](https://firehose.streamingfast.io/).
+
+### Getting Started
+
+- Read this [Firehose introduction](https://firehose.streamingfast.io/introduction/firehose-overview) to get an overview of what it is and why it was built.
+- Learn about the [Prerequisites](https://firehose.streamingfast.io/introduction/prerequisites) to install and deploy Firehose.
+
+### Expand Your Knowledge
+
+- Learn about the different [Firehose components](https://firehose.streamingfast.io/architecture/components) available.
diff --git a/website/src/pages/sw/indexing/tooling/graph-node.mdx b/website/src/pages/sw/indexing/tooling/graph-node.mdx
new file mode 100644
index 000000000000..f5778789213d
--- /dev/null
+++ b/website/src/pages/sw/indexing/tooling/graph-node.mdx
@@ -0,0 +1,345 @@
+---
+title: Graph Node
+---
+
+Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such, it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
+
+This page provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
+
+## Graph Node
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query.
+
+Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
+
+### PostgreSQL database
+
+The main store for Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache and the eth_call cache.
+
+### Network clients
+
+In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client, or it could be a more complex setup that load balances across multiple clients.
+
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+
+**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
+
+### IPFS Nodes
+
+Subgraph deployment metadata is stored on the IPFS network. Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+
+### Prometheus metrics server
+
+To enable monitoring and reporting, Graph Node can optionally log metrics to a Prometheus metrics server.
+
+### Getting started from source
+
+#### Install prerequisites
+
+- **Rust**
+
+- **PostgreSQL**
+
+- **IPFS**
+
+- **Additional Requirements for Ubuntu users** - To run Graph Node on Ubuntu, a few additional packages may be needed.
+
+```sh
+sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
+```
+
+#### Setup
+
+1. Start a PostgreSQL database server
+
+```sh
+initdb -D .postgres
+pg_ctl -D .postgres -l logfile start
+createdb graph-node
+```
+
+2. Clone the [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build`
+
+3.
Now that all the dependencies are set up, start the Graph Node:
+
+```sh
+cargo run -p graph-node --release -- \
+  --postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
+  --ethereum-rpc [NETWORK_NAME]:[URL] \
+  --ipfs https://ipfs.network.thegraph.com
+```
+
+### Getting started with Kubernetes
+
+A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s).
+
+### Ports
+
+When it is running, Graph Node exposes the following ports:
+
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+
+> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.
+
+## Advanced Graph Node configuration
+
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.
+
+This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
+
+### `config.toml`
+
+A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the `--config` command line switch.
+
+> When using a configuration file, it is not possible to use the options `--postgres-url`, `--postgres-secondary-hosts`, and `--postgres-host-weights`.
+
+A minimal `config.toml` file can be provided; the following file is equivalent to using the `--postgres-url` command line option:
+
+```toml
+[store]
+[store.primary]
+connection="<.. postgres-url argument ..>"
+[deployment]
+[[deployment.rule]]
+indexers = [ "<.. list of all indexing nodes ..>" ]
+```
+
+Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md).
+
+#### Multiple Graph Nodes
+
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes.
This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes) and [block ingestors](#dedicated-block-ingestion), and to split Subgraphs across nodes with [deployment rules](#deployment-rules).
+
+> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.
+
+#### Deployment rules
+
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.
+
+Example deployment rule configuration:
+
+```toml
+[deployment]
+[[deployment.rule]]
+match = { name = "(vip|important)/.*" }
+shard = "vip"
+indexers = [ "index_node_vip_0", "index_node_vip_1" ]
+[[deployment.rule]]
+match = { network = "kovan" }
+# No shard, so we use the default shard called 'primary'
+indexers = [ "index_node_kovan_0" ]
+[[deployment.rule]]
+match = { network = [ "xdai", "poa-core" ] }
+indexers = [ "index_node_other_0" ]
+[[deployment.rule]]
+# There's no 'match', so any Subgraph matches
+shards = [ "sharda", "shardb" ]
+indexers = [
+    "index_node_community_0",
+    "index_node_community_1",
+    "index_node_community_2",
+    "index_node_community_3",
+    "index_node_community_4",
+    "index_node_community_5"
+  ]
+```
+
+Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment).
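The multi-node setup described above can be sketched in Docker Compose. This is only an illustration (the service names, image tag, and file paths are assumptions, not a prescribed deployment); what matters is that each instance gets a distinct `--node-id`, which the `config.toml` deployment rules and query-node regex can then refer to:

```yaml
# docker-compose.yml fragment: two Graph Nodes sharing one config.toml,
# distinguished only by --node-id.
services:
  index-node-0:
    image: graphprotocol/graph-node
    command:
      - graph-node
      - --config=/etc/graph-node/config.toml
      - --node-id=index_node_community_0
      - --ipfs=https://ipfs.network.thegraph.com
    volumes:
      - ./config.toml:/etc/graph-node/config.toml
  query-node-0:
    image: graphprotocol/graph-node
    command:
      - graph-node
      - --config=/etc/graph-node/config.toml
      - --node-id=query_node_0
      - --ipfs=https://ipfs.network.thegraph.com
    volumes:
      - ./config.toml:/etc/graph-node/config.toml
```

Both containers point at the same shared database via the store configuration in `config.toml`.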
+
+#### Dedicated query nodes
+
+Nodes can be configured to explicitly be query nodes by including the following in the configuration file:
+
+```toml
+[general]
+query = ""
+```
+
+Any node whose `--node-id` matches the regular expression will be set up to only respond to queries.
+
+#### Database scaling via sharding
+
+For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard.
+
+Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.
+
+Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore.
+
+> It is generally better to make a single database as big as possible before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.
+
+In terms of configuring connections, start with `max_connections` in `postgresql.conf` set to 400 (or maybe even 200) and look at the `store_connection_wait_time_ms` and `store_connection_checkout_count` Prometheus metrics.
Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times can also be caused by the database being very busy (such as high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, the number of connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them.
+
+Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases).
+
+#### Dedicated block ingestion
+
+If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion:
+
+```toml
+[chains]
+ingestor = "block_ingestor_node"
+```
+
+#### Supporting multiple networks
+
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+
+- Multiple networks
+- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows).
+- Additional provider details, such as features, authentication and the type of provider (for experimental Firehose support)
+
+The `[chains]` section controls the Ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored.
The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider. + +```toml +[chains] +ingestor = "block_ingestor_node" +[chains.mainnet] +shard = "vip" +provider = [ + { label = "mainnet1", url = "http://..", features = [], headers = { Authorization = "Bearer foo" } }, + { label = "mainnet2", url = "http://..", features = [ "archive", "traces" ] } +] +[chains.kovan] +shard = "primary" +provider = [ { label = "kovan", url = "http://..", features = [] } ] +``` + +Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). + +### Environment variables + +Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). + +### Continuous deployment + +Users who are operating a scaled indexing setup with advanced configuration may benefit from managing their Graph Nodes with Kubernetes. + +- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) +- [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. + +### Managing Graph Node + +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. + +#### Logging + +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. 
Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+
+In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
+
+#### Monitoring & alerting
+
+Graph Node exposes metrics via a Prometheus endpoint on port 8040 by default. Grafana can then be used to visualise these metrics.
+
+The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml).
+
+#### Graphman
+
+`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks.
+
+The graphman command is included in the official containers, and you can `docker exec` into your graph-node container to run it. It requires a `config.toml` file.
+
+Full documentation of `graphman` commands is available in the Graph Node repository: see [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md).
+
+### Working with Subgraphs
+
+#### Indexing status API
+
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
+
+The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
+
+#### Indexing performance
+
+There are three separate parts of the indexing process:
+
+- Fetching events of interest from the provider
+- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
+- Writing the resulting data to the store
+
+These stages are pipelined (i.e.
they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
+
+Common causes of indexing slowness:
+
+- Time taken to find relevant events from the chain (call handlers in particular can be slow, given the reliance on `trace_filter`)
+- Making large numbers of `eth_calls` as part of handlers
+- A large amount of store interaction during execution
+- A large amount of data to save to the store
+- A large number of events to process
+- Slow database connection time, for crowded nodes
+- The provider itself falling behind the chain head
+- Slowness in fetching new receipts at the chain head from the provider
+
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+
+#### Failed Subgraphs
+
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:
+
+- Deterministic failures: these are failures which will not be resolved with retries
+- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time.
+
+In some cases, a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in others, a change in the Subgraph code is required.
+
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+
+#### Block and call cache
+
+Graph Node caches certain data in the store in order to save refetching it from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
+
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case, indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+
+If a block cache inconsistency is suspected, such as a transaction receipt missing an event:
+
+1. Use `graphman chain list` to find the chain name.
+2. `graphman chain check-blocks by-number ` will check if the cached block matches the provider, and deletes the block from the cache if it doesn’t.
+   1. If there is a difference, it may be safer to truncate the whole cache with `graphman chain truncate `.
+   2. If the block matches the provider, then the issue can be debugged directly against the provider.
+
+#### Querying issues and errors
+
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
+
+However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
+
+There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries.
+
+##### Query caching
+
+Graph Node caches GraphQL queries by default, which can significantly reduce database load. This can be further configured with the `GRAPH_QUERY_CACHE_BLOCKS` and `GRAPH_QUERY_CACHE_MAX_MEM` settings - read more [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching).
+
+##### Analysing queries
+
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case, the challenge is to diagnose the reason for the slowness, whether it is a general issue or something specific to that Subgraph or query, and then, if possible, to resolve it.
+
+In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
+
+Indexers can use [qlog](https://github.com/graphprotocol/qlog/) to process and summarize Graph Node's query logs. `GRAPH_LOG_QUERY_TIMING` can also be enabled to help identify and debug slow queries.
+
+Given a slow query, indexers have a few options. Of course, they can alter their cost model to significantly increase the cost of sending the problematic query. This may result in a reduction in the frequency of that query. However, this often doesn't resolve the root cause of the issue.
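As a small sketch of the logging route mentioned above, the query-timing logs can be switched on via environment variables before `graph-node` is started (log volume will grow considerably while this is enabled):

```shell
# Enable detailed GraphQL query timing logs alongside a normal info-level
# log stream, then launch graph-node (launch command is illustrative only).
export GRAPH_LOG=info
export GRAPH_LOG_QUERY_TIMING=gql
echo "query timing: $GRAPH_LOG_QUERY_TIMING"
# prints: query timing: gql
```

The resulting logs can then be fed to qlog for summarization.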
+
+##### Account-like optimisation
+
+Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like', where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions).
+
+For account-like tables, `graph-node` can generate queries that take advantage of details of how Postgres ends up storing data with such a high rate of change, namely that all of the versions for recent blocks are in a small subsection of the overall storage for such a table.
+
+The command `graphman stats show` shows, for each entity type/table in a deployment, how many distinct entities and how many entity versions each table contains. That data is based on Postgres-internal estimates, and is therefore necessarily imprecise, and can be off by an order of magnitude. A `-1` in the `entities` column means that Postgres believes that all rows contain a distinct entity.
+
+In general, tables where the number of distinct entities is less than 1% of the total number of rows/entity versions are good candidates for the account-like optimization. When the output of `graphman stats show` indicates that a table might benefit from this optimization, running `graphman stats show ` with the table name will perform a full count of the table - that can be slow, but gives a precise measure of the ratio of distinct entities to overall entity versions.
+
+Once a table has been determined to be account-like, running `graphman stats account-like ` for that table will turn on the account-like optimization for queries against it. The optimization can be turned off again with `graphman stats account-like --clear `. It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
+
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+
+#### Removing Subgraphs
+
+> This is new functionality, which will be available in Graph Node 0.29.x
+
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/sw/indexing/tooling/graphcast.mdx b/website/src/pages/sw/indexing/tooling/graphcast.mdx
new file mode 100644
index 000000000000..d1795e9be577
--- /dev/null
+++ b/website/src/pages/sw/indexing/tooling/graphcast.mdx
@@ -0,0 +1,21 @@
+---
+title: Graphcast
+---
+
+## Introduction
+
+Is there something you'd like to learn from or share with your fellow Indexers in an automated manner, but it's too much hassle or costs too much gas?
+
+Currently, the cost to broadcast information to other network participants is determined by gas fees on the Ethereum blockchain. Graphcast solves this problem by acting as an optional decentralized, distributed peer-to-peer (P2P) communication tool that allows Indexers across the network to exchange information in real time. The cost of exchanging P2P messages is near zero, with the tradeoff of no data integrity guarantees.
Nevertheless, Graphcast aims to provide message validity guarantees (i.e. that the message is valid and signed by a known protocol participant) with an open design space of reputation models. + +The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: + +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. + +### Learn More + +If you would like to learn more about Graphcast, [check out the documentation here.](https://docs.graphops.xyz/graphcast/intro) diff --git a/website/src/pages/sw/resources/_meta-titles.json b/website/src/pages/sw/resources/_meta-titles.json new file mode 100644 index 000000000000..f5971e95a8f6 --- /dev/null +++ b/website/src/pages/sw/resources/_meta-titles.json @@ -0,0 +1,4 @@ +{ + "roles": "Additional Roles", + "migration-guides": "Migration Guides" +} diff --git a/website/src/pages/sw/resources/benefits.mdx b/website/src/pages/sw/resources/benefits.mdx new file mode 100644 index 000000000000..6899e348a912 --- /dev/null +++ b/website/src/pages/sw/resources/benefits.mdx @@ -0,0 +1,92 @@ +--- +title: The Graph vs. 
Self Hosting +socialImage: https://thegraph.com/docs/img/seo/benefits.jpg +--- + +The Graph’s decentralized network has been engineered and refined to create a robust indexing and querying experience—and it’s getting better every day thanks to thousands of contributors around the world. + +The benefits of this decentralized protocol cannot be replicated by running a `graph-node` locally. The Graph Network is more reliable, more efficient, and less expensive. + +Here is an analysis: + +## Why You Should Use The Graph Network + +- Significantly lower monthly costs +- $0 infrastructure setup costs +- Superior uptime +- Access to hundreds of independent Indexers around the world +- 24/7 technical support from the global community + +## The Benefits Explained + +### Lower & More Flexible Cost Structure + +No contracts. No monthly fees. Only pay for the queries you use—with an average cost of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or by credit card. + +Query costs may vary; the quoted cost is the average at time of publication (March 2024).
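To make the quoted rate concrete, here is a small TypeScript sketch (illustrative only, not part of any official tooling) that applies the $40-per-million average to the usage tiers compared below:

```typescript
// Illustrative only: applies the quoted average of $40 per million
// queries (~$0.00004 per query, the March 2024 figure above).
const USD_PER_MILLION_QUERIES = 40

function estimatedMonthlyCostUsd(queriesPerMonth: number): number {
  return (queriesPerMonth / 1_000_000) * USD_PER_MILLION_QUERIES
}

estimatedMonthlyCostUsd(3_000_000) // 120, the "Medium Volume" figure
estimatedMonthlyCostUsd(30_000_000) // 1200, the "High Volume" figure
```

At low volumes, the Free Plan's 100,000 monthly queries cost the data consumer nothing, which is why the low-volume comparison shows $0.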
+ +## Low Volume User (less than 100,000 queries per month) + +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | + +## Medium Volume User (~3M queries per month) + +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | + +## High Volume User (~30M queries per month) + +| Cost Comparison | Self Hosted | The Graph Network | +| :--------------------------: | :-----------------------------------------: | :-------------------------------------------------------------: | +| Monthly server cost\* | $1,100 per month, per node | $0 | +| Query costs | $4,000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more 
per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | + +\*including costs for backup: $50-$100 per month + +Engineering time is based on a $200 per hour assumption + +Reflects the cost for the data consumer. Query fees are still paid to Indexers for Free Plan queries. + +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self-hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than on Ethereum mainnet. + +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). + +## No Setup Costs & Greater Operational Efficiency + +Zero setup fees. Get started immediately with no setup or overhead costs. No hardware requirements. No outages due to centralized infrastructure, and more time to concentrate on your core product. No need for backup servers, troubleshooting, or expensive engineering resources. + +## Reliability & Resiliency + +The Graph’s decentralized network gives users access to geographic redundancy that does not exist when self-hosting a `graph-node`. Queries are served reliably thanks to 99.9%+ uptime, achieved by hundreds of independent Indexers securing the network globally. + +Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally.
+ +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/sw/resources/glossary.mdx b/website/src/pages/sw/resources/glossary.mdx new file mode 100644 index 000000000000..4c5ad55cd0d3 --- /dev/null +++ b/website/src/pages/sw/resources/glossary.mdx @@ -0,0 +1,83 @@ +--- +title: Glossary +--- + +- **The Graph**: A decentralized protocol for indexing and querying data. + +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. + +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. + +- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. + +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. + +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. + +- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. + + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. + + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via a 3% annual new issuance of GRT.
+ +- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. + +- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. + +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. + +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. + +- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. + +- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. + +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. + +- **Data Consumer**: Any application or user that queries a Subgraph. + +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. + +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. 
[Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. + +- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. + +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: + + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. + + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. + +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). 
This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. + +- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. + +- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. + +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. + +- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. + +- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. + +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. + +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. + +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. + +- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. + +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. + +- **Graph CLI**: A command line interface tool for building and deploying to The Graph. + +- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. + +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. + +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. + +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/sw/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/sw/resources/migration-guides/assemblyscript-migration-guide.mdx new file mode 100644 index 000000000000..aead2514ff51 --- /dev/null +++ b/website/src/pages/sw/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -0,0 +1,524 @@ +--- +title: AssemblyScript Migration Guide +--- + +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 + +That will enable Subgraph developers to use newer features of the AS language and standard library. 
+ +This guide is applicable to anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already on that version or a higher one, you've already been using version `0.19.10` of AssemblyScript 🙂 + +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. + +## Features + +### New functionality + +- `TypedArray`s can now be built from `ArrayBuffer`s by using the [new `wrap` static method](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) +- New standard library functions: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare` and `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Added support for `x instanceof GenericClass` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Added `StaticArray<T>`, a more efficient array variant ([v0.9.3](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.3)) +- Added `Array#flat` ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Implemented `radix` argument on `Number#toString` ([v0.10.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.1)) +- Added support for separators in floating point literals ([v0.13.7](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.13.7)) +- Added support for first class functions ([v0.14.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.0)) +- Add builtins: `i32/i64/f32/f64.add/sub/mul` ([v0.14.13](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.13)) +- Implement `Array/TypedArray/String#at` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) +- Added support for template literal strings 
([v0.18.17](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.17)) +- Add `encodeURI(Component)` and `decodeURI(Component)` ([v0.18.27](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.27)) +- Add `toString`, `toDateString` and `toTimeString` to `Date` ([v0.18.29](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.29)) +- Add `toUTCString` for `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30)) +- Add `nonnull/NonNullable` builtin type ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) + +### Optimizations + +- `Math` functions such as `exp`, `exp2`, `log`, `log2` and `pow` have been replaced by faster variants ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Slightly optimize `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1)) +- Cache more field accesses in std Map and Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) +- Optimize for powers of two in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) + +### Other + +- The type of an array literal can now be inferred from its contents ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) +- Updated stdlib to Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) + +## How to upgrade? + +1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`: + +```yaml +... +dataSources: + ... + mapping: + ... + apiVersion: 0.0.9 + ... +``` + +2. Update the `graph-cli` you're using to the `latest` version by running: + +```bash +# if you have it globally installed +npm install --global @graphprotocol/graph-cli@latest + +# or in your subgraph if you have it as a dev dependency +npm install --save-dev @graphprotocol/graph-cli@latest +``` + +3. 
Do the same for `graph-ts`, but instead of installing globally, save it in your main dependencies: + +```bash +npm install --save @graphprotocol/graph-ts@latest +``` + +4. Follow the rest of the guide to fix the language breaking changes. +5. Run `codegen` and `deploy` again. + +## Breaking changes + +### Nullability + +On the older version of AssemblyScript, you could create code like this: + +```typescript +function load(): Value | null { ... } + +let maybeValue = load(); +maybeValue.aMethod(); +``` + +However, on the newer version, because the value is nullable, it requires you to check, like this: + +```typescript +let maybeValue = load() + +if (maybeValue) { + maybeValue.aMethod() // `maybeValue` is not null anymore +} +``` + +Or force it like this: + +```typescript +let maybeValue = load()! // breaks in runtime if value is null + +maybeValue.aMethod() +``` + +If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early `if` statement with a `return` in your Subgraph handler. + +### Variable Shadowing + +Before, you could do [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) and code like this would work: + +```typescript +let a = 10 +let b = 20 +let a = a + b +``` + +However, this isn't possible anymore, and the compiler returns this error: + +```typescript +ERROR TS2451: Cannot redeclare block-scoped variable 'a' + + let a = a + b; + ~~~~~~~~~~~~~ +in assembly/index.ts(4,3) +``` + +You'll need to rename your duplicate variables if you had variable shadowing. + +### Null Comparisons + +By doing the upgrade on your Subgraph, sometimes you might get errors like these: + +```typescript +ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
+ if (decimals == null) { + ~~~~ + in src/mappings/file.ts(41,21) +``` + +To solve this, you can simply change the `if` statement to something like this: + +```typescript + if (!decimals) { + + // or + + if (decimals === null) { +``` + +The same applies if you're doing `!=` instead of `==`. + +### Casting + +The common way to do casting before was to just use the `as` keyword, like this: + +```typescript +let byteArray = new ByteArray(10) +let uint8Array = byteArray as Uint8Array // equivalent to: <Uint8Array>byteArray +``` + +However, this only works in two scenarios: + +- Primitive casting (between types such as `u8`, `i32`, `bool`; e.g. `let b: isize = 10; b as usize`); +- Upcasting on class inheritance (subclass → superclass) + +Examples: + +```typescript +// primitive casting +let a: usize = 10 +let b: isize = 5 +let c: usize = a + (b as usize) +``` + +```typescript +// upcasting on class inheritance +class Bytes extends Uint8Array {} + +let bytes = new Bytes(2) +// <Uint8Array>bytes // same as: bytes as Uint8Array +``` + +There are two scenarios where you may want to cast, but using `as`/`<T>var` **isn't safe**: + +- Downcasting on class inheritance (superclass → subclass) +- Between two types that share a superclass + +```typescript +// downcasting on class inheritance +class Bytes extends Uint8Array {} + +let uint8Array = new Uint8Array(2) +// <Bytes>uint8Array // breaks in runtime :( +``` + +```typescript +// between two types that share a superclass +class Bytes extends Uint8Array {} +class ByteArray extends Uint8Array {} + +let bytes = new Bytes(2) +// <ByteArray>bytes // breaks in runtime :( +``` + +For those cases, you can use the `changetype<T>` function: + +```typescript +// downcasting on class inheritance +class Bytes extends Uint8Array {} + +let uint8Array = new Uint8Array(2) +changetype<Bytes>(uint8Array) // works :) +``` + +```typescript +// between two types that share a superclass +class Bytes extends Uint8Array {} +class ByteArray extends Uint8Array {} + +let bytes = new Bytes(2) +changetype<ByteArray>(bytes) // works 
:) +``` + +If you just want to remove nullability, you can keep using the `as` operator (or `<T>variable`), but make sure you know that value can't be null, otherwise it will break. + +```typescript +// remove nullability +let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null + +if (previousBalance != null) { + return previousBalance as AccountBalance // safe remove null +} + +let newBalance = new AccountBalance(balanceId) +``` + +For the nullability case, we recommend taking a look at the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks); it will make your code cleaner 🙂 + +Also, we've added a few more static methods to some types to ease casting: + +- `Bytes.fromByteArray` +- `Bytes.fromUint8Array` +- `BigInt.fromByteArray` +- `ByteArray.fromBigInt` + +### Nullability check with property access + +To use the [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) you can use either `if` statements or the ternary operator (`?` and `:`) like this: + +```typescript +let something: string | null = 'data' + +let somethingOrElse = something ? something : 'else' + +// or + +let somethingOrElse + +if (something) { + somethingOrElse = something +} else { + somethingOrElse = 'else' +} +``` + +However, that only works when you're doing the `if` / ternary on a variable, not on a property access, like this: + +```typescript +class Container { + data: string | null +} + +let container = new Container() +container.data = 'data' + +let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile +``` + +Which outputs this error: + +```typescript +ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. + + let somethingOrElse: string = container.data ? 
container.data : "else"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``` + +To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: + +```typescript +class Container { + data: string | null +} + +let container = new Container() +container.data = 'data' + +let data = container.data + +let somethingOrElse: string = data ? data : 'else' // compiles just fine :) +``` + +### Operator overloading with property access + +If you try to sum (for example) a nullable type (from a property access) with a non-nullable one, the AssemblyScript compiler, instead of giving a compile-time error warning that one of the values is nullable, just compiles silently, giving the code a chance to break at runtime. + +```typescript +class BigInt extends Uint8Array { + @operator('+') + plus(other: BigInt): BigInt { + // ... + } +} + +class Wrapper { + public constructor(public n: BigInt | null) {} +} + +let x = BigInt.fromI32(2) +let y: BigInt | null = null + +x + y // gives a compile-time error about nullability + +let wrapper = new Wrapper(y) + +wrapper.n = wrapper.n + x // doesn't give compile time errors as it should +``` + +We've opened an issue on the AssemblyScript compiler for this, but for now if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first. 
+ +```typescript +let wrapper = new Wrapper(y) + +if (!wrapper.n) { + wrapper.n = BigInt.fromI32(0) +} + +wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt +``` + +### Value initialization + +If you have any code like this: + +```typescript +var value: Type // null +value.x = 10 +value.y = 'content' +``` + +It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this: + +```typescript +var value = new Type() // initialized +value.x = 10 +value.y = 'content' +``` + +Also, if you have nullable properties in a GraphQL entity, like this: + +```graphql +type Total @entity { + id: Bytes! + amount: BigInt +} +``` + +And you have code similar to this: + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') +} + +total.amount = total.amount + BigInt.fromI32(1) +``` + +You'll need to make sure to initialize the `total.amount` value, because if you try to access it as in the last line for the sum, it will crash. So you either initialize it first: + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') + total.amount = BigInt.fromI32(0) +} + +total.amount = total.amount + BigInt.fromI32(1) +``` + +Or you can just change your GraphQL schema to not use a nullable type for this property; then we'll initialize it as zero on the `codegen` step 😉 + +```graphql +type Total @entity { + id: Bytes! + amount: BigInt! 
+} +``` + +```typescript +let total = Total.load('latest') + +if (total === null) { + total = new Total('latest') // already initializes non-nullable properties +} + +total.amount = total.amount + BigInt.fromI32(1) +``` + +### Class property initialization + +If you export any classes with properties that are other classes (declared by you or by the standard library) like this: + +```typescript +class Thing {} + +export class Something { + value: Thing +} +``` + +The compiler will error because you either need to add an initializer for the properties that are classes, or add the `!` operator: + +```typescript +export class Something { + constructor(public value: Thing) {} +} + +// or + +export class Something { + value: Thing + + constructor(value: Thing) { + this.value = value + } +} + +// or + +export class Something { + value!: Thing +} +``` + +### Array initialization + +The `Array` class still accepts a number to initialize the length of the list; however, you should take care because operations like `.push` will actually increase the size instead of adding to the beginning, for example: + +```typescript +let arr = new Array<string>(5) // ["", "", "", "", ""] + +arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( +``` + +Depending on the types you're using, e.g. nullable ones, and how you're accessing them, you might encounter a runtime error like this one: + +``` +ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type +``` + +To actually push at the beginning, you should either initialize the `Array` with size zero, like this: + +```typescript +let arr = new Array<string>(0) // [] + +arr.push('something') // ["something"] 
+``` + +Or you should mutate it via index: + +```typescript +let arr = new Array<string>(5) // ["", "", "", "", ""] + +arr[0] = 'something' // ["something", "", "", "", ""] +``` + +### GraphQL schema + +This is not a direct AssemblyScript change, but you may have to update your `schema.graphql` file. + +You can no longer define fields in your types that are Non-Nullable Lists. If you have a schema like this: + +```graphql +type Something @entity { + id: Bytes! +} + +type MyEntity @entity { + id: Bytes! + invalidField: [Something]! # no longer valid +} +``` + +You'll have to add an `!` to the member of the List type, like this: + +```graphql +type Something @entity { + id: Bytes! +} + +type MyEntity @entity { + id: Bytes! + invalidField: [Something!]! # valid +} +``` + +This changed because of nullability differences between AssemblyScript versions, and it's related to the `src/generated/schema.ts` file (default path, you might have changed this). + +### Other + +- Aligned `Map#set` and `Set#add` with the spec, returning `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) +- Arrays no longer inherit from ArrayBufferView, but are now distinct ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- Classes initialized from object literals can no longer define a constructor ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) +- The result of a `**` binary operation is now the common denominator integer if both operands are integers. 
Previously, the result was a float as if calling `Math/f.pow` ([v0.11.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.11.0)) +- Coerce `NaN` to `false` when casting to `bool` ([v0.14.9](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.14.9)) +- When shifting a small integer value of type `i8`/`u8` or `i16`/`u16`, only the 3 (respectively 4) least significant bits of the RHS value affect the result, analogous to the result of an `i32.shl` only being affected by the 5 least significant bits of the RHS value. Example: `someI8 << 8` previously produced the value `0`, but now produces `someI8` due to masking the RHS as `8 & 7 = 0` (3 bits) ([v0.17.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.0)) +- Bug fix of relational string comparisons when sizes differ ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) diff --git a/website/src/pages/sw/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/sw/resources/migration-guides/graphql-validations-migration-guide.mdx new file mode 100644 index 000000000000..ebed96df1002 --- /dev/null +++ b/website/src/pages/sw/resources/migration-guides/graphql-validations-migration-guide.mdx @@ -0,0 +1,538 @@ +--- +title: GraphQL Validations Migration Guide +--- + +Soon `graph-node` will support 100% coverage of the [GraphQL Validations specification](https://spec.graphql.org/June2018/#sec-Validation). + +Previous versions of `graph-node` did not support all validations and provided more lenient responses, so, in cases of ambiguity, `graph-node` ignored invalid GraphQL operation components. + +GraphQL Validations support is a pillar for upcoming features and for performance at scale on The Graph Network. + +It will also ensure determinism of query responses, a key requirement on The Graph Network. + +**Enabling the GraphQL Validations will break some existing queries** sent to The Graph API. 
+ +To be compliant with those validations, please follow the migration guide. + +> ⚠️ If you do not migrate your queries before the validations are rolled out, they will return errors and possibly break your frontends/clients. + +## Migration guide + +You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively, you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries. + +> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid. + +## Migration CLI tool + +**Most GraphQL operation errors can be found in your codebase ahead of time.** + +For this reason, we provide a smooth experience for validating your GraphQL operations during development or in CI. + +[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) is a simple CLI tool that helps validate GraphQL operations against a given schema. + +### **Getting started** + +You can run the tool as follows: + +```bash +npx @graphql-validate/cli -s https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME -o *.graphql +``` + +**Notes:** + +- Replace `$GITHUB_USER` and `$SUBGRAPH_NAME` with the appropriate values, e.g. [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks) +- The preview schema URL (https://api-next.thegraph.com/) provided is heavily rate-limited and will be sunset once all users have migrated to the new version. 
**Do not use it in production.** +- Operations are identified in files with the following extensions: [`.graphql`](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader), [`.ts`, `.tsx`, `.js`, `.jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (`-o` option). + +### CLI output + +The [`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) CLI tool will output any GraphQL operation errors as follows: + +![Error output from CLI](https://i.imgur.com/x1cBdhq.png) + +For each error, you will find a description, file path and position, and a link to a solution example (see the following section). + +## Run your local queries against the preview schema + +We provide an endpoint `https://api-next.thegraph.com/` that runs a `graph-node` version that has validations turned on. + +You can try out queries by sending them to: + +- `https://api-next.thegraph.com/subgraphs/id/` + +or + +- `https://api-next.thegraph.com/subgraphs/name//` + +To work on queries that have been flagged as having validation errors, you can use your favorite GraphQL query tool, like Altair or [GraphiQL](https://cloud.hasura.io/public/graphiql), and try your query out. Those tools will also mark those errors in their UI, even before you run the query. + +## How to solve issues + +Below, you will find all the GraphQL validation errors that could occur on your existing GraphQL operations. + +### GraphQL variables, operations, fragments, or arguments must be unique + +We applied rules for ensuring that an operation includes a unique set of GraphQL variables, operations, fragments, and arguments. + +A GraphQL operation is only valid if it does not contain any ambiguity. + +To achieve that, some components of your GraphQL operation must be unique. 
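Conceptually, every rule in this family is a duplicate-name check over the components of a document. As a rough, hypothetical sketch in TypeScript (not `graph-node`'s actual implementation), the check reduces to:

```typescript
// Toy illustration of a uniqueness check: given the names of the
// operations (or fragments, variables, arguments) found in a document,
// report every name that appears more than once.
function findDuplicateNames(names: string[]): string[] {
  const seen = new Set<string>()
  const duplicates = new Set<string>()
  for (const name of names) {
    if (seen.has(name)) duplicates.add(name)
    seen.add(name)
  }
  return Array.from(duplicates)
}

// Two operations named `myData` in one document are ambiguous:
console.log(findDuplicateNames(['myData', 'myData'])) // [ 'myData' ]
console.log(findDuplicateNames(['myData', 'myData2'])) // []
```

Any non-empty result corresponds to a validation error such as the `UniqueOperationNamesRule` shown next.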
+ +Here are a few examples of invalid operations that violate these rules: + +**Duplicate Query name (#UniqueOperationNamesRule)** + +```graphql +# The following document violates the UniqueOperationNames +# rule, since it contains two queries +# with the same name +query myData { + id +} + +query myData { + name +} +``` + +_Solution:_ + +```graphql +query myData { + id +} + +query myData2 { + # rename the second query + name +} +``` + +**Duplicate Fragment name (#UniqueFragmentNamesRule)** + +```graphql +# The following operation violates the UniqueFragmentNames +# rule. +query myData { + id + ...MyFields +} + +fragment MyFields on MyType { + metadata +} + +fragment MyFields on MyType { + name +} +``` + +_Solution:_ + +```graphql +query myData { + id + ...MyFieldsName + ...MyFieldsMetadata +} + +fragment MyFieldsMetadata on MyType { # assign a unique name to fragment + metadata +} + +fragment MyFieldsName on MyType { # assign a unique name to fragment + name +} +``` + +**Duplicate variable name (#UniqueVariableNamesRule)** + +```graphql +# The following operation violates the UniqueVariableNames rule +query myData($id: String, $id: Int) { + id + ...MyFields +} +``` + +_Solution:_ + +```graphql +query myData($id: String) { + # keep the relevant variable (here: `$id: String`) + id + ...MyFields +} +``` + +**Duplicate argument name (#UniqueArgumentNamesRule)** + +```graphql +# The following operation violates the UniqueArgumentNames rule +query myData($id: ID!) { + userById(id: $id, id: "1") { + id + } +} +``` + +_Solution:_ + +```graphql +query myData($id: ID!) 
{ + userById(id: $id) { + id + } +} +``` + +**Duplicate anonymous query (#LoneAnonymousOperationRule)** + +Also, using two anonymous operations will violate the `LoneAnonymousOperation` rule due to a conflict in the response structure: + +```graphql +# This will fail if executed together in +# a single operation with the following two queries: +query { + someField +} + +query { + otherField +} +``` + +_Solution:_ + +```graphql +query { + someField + otherField +} +``` + +Or name the two queries: + +```graphql +query FirstQuery { + someField +} + +query SecondQuery { + otherField +} +``` + +### Overlapping Fields + +A GraphQL selection set is considered valid only if it correctly resolves the eventual result set. + +If a specific selection set, or a field, creates ambiguity either by the selected field or by the arguments used, the GraphQL service will fail to validate the operation. + +Here are a few examples of invalid operations that violate this rule: + +**Conflicting field aliases (#OverlappingFieldsCanBeMergedRule)** + +```graphql +# Aliasing fields might cause conflicts, either with +# other aliases or other fields that exist on the +# GraphQL schema. +query { + dogs { + name: nickname + name + } +} +``` + +_Solution:_ + +```graphql +query { + dogs { + name: nickname + originalName: name # alias the original `name` field + } +} +``` + +**Conflicting fields with arguments (#OverlappingFieldsCanBeMergedRule)** + +```graphql +# Different arguments might lead to different data, +# so we can't assume the fields will be the same. 
+query { + dogs { + doesKnowCommand(dogCommand: SIT) + doesKnowCommand(dogCommand: HEEL) + } +} +``` + +_Solution:_ + +```graphql +query { + dogs { + knowsHowToSit: doesKnowCommand(dogCommand: SIT) + knowsHowToHeel: doesKnowCommand(dogCommand: HEEL) + } +} +``` + +Also, in more complex use-cases, you might violate this rule by using two fragments that cause a conflict in the eventual result set: + +```graphql +query { + # Eventually, we have two "x" definitions, pointing + # to different fields! + ...A + ...B +} + +fragment A on Type { + x: a +} + +fragment B on Type { + x: b +} +``` + +In addition to that, client-side GraphQL directives like `@skip` and `@include` might lead to ambiguity, for example: + +```graphql +fragment mergeSameFieldsWithSameDirectives on Dog { + name @include(if: true) + name @include(if: false) +} +``` + +[You can read more about the algorithm here.](https://spec.graphql.org/June2018/#sec-Field-Selection-Merging) + +### Unused Variables or Fragments + +A GraphQL operation is also considered valid only if all operation-defined components (variables, fragments) are used. + +Here are a few examples of GraphQL operations that violate these rules: + +**Unused variable** (#NoUnusedVariablesRule) + +```graphql +# Invalid, because $someVar is never used. +query something($someVar: String) { + someData +} +``` + +_Solution:_ + +```graphql +query something { + someData +} +``` + +**Unused Fragment** (#NoUnusedFragmentsRule) + +```graphql +# Invalid, because fragment AllFields is never used. +query something { + someData +} + +fragment AllFields on MyType { # unused :( + name + age +} +``` + +_Solution:_ + +```graphql +query something { + someData +} + +# remove the `AllFields` fragment +``` + +### Invalid or missing Selection-Set (#ScalarLeafsRule) + +Also, a GraphQL field selection is only valid if the following holds: + +- An object field must have a selection set specified. 
+- An edge field (scalar, enum) must not have a selection set specified. + +Here are a few examples of violations of these rules with the following Schema: + +```graphql +type Image { + url: String! +} + +type User { + id: ID! + avatar: Image! +} + +type Query { + user: User! +} +``` + +**Invalid Selection-Set** + +```graphql +query { + user { + id { # Invalid, because "id" is of type ID and does not have sub-fields + + } + } +} +``` + +_Solution:_ + +```graphql +query { + user { + id + } +} +``` + +**Missing Selection-Set** + +```graphql +query { + user { + id + avatar # `avatar` requires a Selection-Set for sub-fields! + } +} +``` + +_Solution:_ + +```graphql +query { + user { + id + avatar { + url + } + } +} +``` + +### Incorrect Argument values (#VariablesInAllowedPositionRule) + +GraphQL operations that pass hard-coded values to arguments must be valid, based on the type defined in the schema. + +Here are a few examples of invalid operations that violate these rules: + +```graphql +query purposes { + # If "name" is defined as "String" in the schema, + # this query will fail during validation. + purpose(name: 1) { + id + } +} + +# This might also happen when an incorrect variable is defined: + +query purposes($name: Int!) { + # If "name" is defined as `String` in the schema, + # this query will fail during validation, because the + # variable used is of type `Int` + purpose(name: $name) { + id + } +} +``` + +### Unknown Type, Variable, Fragment, or Directive (#UnknownX) + +The GraphQL API will raise an error if any unknown type, variable, fragment, or directive is used. + +Those unknown references must be fixed: + +- rename if it was a typo +- otherwise, remove + +### Fragment: invalid spread or definition + +**Invalid Fragment spread (#PossibleFragmentSpreadsRule)** + +A Fragment cannot be spread on a non-applicable type. 
+ +For example, we cannot apply a `Cat` fragment to the `Dog` type: + +```graphql +query { + dog { + ...CatSimple + } +} + +fragment CatSimple on Cat { + # ... +} +``` + +**Invalid Fragment definition (#FragmentsOnCompositeTypesRule)** + +All Fragments must be defined on (using `on ...`) a composite type, in short: an object, interface, or union. + +The following examples are invalid, since fragments cannot be defined on scalars: + +```graphql +fragment fragOnScalar on Int { + # we cannot define a fragment upon a scalar (`Int`) + something +} + +fragment inlineFragOnScalar on Dog { + ... on Boolean { + # `Boolean` is not a subtype of `Dog` + somethingElse + } +} +``` + +### Directives usage + +**Directive cannot be used at this location (#KnownDirectivesRule)** + +Only GraphQL directives (`@...`) supported by The Graph API can be used. + +Here is an example with directives supported by The Graph: + +```graphql +query { + dog { + name @include(if: true) + age @skip(if: true) + } +} +``` + +_Note: `@stream`, `@live`, `@defer` are not supported._ + +**Directive can only be used once at this location (#UniqueDirectivesPerLocationRule)** + +The directives supported by The Graph can only be used once per location. + +The following is invalid (and redundant): + +```graphql +query { + dog { + name @include(if: true) @include(if: true) + } +} +``` diff --git a/website/src/pages/sw/resources/roles/curating.mdx b/website/src/pages/sw/resources/roles/curating.mdx new file mode 100644 index 000000000000..a228ebfb3267 --- /dev/null +++ b/website/src/pages/sw/resources/roles/curating.mdx @@ -0,0 +1,89 @@ +--- +title: Curating +--- + +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. 
In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index. + +## What Does Signaling Mean for The Graph Network? + +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. + +Curators make The Graph Network efficient, and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. + +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. + +While the [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. 
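To make the deposit tax concrete, here is a hypothetical sketch of the arithmetic, assuming the 1% curation tax rate described in this guide (illustrative only; the actual on-chain share mechanics are more involved):

```typescript
// Illustrative arithmetic for the curation deposit tax (assumed 1% rate).
// This is a simplification, not the on-chain implementation.
const CURATION_TAX = 0.01

function signalAfterTax(grtDeposited: number): { burned: number; signaled: number } {
  const burned = grtDeposited * CURATION_TAX
  return { burned, signaled: grtDeposited - burned }
}

// Signaling 1,000 GRT burns 10 GRT, leaving 990 GRT of effective signal:
console.log(signalAfterTax(1000)) // { burned: 10, signaled: 990 }
```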
+ +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a Curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. + +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. + +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). + +![Explorer Subgraphs](/img/explorer-subgraphs.png) + +## How to Signal + +Within the Curator tab in Graph Explorer, Curators can signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here](/subgraphs/explorer/). + +A Curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons. + +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. + +Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions, because they have to pay a 0.5% curation tax on all auto-migrated curation shares. 
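To see how these taxes compound for an auto-migrating position, here is a hypothetical sketch that simply applies each tax to the GRT carried forward (a simplification; the real share-based accounting differs):

```typescript
// Simplified model: 1% tax on the initial signal,
// then 0.5% on each auto-migrated version. Illustrative only.
function grtAfterMigrations(initialGrt: number, migrations: number): number {
  let grt = initialGrt * (1 - 0.01) // 1% initial curation tax
  for (let i = 0; i < migrations; i++) {
    grt *= 1 - 0.005 // 0.5% tax per auto-migration
  }
  return grt
}

// 1,000 GRT signaled, then two new versions auto-migrated:
console.log(grtAfterMigrations(1000, 2).toFixed(2)) // "980.12"
```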
+ +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. + +## Withdrawing your GRT + +Curators have the option to withdraw their signaled GRT at any time. + +Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). + +Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. + +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. + +## Risks + +1. The query market is inherently young at The Graph and there is risk that your APY may be lower than you expect due to nascent market dynamics. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. 
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. + +## Curation FAQs + +### 1. What % of query fees do Curators earn? + +By signaling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro rata to their curation shares. This 10% is subject to governance. + +### 2. How do I decide which Subgraphs are high quality to signal on? + +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result: + +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signaling on. + +### 3. What’s the cost of updating a Subgraph? + +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 
0.5%, because upgrading Subgraphs is an onchain action that costs gas. + +### 4. How often can I update my Subgraph? + +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. + +### 5. Can I sell my curation shares? + +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). + +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). + +### 6. Am I eligible for a curation grant? + +Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. + +Still confused? Check out our Curation video guide below: + + diff --git a/website/src/pages/sw/resources/roles/delegating/delegating.mdx b/website/src/pages/sw/resources/roles/delegating/delegating.mdx new file mode 100644 index 000000000000..a5494ad6f039 --- /dev/null +++ b/website/src/pages/sw/resources/roles/delegating/delegating.mdx @@ -0,0 +1,143 @@ +--- +title: Delegating +--- + +To start delegating right away, check out [delegate on the graph](https://thegraph.com/explorer/delegate?chain=arbitrum-one). + +## Overview + +Delegators earn GRT by delegating GRT to Indexers, which helps network security and functionality. + + + +## Benefits of Delegating + +- Strengthen the network’s security and scalability by supporting Indexers. +- Earn a portion of rewards generated by the Indexers. + +## How Does Delegation Work? + +Delegators earn GRT rewards from the Indexer(s) they choose to delegate their GRT to. + +An Indexer's ability to process queries and earn rewards depends on three key factors: + +1. The Indexer's Self-Stake (GRT staked by the Indexer). +2. The total GRT delegated to them by Delegators. +3. The price the Indexer sets for queries. 
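The stake-related factors above can be sketched numerically. The Graph caps usable delegation at 16 times an Indexer's Self-Stake (the delegation ratio discussed in this guide), so delegated GRT beyond that cap contributes nothing. A hypothetical illustration:

```typescript
// Hypothetical illustration of effective stake under the delegation ratio.
// The ratio of 16 is the protocol parameter described in this guide.
const DELEGATION_RATIO = 16

function effectiveStake(selfStake: number, delegated: number): number {
  const usableDelegation = Math.min(delegated, selfStake * DELEGATION_RATIO)
  return selfStake + usableDelegation
}

// An Indexer with 1M GRT Self-Stake can use at most 16M GRT of delegation:
console.log(effectiveStake(1_000_000, 20_000_000)) // 17000000
```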
+ +The more GRT staked and delegated to an Indexer, the more queries they can serve, leading to higher potential rewards for both the Delegator and Indexer. + +### What is Delegation Capacity? + +Delegation Capacity refers to the maximum amount of GRT an Indexer can accept from Delegators, based on the Indexer's Self-Stake. + +The Graph Network includes a delegation ratio of 16, meaning an Indexer can accept up to 16 times their Self-Stake in delegated GRT. + +For example, if an Indexer has a Self-Stake of 1M GRT, their Delegation Capacity is 16M. + +### Why Does Delegation Capacity Matter? + +If an Indexer exceeds their Delegation Capacity, rewards for all Delegators become diluted because the excess delegated GRT cannot be used effectively within the protocol. + +This makes it crucial for Delegators to evaluate an Indexer's current Delegation Capacity before selecting an Indexer. + +Indexers can increase their Delegation Capacity by increasing their Self-Stake, thereby raising the limit for delegated tokens. + +## Delegation on The Graph + + + +> Please note this guide does not cover steps such as setting up MetaMask. The Ethereum community provides a [comprehensive resource regarding wallets](https://ethereum.org/en/wallets/). + +There are two sections in this guide: + +- The risks of delegating tokens in The Graph Network +- How to calculate expected returns as a Delegator + +## Delegation Risks + +Listed below are the main risks of being a Delegator in the protocol. + +### The Delegation Tax + +Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. + +As a Delegator, it's important to understand the following: + +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. 
For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. + +### The Undelegation Period + +When a Delegator chooses to undelegate, their tokens are subject to a 28-day undelegation period. + +This means they cannot transfer their tokens or earn any rewards for 28 days. + +After the undelegation period, GRT will return to your crypto wallet. + +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing opportunities to earn rewards. + +As a result, it’s recommended that you choose an Indexer wisely. + +![Delegation unbonding. Note the 0.5% fee in the Delegation UI, as well as the 28 day unbonding period.](/img/Delegation-Unbonding.png) + +#### Delegation Parameters + +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%. + +![Indexing Reward Cut. The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.](/img/Indexing-Reward-Cut.png) + +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations. + +- Many Indexers are active in Discord and will be happy to answer your questions. + +## Calculating a Delegator's Expected Return + +> Calculate the ROI on your delegation [here](https://thegraph.com/explorer/delegate?chain=arbitrum-one). 
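As a concrete sketch of the break-even arithmetic mentioned under the delegation tax, the days needed to earn back the 0.5% tax can be estimated from an assumed reward rate (the rate below is a hypothetical input, not a protocol value):

```typescript
// Hypothetical break-even estimate for the 0.5% delegation tax.
// `annualRatePct` is an assumed effective annual reward rate in percent.
function daysToRecoupTax(annualRatePct: number): number {
  const DELEGATION_TAX_PCT = 0.5
  const dailyRatePct = annualRatePct / 365
  return DELEGATION_TAX_PCT / dailyRatePct
}

// At an assumed 10% annual rate, the tax is recouped in just over 18 days:
console.log(Math.ceil(daysToRecoupTax(10))) // 19
```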
+ +A Delegator must consider a variety of factors to determine a return: + +An Indexer's ability to use the delegated GRT available to them impacts their rewards. + +If an Indexer does not allocate all the GRT at their disposal, they may miss out on maximizing potential earnings for both themselves and their Delegators. + +Indexers can close an allocation and collect rewards at any time within the 1 to 28-day window. However, if rewards are not promptly collected, the total rewards may appear lower, even if a percentage of rewards remain unclaimed. + +### Considering the query fee cut and indexing fee cut + +You should choose an Indexer that is transparent about setting their Query Fee and Indexing Fee Cuts. + +The formula is: + +![Delegation Image 3](/img/Delegation-Reward-Formula.png) + +### Considering the Indexer's delegation pool + + + +Delegators should consider the proportion of the Delegation Pool they own. + +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. + +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) + +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. + +## Delegator FAQs and Bugs + +### MetaMask "Pending Transaction" Bug + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. 
diff --git a/website/src/pages/sw/resources/roles/delegating/undelegating.mdx b/website/src/pages/sw/resources/roles/delegating/undelegating.mdx new file mode 100644 index 000000000000..6a361c508450 --- /dev/null +++ b/website/src/pages/sw/resources/roles/delegating/undelegating.mdx @@ -0,0 +1,69 @@ +--- +title: Undelegating +--- + +Learn how to withdraw your delegated tokens through [Graph Explorer](https://thegraph.com/explorer) or [Arbiscan](https://arbiscan.io/). + +> To avoid having to undelegate in the future, it's recommended that you select an Indexer wisely. To learn how to select an Indexer, check out the Delegate section in Graph Explorer. + +## How to Withdraw Using Graph Explorer + +### Step-by-Step + +1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio. + +2. Click on your profile. You can find it on the top right corner of the page. + - Make sure that your wallet is connected. If it's not connected, you will see the "Connect" button instead. + +3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to. + +4. Click on the Indexer from which you wish to withdraw your tokens. + - Make sure to note the specific Indexer, as you will need to find them again to withdraw. + +5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below: + + ![Undelegate button](/img/undelegate-button.png) + +6. After approximately [28 epochs](https://thegraph.com/explorer/network/epochs?chain=arbitrum-one) (28 days), return to the Delegate section and locate the specific Indexer you undelegated from. + +7. Once you find the Indexer, click on the three dots next to them and proceed to withdraw all your tokens. + +## How to Withdraw Using Arbiscan + +> This process is primarily useful if the UI in Graph Explorer experiences issues. + +### Step-by-Step + +1. 
Find your delegation transaction on Arbiscan. + - Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a) + +2. Navigate to "Transaction Action" where you can find the staking extension contract: + - [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03) + +3. Then click on "Contract". ![Contract tab on Arbiscan, between NFT Transfers and Events](/img/arbiscan-contract.png) + +4. Scroll to the bottom and copy the Contract ABI. There should be a small button next to it that allows you to copy everything. + +5. Click on your profile button in the top right corner of the page. If you haven't created an account yet, please do so. + +6. Once you're in your profile, click on "Custom ABI". + +7. Paste the custom ABI you copied from the staking extension contract, and add the custom ABI for the address: 0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03 (**sample address**) + +8. Go back to the [staking extension contract](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract). Now, call the `unstake` function in the [Write as Proxy tab](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03#writeProxyContract), which has been added thanks to the custom ABI, with the number of tokens that you delegated. + +9. If you don't know how many tokens you delegated, you can call `getDelegation` on the Read Custom tab. You will need to paste your address (delegator address) and the address of the Indexer that you delegated to, as shown in the following screenshot: + + ![Both of the addresses needed](/img/get-delegate.png) + + - This will return three numbers. The first number is the amount you can unstake. + +10. After you have called `unstake`, you can withdraw after approximately 28 epochs (28 days) by calling the `withdraw` function. + +11. 
You can see how much you will have available to withdraw by calling `getWithdrawableDelegatedTokens` on the Read Custom tab and passing it your delegation tuple. See the screenshot below: + + ![Call \`getWithdrawableDelegatedTokens\` to see amount of tokens that can be withdrawn](/img/withdraw-available.png) + +## Additional Resources + +To delegate successfully, review the [delegating documentation](/resources/roles/delegating/delegating/) and check out the delegate section in Graph Explorer. diff --git a/website/src/pages/sw/resources/subgraph-studio-faq.mdx b/website/src/pages/sw/resources/subgraph-studio-faq.mdx new file mode 100644 index 000000000000..c2d4037bd099 --- /dev/null +++ b/website/src/pages/sw/resources/subgraph-studio-faq.mdx @@ -0,0 +1,31 @@ +--- +title: Subgraph Studio FAQs +--- + +## 1. What is Subgraph Studio? + +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. + +## 2. How do I create an API Key? + +To create an API key, navigate to Subgraph Studio and connect your wallet. Then click the API Keys tab at the top, where you can create an API key. + +## 3. Can I create multiple API Keys? + +Yes! You can create multiple API Keys to use in different projects. Check out the link [here](https://thegraph.com/studio/apikeys/). + +## 4. How do I restrict a domain for an API Key? + +After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. + +## 5. Can I transfer my Subgraph to another owner? + +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. + +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. + +## 6.
How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? + +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane where you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. + +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made with this API key are paid queries, like any other on the network. diff --git a/website/src/pages/sw/resources/tokenomics.mdx b/website/src/pages/sw/resources/tokenomics.mdx new file mode 100644 index 000000000000..dac3383a28e7 --- /dev/null +++ b/website/src/pages/sw/resources/tokenomics.mdx @@ -0,0 +1,103 @@ +--- +title: Tokenomics of The Graph Network +sidebarTitle: Tokenomics +description: The Graph Network is incentivized by powerful tokenomics. Here’s how GRT, The Graph’s native work utility token, works. +--- + +## Overview + +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. + +## Specifics + +The Graph's model is akin to a B2B2C model, but it's driven by a decentralized network where participants collaborate to provide data to end users in exchange for GRT rewards. GRT is the utility token for The Graph. It coordinates and incentivizes the interaction between data providers and consumers within the network. + +The Graph plays a vital role in making blockchain data more accessible and supports a marketplace for its exchange.
To learn more about The Graph's pay-for-what-you-need model, check out its [free and growth plans](/subgraphs/billing/). + +- GRT Token Address on Mainnet: [0xc944e90c64b2c07662a292be6244bdf05cda44a7](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) + +- GRT Token Address on Arbitrum One: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) + +## The Roles of Network Participants + +There are four primary network participants: + +1. Delegators - Delegate GRT to Indexers & secure the network + +2. Curators - Find the best Subgraphs for Indexers + +3. Developers - Build & query Subgraphs + +4. Indexers - Backbone of blockchain data + +Fishermen and Arbitrators are also integral to the network's success through other contributions, supporting the work of the other primary participant roles. For more information about network roles, [read this article](https://thegraph.com/blog/the-graph-grt-token-economics/). + +![Tokenomics diagram](/img/updated-tokenomics-image.png) + +## Delegators (Passively earn GRT) + +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. + +For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. + +There is a 0.5% delegation tax which is burned whenever a Delegator delegates GRT on the network. If a Delegator chooses to withdraw their delegated GRT, the Delegator must wait for the 28-epoch unbonding period. Each epoch is 6,646 blocks, which means 28 epochs ends up being approximately 26 days. 
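The delegation figures above can be sketched with a little arithmetic. This is illustrative only: the 10% share and the 0.5% tax come from this page, while the ~12-second Ethereum block time used to convert epochs to days is an assumption.

```typescript
// Illustrative delegation math using the figures quoted above.
// Assumption: ~12-second block time; actual epoch duration varies slightly.

const BLOCKS_PER_EPOCH = 6_646
const UNBONDING_EPOCHS = 28
const SECONDS_PER_BLOCK = 12
const DELEGATION_TAX = 0.005 // 0.5% burned on delegation

// GRT that actually reaches the Indexer after the delegation tax is burned
function netDelegation(amountGrt: number): number {
  return amountGrt * (1 - DELEGATION_TAX)
}

// Annual rewards at a given effective share offered by the Indexer
function annualDelegationRewards(delegatedGrt: number, sharePct: number): number {
  return delegatedGrt * (sharePct / 100)
}

// 28 epochs of 6,646 blocks, expressed in days
function unbondingPeriodDays(): number {
  return (UNBONDING_EPOCHS * BLOCKS_PER_EPOCH * SECONDS_PER_BLOCK) / 86_400
}

console.log(netDelegation(15_000)) // 14925
console.log(annualDelegationRewards(15_000, 10)) // 1500
console.log(Math.round(unbondingPeriodDays())) // 26
```

Note how the 0.5% tax means slightly less than the full 15k GRT is actually at work, and how 28 epochs lands at roughly the 26 days stated above.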
+ +If you're reading this, you're capable of becoming a Delegator right now by heading to the [network participants page](https://thegraph.com/explorer/participants/indexers), and delegating GRT to an Indexer of your choice. + +## Curators (Earn GRT) + +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. + +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. + +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. + +## Developers + +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. + +### Creating a Subgraph + +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. + +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. + +### Querying an existing Subgraph + +Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. 
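In practice, querying a published Subgraph boils down to an authenticated POST of a GraphQL document to the gateway. The sketch below assumes the gateway URL shape; the API key, Subgraph ID, and `tokens` entity are placeholders, not real values.

```typescript
// Minimal sketch of a gateway query. API_KEY, SUBGRAPH_ID, and the `tokens`
// entity are placeholders — substitute real values from Subgraph Studio.
const API_KEY = 'YOUR_API_KEY'
const SUBGRAPH_ID = 'YOUR_SUBGRAPH_ID'
const url = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`

async function querySubgraph(query: string): Promise<unknown> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  if (!res.ok) throw new Error(`Query failed with status ${res.status}`)
  return res.json()
}

// Example (not executed here, since the key above is a placeholder):
// querySubgraph('{ tokens(first: 5) { id } }')
```

The query fee is metered against the API key's billing balance, so no extra payment logic is needed in the client.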
+ +Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. + +1% of the query fees paid to the network are burned. + +## Indexers (Earn GRT) + +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. + +Indexers can earn GRT rewards in two ways: + +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). + +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. + +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. + +In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. + +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. 
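The 16x delegation cap described above reduces to a simple check. This is illustrative math only, not the protocol's contract logic.

```typescript
// Sketch of the delegation-capacity rule: an Indexer can use delegated GRT
// up to 16 times their self-stake; anything beyond that cap sits idle.
const MIN_SELF_STAKE = 100_000 // minimum GRT to run an indexing node
const DELEGATION_RATIO = 16

function delegationCapacity(selfStake: number): number {
  return selfStake * DELEGATION_RATIO
}

function usableDelegation(selfStake: number, delegated: number): number {
  return Math.min(delegated, delegationCapacity(selfStake))
}

console.log(delegationCapacity(MIN_SELF_STAKE)) // 1600000
console.log(usableDelegation(MIN_SELF_STAKE, 2_000_000)) // 1600000 — over-delegated
```

An Indexer at the 100,000 GRT minimum can therefore put at most 1.6M GRT of delegation to work until they raise their self-stake.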
+ +The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many other factors. + +## Token Supply: Burning & Issuance + +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. + +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and the 1% of query fees burned for blockchain data. + +![Total burned GRT](/img/total-burned-grt.jpeg) + +In addition to these regularly occurring burning activities, the GRT token also has a slashing mechanism in place to penalize malicious or irresponsible behavior by Indexers. If an Indexer is slashed, 50% of their indexing rewards for the epoch are burned (while the other half goes to the fisherman), and their self-stake is slashed by 2.5%, with half of this amount being burned. This helps to ensure that Indexers have a strong incentive to act in the best interests of the network and to contribute to its security and stability. + +## Improving the Protocol + +The Graph Network is ever-evolving, and improvements to the economic design of the protocol are constantly being made to provide the best experience for all network participants. The Graph Council oversees protocol changes, and community members are encouraged to participate. Get involved with protocol improvements in [The Graph Forum](https://forum.thegraph.com/).
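The issuance, burn, and slashing percentages above can be sketched as back-of-the-envelope math. These are the documented rates, not values read from the protocol contracts.

```typescript
// Back-of-the-envelope supply and slashing math using the rates quoted above.

// ~3% annual issuance offset by ~1% annual burn leaves ~2% net supply growth
function netAnnualSupplyChangePct(issuancePct: number = 3, burnPct: number = 1): number {
  return issuancePct - burnPct
}

// On a slashing event: 50% of the epoch's indexing rewards are burned
// (the other half goes to the fisherman), and 2.5% of self-stake is
// slashed, half of which is burned.
function grtBurnedOnSlash(epochRewards: number, selfStake: number): number {
  return epochRewards * 0.5 + selfStake * 0.025 * 0.5
}

console.log(netAnnualSupplyChangePct()) // 2
console.log(grtBurnedOnSlash(1_000, 100_000)) // 1750
```

For an Indexer at the 100,000 GRT minimum self-stake with 1,000 GRT of epoch rewards, a slash burns 500 GRT of rewards plus 1,250 GRT of stake.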
diff --git a/website/src/pages/sw/sps/introduction.mdx b/website/src/pages/sw/sps/introduction.mdx new file mode 100644 index 000000000000..366c126ac60a --- /dev/null +++ b/website/src/pages/sw/sps/introduction.mdx @@ -0,0 +1,30 @@ +--- +title: Introduction to Substreams-Powered Subgraphs +sidebarTitle: Introduction +--- + +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. + +## Overview + +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +### Specifics + +There are two methods of enabling this technology: + +1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. + +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. + +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. 
+ +### Additional Resources + +Visit the following links for tutorials on using code-generation tooling to build your first end-to-end Substreams project quickly: + +- [Solana](/substreams/developing/solana/transactions/) +- [EVM](https://docs.substreams.dev/tutorials/intro-to-tutorials/evm) +- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet) +- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective) +- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra) diff --git a/website/src/pages/sw/sps/sps-faq.mdx b/website/src/pages/sw/sps/sps-faq.mdx new file mode 100644 index 000000000000..250c466d5929 --- /dev/null +++ b/website/src/pages/sw/sps/sps-faq.mdx @@ -0,0 +1,96 @@ +--- +title: Substreams-Powered Subgraphs FAQ +sidebarTitle: FAQ +--- + +## What are Substreams? + +Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. It allows you to refine and shape blockchain data for fast and seamless digestion by end-user applications. + +Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engine that serves as a blockchain data transformation layer. It's powered by [Firehose](https://firehose.streamingfast.io/), and enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](/substreams/developing/sinks/) their data anywhere. + +Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. + +## What are Substreams-powered Subgraphs? + +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. 
When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. + +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. + +## How are Substreams-powered Subgraphs different from Subgraphs? + +Subgraphs are made up of data sources which specify onchain events, and how those events should be transformed via handlers written in AssemblyScript. These events are processed sequentially, based on the order in which events happen onchain. + +By contrast, Substreams-powered Subgraphs have a single data source which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times. + +## What are the benefits of using Substreams-powered Subgraphs? + +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. + +## What are the benefits of Substreams?
+ +There are many benefits to using Substreams, including: + +- Composable: You can stack Substreams modules like LEGO blocks, and build upon community modules, further refining public data. + +- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). + +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. + +- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. + +- Access to additional data which is not available as part of the JSON RPC + +- All the benefits of the Firehose. + +## What is the Firehose? + +Developed by [StreamingFast](https://www.streamingfast.io/), the Firehose is a blockchain data extraction layer designed from scratch to process the full history of blockchains at speeds that were previously unseen. Providing a files-based and streaming-first approach, it is a core component of StreamingFast's suite of open-source technologies and the foundation for Substreams. + +Go to the [documentation](https://firehose.streamingfast.io/) to learn more about the Firehose. + +## What are the benefits of the Firehose? + +There are many benefits to using Firehose, including: + +- Lowest latency & no polling: In a streaming-first fashion, the Firehose nodes are designed to race to push out the block data first. + +- Prevents downtimes: Designed from the ground up for High Availability. + +- Never miss a beat: The Firehose stream cursor is designed to handle forks and to continue where you left off in any condition. + +- Richest data model:  Best data model that includes the balance changes, the full call tree, internal transactions, logs, storage changes, gas costs, and more. + +- Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. 
+ +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? + +The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. + +The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. + +The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. + +## What is the role of Rust modules in Substreams? + +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. + +See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. + +## What makes Substreams composable? + +When using Substreams, the composition happens at the transformation layer, enabling cached modules to be reused. + +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers. + +## How can you build and deploy a Substreams-powered Subgraph? + +After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). + +## Where can I find examples of Substreams and Substreams-powered Subgraphs?
+ +You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. + +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? + +The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/sw/sps/triggers.mdx b/website/src/pages/sw/sps/triggers.mdx new file mode 100644 index 000000000000..66687aa21889 --- /dev/null +++ b/website/src/pages/sw/sps/triggers.mdx @@ -0,0 +1,47 @@ +--- +title: Substreams Triggers +--- + +Use Custom Triggers and enable full use of GraphQL. + +## Overview + +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. + +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. + +### Defining `handleTransactions` + +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3.
+ entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you're seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object +2. The handler loops over the transactions +3. A new Subgraph entity is created for every transaction + +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). + +### Additional Resources + +To scaffold your first project in the Development Container, check out one of the [How-To Guides](/substreams/developing/dev-container/). diff --git a/website/src/pages/sw/sps/tutorial.mdx b/website/src/pages/sw/sps/tutorial.mdx new file mode 100644 index 000000000000..e20a22ba4b1c --- /dev/null +++ b/website/src/pages/sw/sps/tutorial.mdx @@ -0,0 +1,155 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +sidebarTitle: Tutorial +--- + +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. + +## Get Started + +For a video tutorial, check out [How to Index Solana with a Substreams-powered Subgraph](/sps/tutorial/#video-tutorial). + +### Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +### Step 1: Initialize Your Project + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3.
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +### Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.9 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +### Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. + +Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]!
+} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +### Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. + +The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +### Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. + +### Conclusion + +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
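Once deployed, the entities defined in the tutorial are exposed over GraphQL; graph-node pluralizes entity names, so `MyTransfer` is queried as `myTransfers`. Below is a minimal client-side sketch — the Studio endpoint is a placeholder, and the real query URL is shown in Subgraph Studio after deployment.

```typescript
// Sketch of querying the tutorial's MyTransfer entities. The endpoint is a
// placeholder; copy the actual query URL from Subgraph Studio.
const endpoint = 'https://api.studio.thegraph.com/query/<user-id>/my-project-sol/version/latest'

const query = `{
  myTransfers(first: 5) {
    id
    amount
    source
    designation
    signers
  }
}`

async function fetchTransfers(): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  return res.json()
}

// fetchTransfers() is not invoked here, since the endpoint is a placeholder.
```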
+ +### Video Tutorial + + + +### Additional Resources + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/src/pages/sw/subgraphs/_meta-titles.json b/website/src/pages/sw/subgraphs/_meta-titles.json new file mode 100644 index 000000000000..0556abfc236c --- /dev/null +++ b/website/src/pages/sw/subgraphs/_meta-titles.json @@ -0,0 +1,6 @@ +{ + "querying": "Querying", + "developing": "Developing", + "cookbook": "Cookbook", + "best-practices": "Best Practices" +} diff --git a/website/src/pages/sw/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/sw/subgraphs/best-practices/avoid-eth-calls.mdx new file mode 100644 index 000000000000..07249c97dd2a --- /dev/null +++ b/website/src/pages/sw/subgraphs/best-practices/avoid-eth-calls.mdx @@ -0,0 +1,117 @@ +--- +title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls +sidebarTitle: Avoiding eth_calls +--- + +## TLDR + +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. + +## Why Avoiding `eth_calls` Is a Best Practice + +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. + +### What Does an eth_call Look Like? 
+ +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: + +```solidity +event Transfer(address indexed from, address indexed to, uint256 value); +``` + +Suppose the tokens' pool membership is determined by a state variable named `getPoolInfo`. In this case, we would need to use an `eth_call` to query this data: + +```typescript +import { Address } from '@graphprotocol/graph-ts' +import { ERC20, Transfer } from '../generated/ERC20/ERC20' +import { TokenTransaction } from '../generated/schema' + +export function handleTransfer(event: Transfer): void { + let transaction = new TokenTransaction(event.transaction.hash.toHex()) + + // Bind the ERC20 contract instance to the given address: + let instance = ERC20.bind(event.address) + + // Retrieve pool information via eth_call + let poolInfo = instance.getPoolInfo(event.params.to) + + transaction.pool = poolInfo.toHexString() + transaction.from = event.params.from.toHexString() + transaction.to = event.params.to.toHexString() + transaction.value = event.params.value + + transaction.save() +} +``` + +This is functional; however, it is not ideal, as it slows down our Subgraph’s indexing. + +## How to Eliminate `eth_calls` + +Ideally, the smart contract should be updated to emit all necessary data within events.
For instance, modifying the smart contract to include pool information in the event could eliminate the need for `eth_calls`: + +``` +event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); +``` + +With this update, the Subgraph can directly index the required data without external calls: + +```typescript +import { Address } from '@graphprotocol/graph-ts' +import { ERC20, TransferWithPool } from '../generated/ERC20/ERC20' +import { TokenTransaction } from '../generated/schema' + +export function handleTransferWithPool(event: TransferWithPool): void { + let transaction = new TokenTransaction(event.transaction.hash.toHex()) + + transaction.pool = event.params.poolInfo.toHexString() + transaction.from = event.params.from.toHexString() + transaction.to = event.params.to.toHexString() + transaction.value = event.params.value + + transaction.save() +} +``` + +This is much more performant as it has eliminated the need for `eth_calls`. + +## How to Optimize `eth_calls` + +If modifying the smart contract is not possible and `eth_calls` are required, read “[Improve Subgraph Indexing Performance Easily: Reduce eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)” by Simon Emanuel Schmid to learn various strategies on how to optimize `eth_calls`. + +## Reducing the Runtime Overhead of `eth_calls` + +For the `eth_calls` that can not be eliminated, the runtime overhead they introduce can be minimized by declaring them in the manifest. When `graph-node` processes a block it performs all declared `eth_calls` in parallel before handlers are run. Calls that are not declared are executed sequentially when handlers run. The runtime improvement comes from performing calls in parallel rather than sequentially - that helps reduce the total time spent in calls but does not eliminate it completely. + +Currently, `eth_calls` can only be declared for event handlers. 
In the manifest, a declared call looks like this:
+
+```yaml
+event: TransferWithPool(address indexed, address indexed, uint256, bytes32 indexed)
+handler: handleTransferWithPool
+calls:
+  ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to)
+```
+
+The `calls` section is the call declaration. The part before the colon is simply a text label that is only used for error messages. The part after the colon has the form `Contract[address].function(params)`. Permissible values for address and params are `event.address` and `event.params.<name>`.
+
+The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. `graph-node` caches the results of declared `eth_calls` in memory, and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call.
+
+Note: Declared `eth_calls` can only be made in Subgraphs with `specVersion` >= 1.2.0.
+
+## Conclusion
+
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+
+3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/)
+
+4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/)
+
+5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/)
+
+6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/sw/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/sw/subgraphs/best-practices/derivedfrom.mdx new file mode 100644 index 000000000000..093eb29255ab --- /dev/null +++ b/website/src/pages/sw/subgraphs/best-practices/derivedfrom.mdx @@ -0,0 +1,89 @@ +--- +title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom +sidebarTitle: Arrays with @derivedFrom +--- + +## TLDR + +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. + +## How to Use the `@derivedFrom` Directive + +You just need to add a `@derivedFrom` directive after your array in your schema. Like this: + +```graphql +comments: [Comment!]! @derivedFrom(field: "post") +``` + +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. + +### Example Use Case for `@derivedFrom` + +An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. + +Let’s start with our two entities, `Post` and `Comment` + +Without optimization, you could implement it like this with an array: + +```graphql +type Post @entity { + id: Bytes! + title: String! + content: String! + comments: [Comment!]! +} + +type Comment @entity { + id: Bytes! + content: String! +} +``` + +Arrays like these will effectively store extra Comments data on the Post side of the relationship. 
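
To make the cost concrete, here is a sketch of the handler such a schema forces us to write. The `CommentAdded` event, its `content` and `postId` parameters, and the `Blog` data source are assumptions for illustration, not part of the example above: every new comment has to load the whole `Post`, copy its ever-growing array, and write it all back.

```typescript
import { CommentAdded } from '../generated/Blog/Blog'
import { Post, Comment } from '../generated/schema'

export function handleCommentAdded(event: CommentAdded): void {
  let comment = new Comment(event.transaction.hash.concatI32(event.logIndex.toI32()))
  comment.content = event.params.content
  comment.save()

  // Without @derivedFrom, the Post entity and its entire comments array
  // must be loaded, copied, and rewritten on every single comment.
  let post = Post.load(event.params.postId)
  if (post != null) {
    let comments = post.comments
    comments.push(comment.id)
    post.comments = comments
    post.save()
  }
}
```

With `@derivedFrom`, none of this bookkeeping is needed: the handler only sets `comment.post` and saves.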
+ +Here’s what an optimized version looks like using `@derivedFrom`: + +```graphql +type Post @entity { + id: Bytes! + title: String! + content: String! + comments: [Comment!]! @derivedFrom(field: "post") +} + +type Comment @entity { + id: Bytes! + content: String! + post: Post! +} +``` + +Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. + +This will not only make our Subgraph more efficient, but it will also unlock three features: + +1. We can query the `Post` and see all of its comments. + +2. We can do a reverse lookup and query any `Comment` and see which post it comes from. + +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. + +## Conclusion + +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. + +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) + +4. 
[Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/sw/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/sw/subgraphs/best-practices/grafting-hotfix.mdx new file mode 100644 index 000000000000..674cf6b87c62 --- /dev/null +++ b/website/src/pages/sw/subgraphs/best-practices/grafting-hotfix.mdx @@ -0,0 +1,187 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +sidebarTitle: Grafting and Hotfixing +--- + +## TLDR + +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. 
+   - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery.
+
+## Best Practices When Using Grafting for Hotfixes
+
+1. **Initial Deployment Without Grafting**
+
+   - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+   - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.
+
+2. **Implementing the Hotfix with Grafting**
+
+   - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
+   - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix.
+   - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph.
+   - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible.
+
+3. **Post-Hotfix Actions**
+
+   - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue.
+   - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance.
+     > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance.
+   - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph.
+
+4. **Important Considerations**
+   - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss.
+     - **Tip**: Use the block number of the last correctly processed event.
+   - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID.
+     - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment.
+   - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under `features`.
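
Concretely, the grafting setup described above amounts to just a few lines in `subgraph.yaml`. The deployment ID and block number here are placeholders:

```yaml
features:
  - grafting
graft:
  base: QmBaseDeploymentID # Deployment ID of the base (failed) Subgraph
  block: 6000000 # last block whose data is copied from the base
```

The worked example in the next section shows these fields in the context of a complete manifest.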
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.3.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.3.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed Subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new Subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. 
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/)
diff --git a/website/src/pages/sw/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/sw/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
new file mode 100644
index 000000000000..3a633244e0f2
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -0,0 +1,191 @@
+---
+title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
+sidebarTitle: Immutable Entities and Bytes as IDs
+---
+
+## TLDR
+
+Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance.
+
+## Immutable Entities
+
+To make an entity immutable, we simply add `(immutable: true)` to the entity declaration.
+
+```graphql
+type Transfer @entity(immutable: true) {
+  id: Bytes!
+  from: Bytes!
+  to: Bytes!
+  value: BigInt!
+}
+```
+
+By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness.
+
+An immutable entity’s data will never change after it is created. An ideal candidate for an immutable entity is one that directly logs onchain event data, such as a `Transfer` event being logged as a `Transfer` entity.
+
+### Under the hood
+
+Mutable entities have a 'block range' indicating their validity. Updating these entities requires graph-node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live, and since they won't change, no checks or updates are required while writing, and no filtering is required during queries.
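
The storage model can be sketched in plain TypeScript. This is only an illustration of the idea, not graph-node's actual code: a mutable entity keeps one row per version plus a validity range that every read must filter on, while an immutable entity is a single row with no bookkeeping.

```typescript
type VersionedRow = { id: string; value: number; from: number; to: number | null }

// Mutable entity "a": an update at block 200 closed the old version's
// range and appended a new live row.
const mutableRows: VersionedRow[] = [
  { id: 'a', value: 1, from: 100, to: 200 },
  { id: 'a', value: 2, from: 200, to: null },
]

// Every read of a mutable entity must filter for the version live at a block.
function liveAt(rows: VersionedRow[], block: number): VersionedRow[] {
  return rows.filter((r) => r.from <= block && (r.to === null || block < r.to))
}

// Immutable entity: one row forever; no range to maintain, no filter to run.
const immutableRows = new Map<string, number>([['a', 1]])

console.log(liveAt(mutableRows, 250)[0].value) // 2
console.log(immutableRows.get('a')) // 1
```

Writing a mutable entity means closing the old row's range and inserting a new one; writing an immutable entity is a single append, which is why graph-node can skip both the update and the liveness filter.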
+
+### When not to use Immutable Entities
+
+If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible.
+
+## Bytes as IDs
+
+Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type.
+
+```graphql
+type Transfer @entity(immutable: true) {
+  id: Bytes!
+  from: Bytes!
+  to: Bytes!
+  value: BigInt!
+}
+```
+
+While other ID types are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs for two reasons: character strings take twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account, which is much more expensive than the bytewise comparison used to compare Byte strings.
+
+### Reasons to Not Use Bytes as IDs
+
+1. If entity IDs must be human-readable, such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used.
+2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
+3. Indexing and querying performance improvements are not desired.
+
+### Concatenating With Bytes as IDs
+
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, because this returns a String, it significantly impedes Subgraph indexing and querying performance.
+
+Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant.
+
+```typescript
+export function handleTransfer(event: TransferEvent): void {
+  let entity = new Transfer(event.transaction.hash.concatI32(event.logIndex.toI32()))
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+
+  entity.blockNumber = event.block.number
+  entity.blockTimestamp = event.block.timestamp
+  entity.transactionHash = event.transaction.hash
+
+  entity.save()
+}
+```
+
+### Sorting With Bytes as IDs
+
+Sorting using Bytes as IDs is not optimal, as seen in this example query and response.
+
+Query:
+
+```graphql
+{
+  transfers(first: 3, orderBy: id) {
+    id
+    from
+    to
+    value
+  }
+}
+```
+
+Query response:
+
+```json
+{
+  "data": {
+    "transfers": [
+      {
+        "id": "0x00010000",
+        "from": "0xabcd...",
+        "to": "0x1234...",
+        "value": "256"
+      },
+      {
+        "id": "0x00020000",
+        "from": "0xefgh...",
+        "to": "0x5678...",
+        "value": "512"
+      },
+      {
+        "id": "0x01000000",
+        "from": "0xijkl...",
+        "to": "0x9abc...",
+        "value": "1"
+      }
+    ]
+  }
+}
+```
+
+The IDs are returned as hex, so sorting by `id` orders entities bytewise rather than numerically.
+
+To improve sorting, we should create another field on the entity that is a BigInt.
+
+```graphql
+type Transfer @entity {
+  id: Bytes!
+  from: Bytes! # address
+  to: Bytes! # address
+  value: BigInt! # uint256
+  tokenId: BigInt! # uint256
+}
+```
+
+This allows sorting to proceed in sequential numeric order.
+
+Query:
+
+```graphql
+{
+  transfers(first: 3, orderBy: tokenId) {
+    id
+    tokenId
+  }
+}
+```
+
+Query Response:
+
+```json
+{
+  "data": {
+    "transfers": [
+      {
+        "id": "0x…",
+        "tokenId": "1"
+      },
+      {
+        "id": "0x…",
+        "tokenId": "2"
+      },
+      {
+        "id": "0x…",
+        "tokenId": "3"
+      }
+    ]
+  }
+}
+```
+
+## Conclusion
+
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
+ +Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/sw/subgraphs/best-practices/pruning.mdx b/website/src/pages/sw/subgraphs/best-practices/pruning.mdx new file mode 100644 index 000000000000..2d4f9ad803e0 --- /dev/null +++ b/website/src/pages/sw/subgraphs/best-practices/pruning.mdx @@ -0,0 +1,56 @@ +--- +title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning +sidebarTitle: Pruning with indexerHints +--- + +## TLDR + +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph. + +## How to Prune a Subgraph With `indexerHints` + +Add a section called `indexerHints` in the manifest. + +`indexerHints` has three `prune` options: + +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. 
This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: <number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain.
+- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
+
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./schema.graphql
+indexerHints:
+  prune: auto
+dataSources:
+  - kind: ethereum/contract
+    name: Contract
+    network: mainnet
+```
+
+## Important Considerations
+
+- If [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: <number of blocks>` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data.
+
+- It is not possible to [graft](/subgraphs/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: <number of blocks>` that will accurately retain a set number of blocks (e.g., enough for six months).
+
+## Conclusion
+
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/)
+
+3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/sw/subgraphs/best-practices/timeseries.mdx b/website/src/pages/sw/subgraphs/best-practices/timeseries.mdx new file mode 100644 index 000000000000..9732199531a8 --- /dev/null +++ b/website/src/pages/sw/subgraphs/best-practices/timeseries.mdx @@ -0,0 +1,199 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +sidebarTitle: Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. 
+
+- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less.
+
+### Important Considerations
+
+- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing.
+- Automatic ID and Timestamp Management: `id` and `timestamp` fields are automatically managed by graph-node, reducing potential errors.
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster.
+
+## How to Implement Timeseries and Aggregations
+
+### Prerequisites
+
+You need `specVersion` 1.1.0 or later for this feature.
+
+### Defining Timeseries Entities
+
+A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
+
+- Immutable: Timeseries entities are always immutable.
+- Mandatory Fields:
+  - `id`: Must be of type `Int8!` and is auto-incremented.
+  - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp.
+
+Example:
+
+```graphql
+type Data @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  amount: BigDecimal!
+}
+```
+
+### Defining Aggregation Entities
+
+An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components:
+
+- Annotation Arguments:
+  - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`).
+  - `source`: Names the timeseries entity to aggregate over (e.g., `"Data"`).
+
+Example:
+
+```graphql
+type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
+  id: Int8!
+  timestamp: Timestamp!
+  sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+}
+```
+
+In this example, `Stats` aggregates the `amount` field from `Data` over hourly and daily intervals, computing the sum.
+
+### Querying Aggregated Data
+
+Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals.
+ +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. 
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")`
+- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")`
+- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")`
+
+Supported operators and functions include basic arithmetic (`+`, `-`, `*`, `/`), comparison operators, logical operators (`and`, `or`, `not`), and SQL functions like `greatest`, `least`, `coalesce`, etc.
+
+### Query Parameters
+
+- `interval`: Specifies the time interval (e.g., "hour").
+- `where`: Filters based on dimensions and timestamp ranges.
+- `timestamp_gte` / `timestamp_lt`: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by `timestamp` and `id` in descending order.
+- Current Data: An optional `current` argument can include the current, partially filled interval.
+
+### Conclusion
+
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/subgraphs/best-practices/pruning/)
+
+2. 
[Improve Indexing and Query Responsiveness by Using @derivedFrom](/subgraphs/best-practices/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/subgraphs/best-practices/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/subgraphs/best-practices/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/subgraphs/best-practices/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/subgraphs/best-practices/grafting-hotfix/) diff --git a/website/src/pages/sw/subgraphs/billing.mdx b/website/src/pages/sw/subgraphs/billing.mdx new file mode 100644 index 000000000000..e3a834f86844 --- /dev/null +++ b/website/src/pages/sw/subgraphs/billing.mdx @@ -0,0 +1,214 @@ +--- +title: Billing +--- + +## Querying Plans + +There are two plans to use when querying Subgraphs on The Graph Network. + +- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. + +- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. + + + +## Query Payments with credit card + +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) + 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). + 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". + 3. 
Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. + 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details. +- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota. + +## Query Payments with GRT + +Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to: + +- Add and withdraw GRT from your account balance. +- Keep track of your balances based on how much GRT you have added to your account balance, how much you have removed, and your invoices. +- Automatically pay invoices based on query fees generated, as long as there is enough GRT in your account balance. + +### GRT on Arbitrum or Ethereum + +The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. + +To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: + +- If you already have GRT on Ethereum, you can bridge it to Arbitrum. 
You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges:
+
+- [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
+
+- [TransferTo](https://transferto.xyz/swap)
+
+- If you already have assets on Arbitrum, you can swap them for GRT via a swapping protocol like Uniswap.
+
+- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange.
+
+> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt).
+
+Once you bridge GRT, you can add it to your billing balance.
+
+### Adding GRT using a wallet
+
+1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/).
+2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect".
+3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet".
+4. Use the slider to estimate the number of queries you expect to make on a monthly basis.
+   - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page.
+5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network.
+6. Select the number of months you would like to prepay.
+   - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time.
+7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable.
+8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from your wallet.
+   - If you are prepaying for multiple months, you must allow access to the amount that corresponds to the total for those months. 
This interaction will not cost any gas. +9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. + +- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance. + +### Withdrawing GRT using a wallet + +1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +4. Enter the amount of GRT you would like to withdraw. +5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. +6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. + +### Adding GRT using a multisig wallet + +1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/subgraphs/billing/). +2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. +3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". +4. Use the slider to estimate the number of queries you expect to make on a monthly basis. + - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. +5. Choose "Cryptocurrency". 
GRT is currently the only cryptocurrency accepted on The Graph Network.
+6. Select the number of months you would like to prepay.
+   - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time.
+7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable.
+8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from your wallet.
+   - If you are prepaying for multiple months, you must allow access to the amount that corresponds to the total for those months. This interaction will not cost any gas.
+9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs.
+
+- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance.
+
+## Getting GRT
+
+This section will show you how to get GRT to pay for query fees.
+
+### Coinbase
+
+This will be a step by step guide for purchasing GRT on Coinbase.
+
+1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
+2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
+3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page.
+4. Select the currency you want to purchase. Select GRT.
+5. Select your preferred payment method.
+6. Select the amount of GRT you want to purchase.
+7. Review your purchase and click "Buy GRT".
+8. Confirm your purchase and you will have successfully purchased GRT.
+9. 
You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
+   - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page.
+   - Click on the "Send" button next to the GRT account.
+   - Enter the amount of GRT you want to send and the wallet address you want to send it to.
+   - Click "Continue" and confirm your transaction.
+
+Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet.
+
+You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
+
+### Binance
+
+This will be a step by step guide for purchasing GRT on Binance.
+
+1. Go to [Binance](https://www.binance.com/en) and create an account.
+2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
+3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner.
+4. You will be taken to a page where you can select the currency you want to purchase. Select GRT.
+5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more.
+6. Select the amount of GRT you want to purchase.
+7. Review your purchase and click "Buy GRT".
+8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet.
+9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
+   - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. 
+   - Click on the "wallet" button, click withdraw, and select GRT.
+   - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to.
+   - Click "Continue" and confirm your transaction.
+
+You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
+
+### Uniswap
+
+This is how you can purchase GRT on Uniswap.
+
+1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet.
+2. Select the token you want to swap from. Select ETH.
+3. Select the token you want to swap to. Select GRT.
+   - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)
+4. Enter the amount of ETH you want to swap.
+5. Click "Swap".
+6. Confirm the transaction in your wallet and wait for the transaction to process.
+
+You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-).
+
+## Getting Ether
+
+This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts.
+
+### Coinbase
+
+This will be a step by step guide for purchasing ETH on Coinbase.
+
+1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
+2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
+3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page.
+4. Select the currency you want to purchase. Select ETH.
+5. Select your preferred payment method.
+6. 
Enter the amount of ETH you want to purchase. +7. Review your purchase and click "Buy ETH". +8. Confirm your purchase and you will have successfully purchased ETH. +9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/). + - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page. + - Click on the "Send" button next to the ETH account. + - Enter the amount of ETH you want to send and the wallet address you want to send it to. + - Ensure that you are sending to your Ethereum wallet address on Arbitrum One. + - Click "Continue" and confirm your transaction. + +You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency). + +### Binance + +This will be a step by step guide for purchasing ETH on Binance. + +1. Go to [Binance](https://www.binance.com/en) and create an account. +2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. +3. Once you have verified your identity, purchase ETH by clicking on the "Buy Now" button on the homepage banner. +4. Select the currency you want to purchase. Select ETH. +5. Select your preferred payment method. +6. Enter the amount of ETH you want to purchase. +7. Review your purchase and click "Buy ETH". +8. Confirm your purchase and you will see your ETH in your Binance Spot Wallet. +9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/). + - To withdraw the ETH to your wallet, add your wallet's address to the withdrawal whitelist. + - Click on the "wallet" button, click withdraw, and select ETH. + - Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to. 
+ - Ensure that you are sending to your Ethereum wallet address on Arbitrum One. + - Click "Continue" and confirm your transaction. + +You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582). + +## Billing FAQs + +### How many queries will I need? + +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. + +We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. + +Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage. + +### Can I withdraw GRT from my billing balance? + +Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). + +### What happens when my billing balance runs out? Will I get a warning? + +You will receive several email notifications before your billing balance runs out. 
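The visit-based estimate described in the FAQ above can be sketched as a quick calculation. This is a minimal illustration of the heuristic, and all traffic numbers below are hypothetical placeholders, not real usage data:

```python
# Rough monthly query estimate, following the FAQ heuristic:
# daily visits multiplied by the queries fired by your most active page.
# The inputs here are illustrative placeholders only.

def estimate_monthly_queries(daily_visits: int, queries_per_visit: int, days: int = 30) -> int:
    """Estimate monthly query volume for billing planning."""
    return daily_visits * queries_per_visit * days

# Example: 2,000 daily visits, 15 queries fired by the busiest page.
estimate = estimate_monthly_queries(2_000, 15)
print(estimate)  # 900000 queries/month, well above the 100,000 Free Plan quota
```

Comparing the result against the 100,000-query Free Plan quota gives a first idea of whether the Growth Plan is needed.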
diff --git a/website/src/pages/sw/subgraphs/cookbook/arweave.mdx b/website/src/pages/sw/subgraphs/cookbook/arweave.mdx
new file mode 100644
index 000000000000..e59abffa383f
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/cookbook/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach out to us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration only indexes Arweave as a blockchain (blocks and transactions); it does not index the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. 
`@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have defined.
+
+During Subgraph development there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for an Arweave Subgraph: + +```yaml +specVersion: 1.3.0 +description: Arweave Blocks Indexing +schema: + file: ./schema.graphql # link to the schema file +dataSources: + - kind: arweave + name: arweave-blocks + network: arweave-mainnet # The Graph only supports Arweave Mainnet + source: + owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet + startBlock: 0 # set this to 0 to start indexing from chain genesis + mapping: + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/blocks.ts # link to the file with the Assemblyscript mappings + entities: + - Block + - Transaction + blockHandlers: + - handler: handleBlock # the function name in the mapping file + transactionHandlers: + - handler: handleTx # the function name in the mapping file +``` + +- Arweave Subgraphs introduce a new kind of data source (`arweave`) +- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` +- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet + +Arweave data sources support two types of handlers: + +- `blockHandlers` - Run on every new Arweave block. No source.owner is required. +- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner` + +> The source.owner can be the owner's address, or their Public Key. +> +> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users. +> +> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet. + +## Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. 
There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+  timestamp: u64
+  lastRetarget: u64
+  height: u64
+  indepHash: Bytes
+  nonce: Bytes
+  previousBlock: Bytes
+  diff: Bytes
+  hash: Bytes
+  txRoot: Bytes
+  txs: Bytes[]
+  walletList: Bytes
+  rewardAddr: Bytes
+  tags: Tag[]
+  rewardPool: Bytes
+  weaveSize: Bytes
+  blockSize: Bytes
+  cumulativeDiff: Bytes
+  hashListMerkle: Bytes
+  poa: ProofOfAccess
+}
+
+class Transaction {
+  format: u32
+  id: Bytes
+  lastTx: Bytes
+  owner: Bytes
+  tags: Tag[]
+  target: Bytes
+  quantity: Bytes
+  data: Bytes
+  dataSize: Bytes
+  dataRoot: Bytes
+  signature: Bytes
+  reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token <ACCESS_TOKEN>
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
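For instance, assuming a schema that defines `Block` and `Transaction` entities (the entity and field names below depend entirely on your own `schema.graphql` and are illustrative), a query might look like:

```graphql
{
  blocks(first: 5, orderBy: height, orderDirection: desc) {
    id
    height
    timestamp
  }
  transactions(first: 5) {
    id
    owner
  }
}
```

The same pagination, ordering, and filtering arguments available to Ethereum Subgraphs apply here.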
+ +## Example Subgraphs + +Here is an example Subgraph for reference: + +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) + +## FAQ + +### Can a Subgraph index Arweave and other chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can I index the stored files on Arweave? + +Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). + +### Can I identify Bundlr bundles in my Subgraph? + +This is not currently supported. + +### How can I filter transactions to a specific account? + +The source.owner can be the user's public key or account address. + +### What is the current encryption format? + +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). 
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+  "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+  "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+  "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+  "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+  let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+  let result = '', i: i32, l = bytes.length;
+  for (i = 2; i < l; i += 3) {
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+    result += alphabet[bytes[i] & 0x3F];
+  }
+  if (i === l + 1) { // 1 octet yet to write
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[(bytes[i - 2] & 0x03) << 4];
+    if (!urlSafe) {
+      result += "==";
+    }
+  }
+  if (i === l) { // 2 octets yet to write; padding is only added in non-URL-safe mode
+    result += alphabet[bytes[i - 2] >> 2];
+    result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+    result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+    if (!urlSafe) {
+      result += "=";
+    }
+  }
+  return result;
+}
+```
diff --git a/website/src/pages/sw/subgraphs/cookbook/enums.mdx b/website/src/pages/sw/subgraphs/cookbook/enums.mdx
new file mode 100644
index 000000000000..9f55ae07c54b
--- /dev/null
+++ 
b/website/src/pages/sw/subgraphs/cookbook/enums.mdx @@ -0,0 +1,274 @@ +--- +title: Categorize NFT Marketplaces Using Enums +--- + +Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces. + +## What are Enums? + +Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values. + +### Example of Enums in Your Schema + +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. + +You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. + +Here's what an enum definition might look like in your schema, based on the example above: + +```graphql +enum TokenStatus { + OriginalOwner + SecondOwner + ThirdOwner +} +``` + +This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity. + +To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types). + +## Benefits of Using Enums + +- **Clarity:** Enums provide meaningful names for values, making data easier to understand. +- **Validation:** Enums enforce strict value definitions, preventing invalid data entries. +- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner. + +### Without Enums + +If you choose to define the type as a string instead of using an Enum, your code might look like this: + +```graphql +type Token @entity { + id: ID! + tokenId: BigInt! + owner: Bytes! 
# Owner of the token + tokenStatus: String! # String field to track token status + timestamp: BigInt! +} +``` + +In this schema, `TokenStatus` is a simple string with no specific, allowed values. + +#### Why is this a problem? + +- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set. +- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable. + +### With Enums + +Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used. + +Enums provide type safety, minimize typo risks, and ensure consistent and reliable results. + +## Defining Enums for NFT Marketplaces + +> Note: The following guide uses the CryptoCoven NFT smart contract. + +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: + +```gql +# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) +enum Marketplace { + OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the marketplace + OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace + SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace + LooksRare # Represents when a CryptoCoven NFT is traded on the LookRare marketplace + # ...and other marketplaces +} +``` + +## Using Enums for NFT Marketplaces + +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. + +For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. 
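For illustration, a hypothetical sale handler might record the enum's string representation on an entity. The `Sale` entity, `SaleEvent` type, and `handleSale` function below are assumed names for this sketch, not real CryptoCoven contract bindings; only the enum-assignment pattern is the point:

```ts
// Hypothetical sketch: persisting a Marketplace enum value on a Sale entity.
// `Sale`, `SaleEvent`, and `handleSale` are illustrative names only.
export function handleSale(event: SaleEvent): void {
  let sale = new Sale(event.transaction.hash.toHex())
  sale.tokenId = event.params.tokenId
  // Assign the enum's string representation; the schema restricts this
  // field to the values declared in the Marketplace enum
  sale.marketplace = 'SeaPort'
  sale.save()
}
```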
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  } else {
+    return 'Unknown' // Fallback so that every code path returns a string
+  }
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity. 
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response. + +```gql +{ + accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) { + id + sendCount + receiveCount + totalSpent + uniqueMarketplacesCount + marketplaces { + marketplace # This field returns the enum value representing the marketplace + } + } +} +``` + +#### Returns + +This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity: + +```gql +{ + "data": { + "accounts": [ + { + "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0", + "sendCount": "44", + "receiveCount": "44", + "totalSpent": "1197500000000000000", + "uniqueMarketplacesCount": "7", + "marketplaces": [ + { + "marketplace": "OpenSeaV1" + }, + { + "marketplace": "OpenSeaV2" + }, + { + "marketplace": "GenieSwap" + }, + { + "marketplace": "CryptoCoven" + }, + { + "marketplace": "Unknown" + }, + { + "marketplace": "LooksRare" + }, + { + "marketplace": "NFTX" + } + ] + } + ] + } +} +``` + +#### Query 2: Most Active Marketplace for CryptoCoven transactions + +This query does the following: + +- It identifies the marketplace with the highest volume of CryptoCoven transactions. +- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data. 
+ +```gql +{ + marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) { + marketplace + transactionCount + } +} +``` + +#### Result 2 + +The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "Unknown", + "transactionCount": "222" + } + ] + } +} +``` + +#### Query 3: Marketplace Interactions with High Transaction Counts + +This query does the following: + +- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces. +- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy. + +```gql +{ + marketplaceInteractions( + first: 4 + orderBy: transactionCount + orderDirection: desc + where: { transactionCount_gt: "100", marketplace_not: "Unknown" } + ) { + marketplace + transactionCount + } +} +``` + +#### Result 3 + +Expected output includes the marketplaces that meet the criteria, each represented by an enum value: + +```gql +{ + "data": { + "marketplaceInteractions": [ + { + "marketplace": "NFTX", + "transactionCount": "201" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "148" + }, + { + "marketplace": "CryptoCoven", + "transactionCount": "117" + }, + { + "marketplace": "OpenSeaV1", + "transactionCount": "111" + } + ] + } +} +``` + +## Additional Resources + +For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums). 
diff --git a/website/src/pages/sw/subgraphs/cookbook/grafting.mdx b/website/src/pages/sw/subgraphs/cookbook/grafting.mdx
new file mode 100644
index 000000000000..d9abe0e70d2a
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/cookbook/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to that of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will cover a basic use case: replacing an existing contract with an identical contract (with a new address, but the same code). Then, we will graft a new Subgraph that tracks the new contract onto the existing "base" Subgraph.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+ +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. + +### Best Practices + +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. + +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. + +By adhering to these guidelines, you minimize risks and ensure a smoother migration process. + +## Building an Existing Subgraph + +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: + +- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) + +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). + +## Subgraph Manifest Definition + +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest that you will use: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: ethereum + name: Lock + network: sepolia + source: + address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63' + abi: Lock + startBlock: 5955690 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts +``` + +- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. + +## Grafting Manifest Definition + +Grafting requires adding two new items to the original Subgraph manifest: + +```yaml +--- +features: + - grafting # feature name +graft: + base: Qm... # Subgraph ID of base Subgraph + block: 5956000 # block number +``` + +- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. + +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting + +## Deploying the Base Subgraph + +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. 
Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It returns something like this:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      }
+    ]
+  }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly.
If you run the following command in The Graph Playground
+
+```graphql
+{
+  withdrawals(first: 5) {
+    id
+    amount
+    when
+  }
+}
+```
+
+It should return the following:
+
+```
+{
+  "data": {
+    "withdrawals": [
+      {
+        "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+        "amount": "0",
+        "when": "1716394824"
+      },
+      {
+        "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+        "amount": "0",
+        "when": "1716394848"
+      },
+      {
+        "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+        "amount": "0",
+        "when": "1716429732"
+      }
+    ]
+  }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing older `graft-example` data as well as newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` afterwards, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
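The way the two data sets were combined can be pictured with a small conceptual sketch. This is only an illustration of the behavior described above, not Graph Node's actual implementation, and the entity block numbers are hypothetical:

```javascript
// Conceptual sketch only — not Graph Node internals.
// Grafting copies the base Subgraph's data up to and including the graft
// block, then continues indexing with the new Subgraph from that block on.
function graftedView(baseEntities, graftBlock, newEntities) {
  const copiedFromBase = baseEntities.filter((e) => e.block <= graftBlock)
  const indexedByNew = newEntities.filter((e) => e.block > graftBlock)
  return copiedFromBase.concat(indexedByNew)
}

// Two withdrawals indexed by the old Subgraph, one by the replacement
// (block numbers are made up for illustration; the graft block is 5956000):
const base = [
  { id: 'event1', block: 5955700 },
  { id: 'event2', block: 5955800 },
]
const replacement = [{ id: 'event3', block: 5958000 }]

console.log(graftedView(base, 5956000, replacement).map((e) => e.id))
// logs: [ 'event1', 'event2', 'event3' ]
```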
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/sw/subgraphs/cookbook/near.mdx b/website/src/pages/sw/subgraphs/cookbook/near.mdx
new file mode 100644
index 000000000000..e78a69eb7fa2
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/cookbook/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+ +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: + +- Block handlers: these are run on every new block +- Receipt handlers: run every time a message is executed at a specified account + +[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): + +> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point. + +## Building a NEAR Subgraph + +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. + +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. + +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. + +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. + +There are three aspects of Subgraph definition: + +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. + +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). + +**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+  file: ./src/schema.graphql # link to the schema file
+dataSources:
+  - kind: near
+    network: near-mainnet
+    source:
+      account: app.good-morning.near # This data source will monitor this account
+      startBlock: 10662188 # Required for NEAR
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      blockHandlers:
+        - handler: handleNewBlock # the function name in the mapping file
+      receiptHandlers:
+        - handler: handleReceipt # the function name in the mapping file
+      file: ./src/mapping.ts # link to the file with the AssemblyScript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with one of the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`.
If only a list of prefixes or suffixes is necessary the other field can be omitted. + +```yaml +accounts: + prefixes: + - app + - good + suffixes: + - morning.near + - morning.testnet +``` + +NEAR data sources support two types of handlers: + +- `blockHandlers`: run on every new NEAR block. No `source.account` is required. +- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). + +### Schema Definition + +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). + +### AssemblyScript Mappings + +The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). + +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/). 
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Always zero when version < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy <subgraph-name>
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+  }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface.
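As a sketch, such an endpoint accepts standard GraphQL-over-HTTP requests like any other Subgraph. The endpoint URL below is a placeholder, not a real deployment:

```javascript
// Sketch: building the GraphQL-over-HTTP request for the sync-status query
// shown above. Replace the placeholder endpoint with your own deployment's URL.
const endpoint = 'https://<your-graph-node>/subgraphs/name/<subgraph-name>'

const body = JSON.stringify({
  query: '{ _meta { block { number } } }',
})

// Sending it requires a live endpoint, e.g.:
// fetch(endpoint, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body })
//   .then((res) => res.json())
//   .then((json) => console.log(json.data._meta.block.number))
console.log(body)
```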
Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. + +## Example Subgraphs + +Here are some example Subgraphs for reference: + +[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) + +[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts) + +## FAQ + +### How does the beta work? + +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! + +### Can a Subgraph index both NEAR and EVM chains? + +No, a Subgraph can only support data sources from one chain/network. + +### Can Subgraphs react to more specific triggers? + +Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. + +### Will receipt handlers trigger for accounts and their sub-accounts? + +If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts: + +```yaml +accounts: + suffixes: + - mintbase1.near +``` + +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? + +This is not supported. We are evaluating whether this functionality is required for indexing. + +### Can I use data source templates in my NEAR Subgraph? + +This is not currently supported. We are evaluating whether this functionality is required for indexing. + +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? 
+ +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. + +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? + +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. + +## References + +- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton) diff --git a/website/src/pages/sw/subgraphs/cookbook/polymarket.mdx b/website/src/pages/sw/subgraphs/cookbook/polymarket.mdx new file mode 100644 index 000000000000..74efe387b0d7 --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/polymarket.mdx @@ -0,0 +1,148 @@ +--- +title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph +sidebarTitle: Query Polymarket Data +--- + +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. + +## Polymarket Subgraph on Graph Explorer + +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. + +![Polymarket Playground](/img/Polymarket-playground.png) + +## How to use the Visual Query Editor + +The visual query editor helps you test sample queries from your Subgraph. 
+ +You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. + +### Example Query: Get the top 5 highest payouts from Polymarket + +``` +{ + redemptions(orderBy: payout, orderDirection: desc, first: 5) { + payout + redeemer + id + timestamp + } +} +``` + +### Example output + +``` +{ + "data": { + "redemptions": [ + { + "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b", + "payout": "6274509531681", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929672" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7", + "payout": "2246253575996", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26", + "payout": "2135448291991", + "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689", + "timestamp": "1704932625" + }, + { + "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa", + "payout": "1917395333835", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1726701528" + }, + { + "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30", + "payout": "1862505580000", + "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c", + "timestamp": "1722929866" + } + ] + } +} +``` + +## Polymarket's GraphQL Schema + +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). + +### Polymarket Subgraph Endpoint + +https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp + +The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer). + +![Polymarket Endpoint](/img/Polymarket-endpoint.png) + +## How to Get your own API Key + +1. 
Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in JSON format.
+
+The following code example will return the exact same output as above.
+
+### Sample Code from node.js
+
+```
+const axios = require('axios');
+
+const graphqlQuery = `{
+  redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+    payout
+    redeemer
+    id
+    timestamp
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+  method: 'post',
+  url: queryUrl,
+  data: {
+    query: graphqlQuery,
+  },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+  .then((response) => {
+    // Handle the response here
+    const data = response.data.data
+    console.log(data)
+  })
+  .catch((error) => {
+    // Handle any errors
+    console.error(error);
+  });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+ +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/sw/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/sw/subgraphs/cookbook/secure-api-keys-nextjs.mdx new file mode 100644 index 000000000000..e17e594408ff --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -0,0 +1,123 @@ +--- +title: How to Secure API Keys Using Next.js Server Components +--- + +## Overview + +We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). + +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. + +### Caveats + +- Next.js server components do not protect API keys from being drained using denial of service attacks. +- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. +- Next.js server components introduce centralization risks as the server can go down. + +### Why It's Needed + +In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. 
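To see the problem concretely, here is a minimal sketch of the insecure client-side pattern (the key value and Subgraph ID are placeholders): any visitor can read the key out of the shipped bundle or the browser's network tab.

```javascript
// Anti-pattern sketch: an API key referenced in client-side React code.
// Even when loaded from a .env file at build time, the value is inlined
// into the JavaScript bundle shipped to every visitor.
const API_KEY = 'placeholder-api-key' // visible to anyone who opens DevTools

function buildGatewayUrl(apiKey, subgraphId) {
  // The key is embedded directly in the request URL, so it also appears
  // in the network tab of every user's browser.
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`
}

console.log(buildGatewayUrl(API_KEY, 'placeholder-subgraph-id'))
```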
+ +### Using client-side rendering to query a Subgraph + +![Client-side rendering](/img/api-key-client-side-rendering.png) + +### Prerequisites + +- An API key from [Subgraph Studio](https://thegraph.com/studio) +- Basic knowledge of Next.js and React. +- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). + +## Step-by-Step Cookbook + +### Step 1: Set Up Environment Variables + +1. In our Next.js project root, create a `.env.local` file. +2. Add our API key: `API_KEY=`. + +### Step 2: Create a Server Component + +1. In our `components` directory, create a new file, `ServerComponent.js`. +2. Use the provided example code to set up the server component. + +### Step 3: Implement Server-Side API Request + +In `ServerComponent.js`, add the following code: + +```javascript +const API_KEY = process.env.API_KEY + +export default async function ServerComponent() { + const response = await fetch( + `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`, + { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + query: /* GraphQL */ ` + { + factories(first: 5) { + id + poolCount + txCount + totalVolumeUSD + } + } + `, + }), + }, + ) + + const responseData = await response.json() + const data = responseData.data + + return ( +
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <div>
+      <ServerComponent />
+    </div>
+ ) +} +``` + +### Step 5: Run and Test Our Dapp + +Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. + +![Server-side rendering](/img/api-key-server-side-rendering.png) + +### Conclusion + +By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further. diff --git a/website/src/pages/sw/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/sw/subgraphs/cookbook/subgraph-composition-three-sources.mdx new file mode 100644 index 000000000000..de6fdd9fd9fb --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/subgraph-composition-three-sources.mdx @@ -0,0 +1,98 @@ +--- +title: Aggregate Data Using Subgraph Composition +sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs +--- + +Optimize your Subgraph by merging data from three independent, source Subgraphs into a single composable Subgraph to enhance data aggregation. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - This feature requires `specVersion` 1.3.0. + +## Overview + +Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates. 
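The "listen for updates and react to changes" idea can be sketched in plain JavaScript. The dispatcher below is purely illustrative (real handlers are AssemblyScript mappings), but the `operation`/`type`/`data` shape mirrors the entity-trigger fields this feature exposes, with `Create`, `Modify`, and `Remove` as the possible operations.

```javascript
// Illustrative dispatcher: a dependent Subgraph reacts to entity changes
// emitted by a source Subgraph instead of fetching onchain data directly.
function reactToEntityChange(trigger) {
  switch (trigger.operation) {
    case 'Create':
      return `index new ${trigger.type} ${trigger.data.id}`
    case 'Modify':
      return `update derived data for ${trigger.type} ${trigger.data.id}`
    case 'Remove':
      return `drop derived data for ${trigger.type} ${trigger.data.id}`
    default:
      throw new Error(`unknown operation: ${trigger.operation}`)
  }
}
```

Each branch stands in for a handler that would update the composed Subgraph's own entities in response to the source entity change.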
+ +## Prerequisites + +To deploy **all** Subgraphs locally, you must have the following: + +- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally +- An [IPFS](https://docs.ipfs.tech/) instance running locally +- [Node.js](https://nodejs.org) and npm + +## Get Started + +The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph. + +### Specifics + +- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts. +- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality. +- Each source Subgraph is optimized with a specific entity. +- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance. + +### Step 1. Deploy Block Time Source Subgraph + +This first source Subgraph calculates the block time for each block. + +- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined. +- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the following commands: + +```bash +npm install +npm run codegen +npm run build +npm run create-local +npm run deploy-local +``` + +### Step 2. Deploy Block Cost Source Subgraph + +This second source Subgraph indexes the cost of each block. + +#### Key Functions + +- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields. +- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly. + +To deploy this Subgraph locally, run the same commands as above. 
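The block-time calculation from Step 1 reduces to a timestamp delta between consecutive blocks. The following is a plain-JavaScript sketch of that idea (illustrative, not the Subgraph's AssemblyScript mapping code):

```javascript
// Illustrative: block time is the difference between consecutive block
// timestamps; a real Subgraph stores a timestamp per block entity and
// derives this value during aggregation.
function blockTimes(timestamps) {
  const deltas = []
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1])
  }
  return deltas
}
```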
+ +### Step 3. Define Block Size in Source Subgraph + +This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above. + +#### Key Functions + +- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size. +- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly. + +### Step 4. Combine Into Block Stats Subgraph + +This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above. + +> Note: +> +> - Any change to a source Subgraph will likely generate a new deployment ID. +> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes. +> - All source Subgraphs should be deployed before the composed Subgraph is deployed. + +#### Key Functions + +- It provides a consolidated data model that encompasses all relevant block metrics. +- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses. + +## Key Takeaways + +- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs. +- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph. +- This feature unlocks scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example). +- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/). 
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations). diff --git a/website/src/pages/sw/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/sw/subgraphs/cookbook/subgraph-composition.mdx new file mode 100644 index 000000000000..17b105edac59 --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/subgraph-composition.mdx @@ -0,0 +1,139 @@ +--- +title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base +sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base +--- + +Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it. + +> Important Reminders: +> +> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/). +> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code. +> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world. + +## Introduction + +Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused subgraphs that collectively form a larger, interconnected dataset. + +### Benefits of Composition + +Subgraph composition is a powerful feature for scaling, allowing you to: + +- Reuse, mix, and combine existing data +- Streamline development and queries +- Use multiple data sources (up to five source Subgraphs) +- Speed up your Subgraph's syncing speed +- Handle errors and optimize the resync + +## Architecture Overview + +The setup for this example involves two Subgraphs: + +1. **Source Subgraph**: Tracks event data as entities. +2. 
**Dependent Subgraph**: Uses the source Subgraph as a data source. + +You can find these in the `source` and `dependent` directories. + +- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts. +- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers. + +While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature. + +### Source Subgraph + +The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`. + +> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml). + +### Dependent Subgraph + +The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities. + +> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml). + +## Get Started + +The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses: + +- Sushiswap v3 Subgraph on Base chain +- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development). + +### Step 1. 
Set Up Your Source Subgraph + +To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`: + +```yaml +specVersion: 1.3.0 +schema: + file: ./schema.graphql +dataSources: + - kind: subgraph + name: Factory + network: base + source: + address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz' + startBlock: 82522 +``` + +Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin. + +### Step 2. Define Handlers in Dependent Subgraph + +Below is an example of defining handlers in the dependent Subgraph: + +```typescript +export function handleInitialize(trigger: EntityTrigger): void { + if (trigger.operation === EntityOp.Create) { + let entity = trigger.data + let poolAddressParam = Address.fromBytes(entity.poolAddress) + + // Update pool sqrt price and tick + let pool = Pool.load(poolAddressParam.toHexString()) as Pool + pool.sqrtPrice = entity.sqrtPriceX96 + pool.tick = BigInt.fromI32(entity.tick) + pool.save() + + // Update token prices + let token0 = Token.load(pool.token0) as Token + let token1 = Token.load(pool.token1) as Token + + // Update ETH price in USD + let bundle = Bundle.load('1') as Bundle + bundle.ethPriceUSD = getEthPriceInUSD() + bundle.save() + + updatePoolDayData(entity) + updatePoolHourData(entity) + + // Update derived ETH price for tokens + token0.derivedETH = findEthPerToken(token0) + token1.derivedETH = findEthPerToken(token1) + token0.save() + token1.save() + } +} +``` + +In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity. + +`EntityTrigger` has three fields: + +1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`. +2. `type`: Indicates the entity type. +3. 
`data`: Contains the entity data. + +Developers can then determine specific actions for the entity data based on the operation type. + +## Key Takeaways + +- Use this powerful tool to quickly scale your Subgraph development and reuse existing data. +- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph. +- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities. + +This approach unlocks composability and scalability, simplifying both development and maintenance efficiency. + +## Additional Resources + +To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph). + +To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example). diff --git a/website/src/pages/sw/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/sw/subgraphs/cookbook/subgraph-debug-forking.mdx new file mode 100644 index 000000000000..91aa7484d2ec --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -0,0 +1,101 @@ +--- +title: Quick and Easy Subgraph Debugging Using Forks +--- + +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! + +## Ok, what is it? 
+ +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). + +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. + +## What?! How? + +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. + +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. + +## Please, show me some code! + +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. 
+ +Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: + +```tsx +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex().toString()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleUpdatedGravatar(event: UpdatedGravatar): void { + let gravatar = Gravatar.load(event.params.id.toI32().toString()) + if (gravatar == null) { + log.critical('Gravatar not found!', []) + return + } + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} +``` + +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. + +The usual way to attempt a fix is: + +1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +3. Wait for it to sync-up. +4. If it breaks again go back to 1, otherwise: Hooray! + +It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ + +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: + +0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. +1. Make a change in the mappings source, which you believe will solve the issue. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. +3. If it breaks again, go back to 1, otherwise: Hooray! + +Now, you may have 2 questions: + +1. fork-base what??? +2. Forking who?! + +And I answer: + +1. 
`fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+    --ipfs 127.0.0.1:5001 \
+    --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5.
I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/sw/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/sw/subgraphs/cookbook/subgraph-uncrashable.mdx new file mode 100644 index 000000000000..a08e2a7ad8c9 --- /dev/null +++ b/website/src/pages/sw/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -0,0 +1,29 @@ +--- +title: Safe Subgraph Code Generator +--- + +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. + +## Why integrate with Subgraph Uncrashable? + +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. + +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. + +- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. + +**Key Features** + +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. + +- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<PATH>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/sw/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/sw/subgraphs/cookbook/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..9a4b037cafbc
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/cookbook/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+
+## Benefits of Switching to The Graph
+
+- Use the same Subgraph that your apps already use with zero-downtime migration.
+- Increase reliability from a global network supported by 100+ Indexers.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
+
+## Upgrade Your Subgraph to The Graph in 3 Easy Steps
+
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1.
Set Up Your Studio Environment + +### Create a Subgraph in Subgraph Studio + +- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". + +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. + +### Install the Graph CLI⁠ + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +On your local machine, run the following command: + +Using [npm](https://www.npmjs.com/): + +```sh +npm install -g @graphprotocol/graph-cli@latest +``` + +Use the following command to create a Subgraph in Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +### Authenticate Your Subgraph + +In The Graph CLI, use the auth command seen in Subgraph Studio: + +```sh +graph auth +``` + +## 2. Deploy Your Subgraph to Studio + +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. + +In The Graph CLI, run the following command: + +```sh +graph deploy --ipfs-hash + +``` + +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). + +## 3. Publish Your Subgraph to The Graph Network + +![publish button](/img/publish-sub-transfer.png) + +### Query Your Subgraph + +> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. 
+ +You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio. + +#### Example + +[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari: + +![Query URL](/img/cryptopunks-screenshot-transfer.png) + +The query URL for this Subgraph is: + +```sh +https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK +``` + +Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint. + +### Getting your own API Key + +You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page: + +![API keys](/img/Api-keys-screenshot.png) + +### Monitor Subgraph Status + +Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/). + +### Additional Resources + +- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/). +- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). 
diff --git a/website/src/pages/sw/subgraphs/developing/_meta-titles.json b/website/src/pages/sw/subgraphs/developing/_meta-titles.json
new file mode 100644
index 000000000000..01a91b09ed77
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/developing/_meta-titles.json
@@ -0,0 +1,6 @@
+{
+  "creating": "Creating",
+  "deploying": "Deploying",
+  "publishing": "Publishing",
+  "managing": "Managing"
+}
diff --git a/website/src/pages/sw/subgraphs/developing/creating/_meta-titles.json b/website/src/pages/sw/subgraphs/developing/creating/_meta-titles.json
new file mode 100644
index 000000000000..6106ac328dc1
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/developing/creating/_meta-titles.json
@@ -0,0 +1,3 @@
+{
+  "graph-ts": "AssemblyScript API"
+}
diff --git a/website/src/pages/sw/subgraphs/developing/creating/advanced.mdx b/website/src/pages/sw/subgraphs/developing/creating/advanced.mdx
new file mode 100644
index 000000000000..8dbc48253034
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/developing/creating/advanced.mdx
@@ -0,0 +1,563 @@
+---
+title: Advanced Subgraph Features
+---
+
+## Overview
+
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
+
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+
+| Feature                                              | Name             |
+| ---------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors)                | `nonFatalErrors` |
+| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs)        | `grafting`       |
+
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+
+```yaml
+specVersion: 1.3.0
+description: Gravatar for Ethereum
+features:
+  - fullTextSearch
+  - nonFatalErrors
+dataSources: ...
+``` + +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. + +## Timeseries and Aggregations + +Prerequisites: + +- Subgraph specVersion must be ≥1.1.0. + +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. + +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. + +### Example Schema + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} + +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +### How to Define Timeseries and Aggregations + +Timeseries entities are defined with `@entity(timeseries: true)` in the GraphQL schema. Every timeseries entity must: + +- have a unique ID of the int8 type +- have a timestamp of the Timestamp type +- include data that will be used for calculation by aggregation entities. + +These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the aggregation entities. + +Aggregation entities are defined with `@aggregation` in the GraphQL schema. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). + +Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval. 
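The computation an hourly `sum` aggregation performs can be sketched in plain JavaScript. This is only a conceptual illustration of what graph-node does automatically for the `Stats` aggregation over the `Data` timeseries above; the function itself is not part of any Subgraph API.

```javascript
// Illustrative: bucket timeseries points into hour-aligned intervals and
// sum the `price` field, mirroring an hourly "sum" aggregation.
function hourlySum(points) {
  const buckets = new Map()
  for (const { timestamp, price } of points) {
    const hour = Math.floor(timestamp / 3600) * 3600 // start of the hour
    buckets.set(hour, (buckets.get(hour) ?? 0) + price)
  }
  return [...buckets.entries()].map(([timestamp, sum]) => ({ timestamp, sum }))
}
```

A daily interval works the same way with 86400-second buckets; the other functions (`count`, `min`, `max`, `first`, `last`) differ only in how each bucket's values are combined.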
+ +#### Available Aggregation Intervals + +- `hour`: sets the timeseries period every hour, on the hour. +- `day`: sets the timeseries period every day, starting and ending at 00:00. + +#### Available Aggregation Functions + +- `sum`: Total of all values. +- `count`: Number of values. +- `min`: Minimum value. +- `max`: Maximum value. +- `first`: First value in the period. +- `last`: Last value in the period. + +#### Example Aggregations Query + +```graphql +{ + stats(interval: "hour", where: { timestamp_gt: 1704085200 }) { + id + timestamp + sum + } +} +``` + +[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations. + +## Non-fatal errors + +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. + +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. + +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: + +```yaml +specVersion: 1.3.0 +description: Gravatar for Ethereum +features: + - nonFatalErrors + ... +``` + +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. 
It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:
+
+```graphql
+{
+  foos(first: 100, subgraphError: allow) {
+    id
+  }
+
+  _meta {
+    hasIndexingErrors
+  }
+}
+```
+
+If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:
+
+```json
+{
+  "data": {
+    "foos": [
+      {
+        "id": "0xdead"
+      }
+    ],
+    "_meta": {
+      "hasIndexingErrors": true
+    }
+  },
+  "errors": [
+    {
+      "message": "indexing_error"
+    }
+  ]
+}
+```
+
+## IPFS/Arweave File Data Sources
+
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+
+> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.
+
+### Overview
+
+Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, and run a dedicated handler when the file is found.
+
+This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources.
+
+> This replaces the existing `ipfs.cat` API.
+
+### Upgrade guide
+
+#### Update `graph-ts` and `graph-cli`
+
+File data sources require `graph-ts` >=0.29.0 and `graph-cli` >=0.33.1.
+
+#### Add a new entity type which will be updated when files are found
+
+File data sources cannot access or update chain-based entities, but must update file-specific entities.
+
+This may mean splitting out fields from existing entities into separate entities, linked together.
+
+Original combined entity:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenID: BigInt!
+  tokenURI: String!
+  externalURL: String!
+  ipfsURI: String!
+  image: String!
+  name: String!
+  description: String!
+  type: String!
+  updatedAtTimestamp: BigInt
+  owner: User!
+}
+```
+
+New, split entity:
+
+```graphql
+type Token @entity {
+  id: ID!
+  tokenID: BigInt!
+  tokenURI: String!
+  ipfsURI: TokenMetadata
+  updatedAtTimestamp: BigInt
+  owner: String!
+}
+
+type TokenMetadata @entity {
+  id: ID!
+  image: String!
+  externalURL: String!
+  name: String!
+  description: String!
+}
+```
+
+If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities!
+
+> You can use [nested filters](/subgraphs/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities.
+
+#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave`
+
+This is the data source which will be spawned when a file of interest is identified.
+
+```yaml
+templates:
+  - name: TokenMetadata
+    kind: file/ipfs
+    mapping:
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/mapping.ts
+      handler: handleMetadata
+      entities:
+        - TokenMetadata
+      abis:
+        - name: Token
+          file: ./abis/Token.json
+```
+
+> Currently `abis` are required, though it is not possible to call contracts from within file data sources.
+
+The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details.
+
+#### Create a new handler to process files
+
+This handler should accept one `Bytes` parameter, which will be the contents of the file when it is found; the handler can then process the file.
This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/subgraphs/developing/creating/graph-ts/api/#json-api)). + +The CID of the file as a readable string can be accessed via the `dataSource` as follows: + +```typescript +const cid = dataSource.stringParam() +``` + +Example handler: + +```typescript +import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' +import { TokenMetadata } from '../generated/schema' + +export function handleMetadata(content: Bytes): void { + let tokenMetadata = new TokenMetadata(dataSource.stringParam()) + const value = json.fromBytes(content).toObject() + if (value) { + const image = value.get('image') + const name = value.get('name') + const description = value.get('description') + const externalURL = value.get('external_url') + + if (name && image && description && externalURL) { + tokenMetadata.name = name.toString() + tokenMetadata.image = image.toString() + tokenMetadata.externalURL = externalURL.toString() + tokenMetadata.description = description.toString() + } + + tokenMetadata.save() + } +} +``` + +#### Spawn file data sources when required + +You can now create file data sources during execution of chain-based handlers: + +- Import the template from the auto-generated `templates` +- call `TemplateName.create(cid: string)` from within a mapping, where the cid is a valid content identifier for IPFS or Arweave + +For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`). 
+ +For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing). + +Example: + +```typescript +import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' + +const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. + +export function handleTransfer(event: TransferEvent): void { + let token = Token.load(event.params.tokenId.toString()) + if (!token) { + token = new Token(event.params.tokenId.toString()) + token.tokenID = event.params.tokenId + + token.tokenURI = '/' + event.params.tokenId.toString() + '.json' + const tokenIpfsHash = ipfshash + token.tokenURI + //This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json" + + token.ipfsURI = tokenIpfsHash + + TokenMetadataTemplate.create(tokenIpfsHash) + } + + token.updatedAtTimestamp = event.block.timestamp + token.owner = event.params.to.toHexString() + token.save() +} +``` + +This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed. + +This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. 
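
Conceptually, the lookup works because the string passed to `TokenMetadataTemplate.create()` becomes the ID of the resulting `TokenMetadata` entity, while the parent `Token` stores that same string in `ipfsURI`. The following plain-TypeScript sketch illustrates that join — the names and the metadata value are illustrative, not graph-node code:

```typescript
// Sketch of the CID-based lookup between a parent Token and its metadata.
// The directory CID is the one from the example above.
const directoryCid = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'

function metadataPath(tokenId: number): string {
  // Same concatenation as the handler above: directory CID + "/" + filename + ".json"
  return directoryCid + '/' + tokenId.toString() + '.json'
}

// The parent Token stores the path string…
const token = { id: '42', ipfsURI: metadataPath(42) }

// …and the file-based entity is saved under the identical string,
// so the two can be joined on that key at query time.
const metadataStore = new Map<string, { name: string }>()
metadataStore.set(metadataPath(42), { name: 'WITCH #42' }) // illustrative value

console.log(metadataStore.get(token.ipfsURI)?.name)
```

If the two strings ever diverge (for example, a different path format is stored on the parent), the join silently returns nothing — which is why reusing one helper for both sides is a good habit.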
+
+> Previously, this was the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file.
+
+Congratulations, you are using file data sources!
+
+#### Deploying your Subgraphs
+
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.
+
+#### Limitations
+
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
+
+- Entities created by File Data Sources are immutable, and cannot be updated
+- File Data Source handlers cannot access entities from other file data sources
+- Entities associated with File Data Sources cannot be accessed by chain-based handlers
+
+> While this constraint should not be problematic for most use cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!
+
+Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.
+
+#### Best practices
+
+If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID.
+
+You can use [DataSource context](/subgraphs/developing/creating/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
+
+If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.
+
+> We are working to improve the above recommendation, so queries only return the "most recent" version.
+
+#### Known issues
+
+File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). The workaround is to add any ABI.
+
+Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). The workaround is to create file data source handlers in a dedicated file.
+
+#### Examples
+
+[Crypto Coven Subgraph migration](https://github.com/azf20/cryptocoven-api/tree/file-data-sources-refactor)
+
+#### References
+
+[GIP File Data Sources](https://forum.thegraph.com/t/gip-file-data-sources/2721)
+
+## Indexed Argument Filters / Topic Filters
+
+> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
+
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
+
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+
+### How Topic Filters Work
+
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
+
+- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
+ +```solidity +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.0; + +contract Token { + // Event declaration with indexed parameters for addresses + event Transfer(address indexed from, address indexed to, uint256 value); + + // Function to simulate transferring tokens + function transfer(address to, uint256 value) public { + // Emitting the Transfer event with from, to, and value + emit Transfer(msg.sender, to, value); + } +} +``` + +In this example: + +- The `Transfer` event is used to log transactions of tokens between addresses. +- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses. +- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called. + +#### Configuration in Subgraphs + +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: + +```yaml +eventHandlers: + - event: SomeEvent(indexed uint256, indexed address, indexed uint256) + handler: handleSomeEvent + topic1: ['0xValue1', '0xValue2'] + topic2: ['0xAddress1', '0xAddress2'] + topic3: ['0xValue3'] +``` + +In this setup: + +- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third. +- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic. + +#### Filter Logic + +- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic. +- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler. 
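
That filter logic can be expressed as a small self-contained TypeScript sketch — an illustration of the semantics, not graph-node's actual matching code:

```typescript
// Sketch of topic-filter matching: within one topic the listed values
// are OR-ed; across topics the conditions are AND-ed.
type TopicFilter = { [topic: string]: string[] }

function eventMatches(eventTopics: { [topic: string]: string }, filter: TopicFilter): boolean {
  for (const [topic, allowed] of Object.entries(filter)) {
    // AND across topics: every filtered topic must match…
    // …and OR within a topic: any one of the listed values is enough.
    if (!allowed.includes(eventTopics[topic])) return false
  }
  return true
}

const filter: TopicFilter = {
  topic1: ['0xAddressA'], // sender must be A
  topic2: ['0xAddressB', '0xAddressC'], // receiver must be B or C
}

console.log(eventMatches({ topic1: '0xAddressA', topic2: '0xAddressB' }, filter)) // true
console.log(eventMatches({ topic1: '0xAddressA', topic2: '0xAddressD' }, filter)) // false (topic2 fails)
console.log(eventMatches({ topic1: '0xAddressB', topic2: '0xAddressB' }, filter)) // false (topic1 fails)
```

Topics left out of the filter (here, `topic3`) are unconstrained and match any value.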
+
+#### Example 1: Tracking Direct Transfers from Address A to Address B
+
+```yaml
+eventHandlers:
+  - event: Transfer(indexed address,indexed address,uint256)
+    handler: handleDirectedTransfer
+    topic1: ['0xAddressA'] # Sender Address
+    topic2: ['0xAddressB'] # Receiver Address
+```
+
+In this configuration:
+
+- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
+- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+
+#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
+
+```yaml
+eventHandlers:
+  - event: Transfer(indexed address,indexed address,uint256)
+    handler: handleTransferToOrFrom
+    topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address
+    topic2: ['0xAddressB', '0xAddressC'] # Receiver Address
+```
+
+In this configuration:
+
+- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, or `0xAddressC` is the sender.
+- `topic2` is configured to filter `Transfer` events where `0xAddressB` or `0xAddressC` is the receiver.
+- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.
+
+## Declared eth_call
+
+> Note: This is an experimental feature that is not yet available in a stable Graph Node release. You can only use it in Subgraph Studio or on your self-hosted node.
+
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+
+This feature does the following:
+
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
+- Allows faster data fetching, resulting in quicker query responses and a better user experience. +- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. + +### Key Concepts + +- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. +- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. +- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). + +#### Scenario without Declarative `eth_calls` + +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. + +Traditionally, these calls might be made sequentially: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Total time taken = 3 + 2 + 4 = 9 seconds + +#### Scenario with Declarative `eth_calls` + +With this feature, you can declare these calls to be executed in parallel: + +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds + +Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. + +Total time taken = max (3, 2, 4) = 4 seconds + +#### How it Works + +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. 
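
The timing arithmetic above — sum of delays when sequential, roughly the longest delay when parallel — can be illustrated with ordinary promises. This is a sketch of the principle only, not of graph-node's execution engine (delays scaled down to milliseconds):

```typescript
// Sequential awaits cost roughly the *sum* of the delays;
// Promise.all costs roughly the *max* — the 9s vs 4s scenario above.
const delay = (ms: number) => new Promise<number>((resolve) => setTimeout(() => resolve(ms), ms))

async function sequential(): Promise<number> {
  const start = Date.now()
  await delay(30) // "transactions"
  await delay(20) // "balance"
  await delay(40) // "token holdings"
  return Date.now() - start // ≈ 30 + 20 + 40 = 90 ms
}

async function parallel(): Promise<number> {
  const start = Date.now()
  // All three calls are started at once, mirroring declared eth_calls.
  await Promise.all([delay(30), delay(20), delay(40)])
  return Date.now() - start // ≈ max(30, 20, 40) = 40 ms
}

async function main() {
  console.log('sequential ms:', await sequential())
  console.log('parallel ms:', await parallel())
}
main()
```

Note that the benefit only applies to calls that are independent of each other; a call whose arguments depend on another call's result cannot be declared this way.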
+
+#### Example Configuration in Subgraph Manifest
+
+Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
+
+`subgraph.yaml` using `event.address`:
+
+```yaml
+eventHandlers:
+  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
+    handler: handleSwap
+    calls:
+      global0X128: Pool[event.address].feeGrowthGlobal0X128()
+      global1X128: Pool[event.address].feeGrowthGlobal1X128()
+```
+
+Details for the example above:
+
+- `global0X128` is the declared `eth_call`.
+- The text (`global0X128`) is the label for this `eth_call`, which is used when logging errors.
+- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`.
+- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
+
+`subgraph.yaml` using `event.params`:
+
+```yaml
+calls:
+  - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+```
+
+### Grafting onto Existing Subgraphs
+
+> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
+
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed.
+
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top level:
+
+```yaml
+description: ...
+graft: + base: Qm... # Subgraph ID of base Subgraph + block: 7345624 # Block number +``` + +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. + +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. + +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: + +- It adds or removes entity types +- It removes attributes from entity types +- It adds nullable attributes to entity types +- It turns non-nullable attributes into nullable attributes +- It adds values to enums +- It adds or removes interfaces +- It changes for which entity types an interface is implemented + +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. 
diff --git a/website/src/pages/sw/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/sw/subgraphs/developing/creating/assemblyscript-mappings.mdx new file mode 100644 index 000000000000..cd81dc118f28 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -0,0 +1,113 @@ +--- +title: Writing AssemblyScript Mappings +--- + +## Overview + +The mappings take data from a particular source and transform it into entities that are defined within your schema. Mappings are written in a subset of [TypeScript](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html) called [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) which can be compiled to WASM ([WebAssembly](https://webassembly.org/)). AssemblyScript is stricter than normal TypeScript, yet provides a familiar syntax. + +## Writing Mappings + +For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. 
+
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+
+```javascript
+import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
+import { Gravatar } from '../generated/schema'
+
+export function handleNewGravatar(event: NewGravatar): void {
+  let gravatar = new Gravatar(event.params.id)
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+  let id = event.params.id
+  let gravatar = Gravatar.load(id)
+  if (gravatar == null) {
+    gravatar = new Gravatar(id)
+  }
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+
+The first handler takes a `NewGravatar` event and creates a new `Gravatar` entity with `new Gravatar(event.params.id)`, populating the entity fields using the corresponding event parameters. This entity instance is represented by the variable `gravatar`, with an id value of `event.params.id`.
+
+The second handler tries to load the existing `Gravatar` from the Graph Node store. If it does not exist yet, it is created on demand. The entity is then updated to match the new event parameters before it is saved back to the store using `gravatar.save()`.
+
+### Recommended IDs for Creating New Entities
+
+It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities.
+
+- `transfer.id = event.transaction.hash`
+
+- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())`
+
+- For entities that store aggregated data, e.g., daily trade volumes, the `id` usually contains the day number.
Here, using a `Bytes` as the `id` is beneficial. Determining the `id` would look like:
+
+```typescript
+let dayID = event.block.timestamp.toI32() / 86400
+let id = Bytes.fromI32(dayID)
+```
+
+- Convert constant addresses to `Bytes`.
+
+`const id = Bytes.fromHexString('0xdead...beef')`
+
+There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`.
+
+### Handling of entities with identical IDs
+
+When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity.
+
+If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value.
+
+If no value is set for a field in the new entity with the same ID, the field will result in null as well.
+
+## Code Generation
+
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
+
+This is done with
+
+```sh
+graph codegen [--output-dir <OUTPUT_DIR>] [<MANIFEST>]
+```
+
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+
+```sh
+# Yarn
+yarn codegen
+
+# NPM
+npm run codegen
+```
+
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed.
It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `<OUTPUT_DIR>/<DATA_SOURCE_NAME>/<ABI_NAME>.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:
+
+```javascript
+import {
+  // The contract class:
+  Gravity,
+  // The events classes:
+  NewGravatar,
+  UpdatedGravatar,
+} from '../generated/Gravity/Gravity'
+```
+
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields, as well as a `save()` method to write entities to the store. All entity classes are written to `<OUTPUT_DIR>/schema.ts`, allowing mappings to import them with:
+
+```javascript
+import { Gravatar } from '../generated/schema'
+```
+
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
+
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/sw/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/CHANGELOG.md
new file mode 100644
index 000000000000..5f964d3cbb78
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -0,0 +1,107 @@
+# @graphprotocol/graph-ts
+
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)!
- feat: add yaml parsing support to mappings + +## 0.37.0 + +### Minor Changes + +- [#1843](https://github.com/graphprotocol/graph-tooling/pull/1843) + [`c09b56b`](https://github.com/graphprotocol/graph-tooling/commit/c09b56b093f23c80aa5d217b2fd56fccac061145) + Thanks [@YaroShkvorets](https://github.com/YaroShkvorets)! - Update all dependencies + +## 0.36.0 + +### Minor Changes + +- [#1754](https://github.com/graphprotocol/graph-tooling/pull/1754) + [`2050bf6`](https://github.com/graphprotocol/graph-tooling/commit/2050bf6259c19bd86a7446410c7e124dfaddf4cd) + Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for subgraph datasource and + associated types. + +## 0.35.1 + +### Patch Changes + +- [#1637](https://github.com/graphprotocol/graph-tooling/pull/1637) + [`f0c583f`](https://github.com/graphprotocol/graph-tooling/commit/f0c583f00c90e917d87b707b5b7a892ad0da916f) + Thanks [@incrypto32](https://github.com/incrypto32)! - Update return type for ethereum.hasCode + +## 0.35.0 + +### Minor Changes + +- [#1609](https://github.com/graphprotocol/graph-tooling/pull/1609) + [`e299f6c`](https://github.com/graphprotocol/graph-tooling/commit/e299f6ce5cf1ad74cab993f6df3feb7ca9993254) + Thanks [@incrypto32](https://github.com/incrypto32)! - Add support for eth.hasCode method + +## 0.34.0 + +### Minor Changes + +- [#1522](https://github.com/graphprotocol/graph-tooling/pull/1522) + [`d132f9c`](https://github.com/graphprotocol/graph-tooling/commit/d132f9c9f6ea5283e40a8d913f3abefe5a8ad5f8) + Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL + `Timestamp` scalar as `i64` (AssemblyScript) + +## 0.33.0 + +### Minor Changes + +- [#1584](https://github.com/graphprotocol/graph-tooling/pull/1584) + [`0075f06`](https://github.com/graphprotocol/graph-tooling/commit/0075f06ddaa6d37606e42e1c12d11d19674d00ad) + Thanks [@incrypto32](https://github.com/incrypto32)! 
- Added getBalance call to ethereum API + +## 0.32.0 + +### Minor Changes + +- [#1523](https://github.com/graphprotocol/graph-tooling/pull/1523) + [`167696e`](https://github.com/graphprotocol/graph-tooling/commit/167696eb611db0da27a6cf92a7390e72c74672ca) + Thanks [@xJonathanLEI](https://github.com/xJonathanLEI)! - add starknet data types + +## 0.31.0 + +### Minor Changes + +- [#1340](https://github.com/graphprotocol/graph-tooling/pull/1340) + [`2375877`](https://github.com/graphprotocol/graph-tooling/commit/23758774b33b5b7c6934f57a3e137870205ca6f0) + Thanks [@incrypto32](https://github.com/incrypto32)! - export `loadRelated` host function + +- [#1296](https://github.com/graphprotocol/graph-tooling/pull/1296) + [`dab4ca1`](https://github.com/graphprotocol/graph-tooling/commit/dab4ca1f5df7dcd0928bbaa20304f41d23b20ced) + Thanks [@dotansimha](https://github.com/dotansimha)! - Added support for handling GraphQL `Int8` + scalar as `i64` (AssemblyScript) + +## 0.30.0 + +### Minor Changes + +- [#1299](https://github.com/graphprotocol/graph-tooling/pull/1299) + [`3f8b514`](https://github.com/graphprotocol/graph-tooling/commit/3f8b51440db281e69879be7d91d79cd43e45fe86) + Thanks [@saihaj](https://github.com/saihaj)! - introduce new Etherum utility to get a CREATE2 + Address + +- [#1306](https://github.com/graphprotocol/graph-tooling/pull/1306) + [`f5e4b58`](https://github.com/graphprotocol/graph-tooling/commit/f5e4b58989edc5f3bb8211f1b912449e77832de8) + Thanks [@saihaj](https://github.com/saihaj)! - expose Host's `get_in_block` function + +## 0.29.3 + +### Patch Changes + +- [#1057](https://github.com/graphprotocol/graph-tooling/pull/1057) + [`b7a2ec3`](https://github.com/graphprotocol/graph-tooling/commit/b7a2ec3e9e2206142236f892e2314118d410ac93) + Thanks [@saihaj](https://github.com/saihaj)! 
- fix publihsed contents + +## 0.29.2 + +### Patch Changes + +- [#1044](https://github.com/graphprotocol/graph-tooling/pull/1044) + [`8367f90`](https://github.com/graphprotocol/graph-tooling/commit/8367f90167172181870c1a7fe5b3e84d2c5aeb2c) + Thanks [@saihaj](https://github.com/saihaj)! - publish readme with packages diff --git a/website/src/pages/sw/subgraphs/developing/creating/graph-ts/README.md b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/README.md new file mode 100644 index 000000000000..b6771a8305e5 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/README.md @@ -0,0 +1,85 @@ +# The Graph TypeScript Library (graph-ts) + +[![npm (scoped)](https://img.shields.io/npm/v/@graphprotocol/graph-ts.svg)](https://www.npmjs.com/package/@graphprotocol/graph-ts) +[![Build Status](https://travis-ci.org/graphprotocol/graph-ts.svg?branch=master)](https://travis-ci.org/graphprotocol/graph-ts) + +TypeScript/AssemblyScript library for writing subgraph mappings to be deployed to +[The Graph](https://github.com/graphprotocol/graph-node). + +## Usage + +For a detailed guide on how to create a subgraph, please see the +[Graph CLI docs](https://github.com/graphprotocol/graph-cli). + +One step of creating the subgraph is writing mappings that will process blockchain events and will +write entities into the store. These mappings are written in TypeScript/AssemblyScript. + +The `graph-ts` library provides APIs to access the Graph Node store, blockchain data, smart +contracts, data on IPFS, cryptographic functions and more. To use it, all you have to do is add a +dependency on it: + +```sh +npm install --dev @graphprotocol/graph-ts # NPM +yarn add --dev @graphprotocol/graph-ts # Yarn +``` + +After that, you can import the `store` API and other features from this library in your mappings. 
A
+few examples:
+
+```typescript
+import { crypto, store } from '@graphprotocol/graph-ts'
+// This is just an example event type generated by `graph-cli`
+// from an Ethereum smart contract ABI
+import { NameRegistered } from './types/abis/SomeContract'
+// This is an example of an entity type generated from a
+// subgraph's GraphQL schema
+import { Domain } from './types/schema'
+
+function handleNameRegistered(event: NameRegistered) {
+  // Example use of a crypto function
+  let name = event.params.name
+  let id = crypto.keccak256(name).toHexString()
+
+  // Example use of the generated `Domain` entity class
+  let domain = new Domain(id)
+  domain.name = name.toString()
+  domain.owner = event.params.owner
+  domain.timeRegistered = event.block.timestamp
+
+  // Example use of the store API
+  store.set('Domain', id, domain)
+}
+```
+
+## Helper Functions for AssemblyScript
+
+Refer to the `helper-functions.ts` file in
+[this](https://github.com/graphprotocol/graph-tooling/blob/main/packages/ts/helper-functions.ts)
+repository for a few common functions that help build on top of the AssemblyScript library, such as
+byte array concatenation, among others.
+
+## API
+
+Documentation on the API can be found
+[here](https://thegraph.com/docs/en/developer/assemblyscript-api/).
+
+For examples of `graph-ts` in use, take a look at one of the following subgraphs:
+
+- https://github.com/graphprotocol/ens-subgraph
+- https://github.com/graphprotocol/decentraland-subgraph
+- https://github.com/graphprotocol/adchain-subgraph
+- https://github.com/graphprotocol/0x-subgraph
+- https://github.com/graphprotocol/aragon-subgraph
+- https://github.com/graphprotocol/dharma-subgraph
+
+## License
+
+Copyright © 2018 Graph Protocol, Inc. and contributors.
+
+The Graph TypeScript library is dual-licensed under the
+[MIT license](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-MIT) and the
+[Apache License, Version 2.0](https://github.com/graphprotocol/graph-tooling/blob/main/LICENSE-APACHE).
+ +Unless required by applicable law or agreed to in writing, software distributed under the License is +distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or +implied. See the License for the specific language governing permissions and limitations under the +License. diff --git a/website/src/pages/sw/subgraphs/developing/creating/graph-ts/_meta-titles.json b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/_meta-titles.json new file mode 100644 index 000000000000..7580246e94fd --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/_meta-titles.json @@ -0,0 +1,5 @@ +{ + "README": "Introduction", + "api": "API Reference", + "common-issues": "Common Issues" +} diff --git a/website/src/pages/sw/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/api.mdx new file mode 100644 index 000000000000..2e256ae18190 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/api.mdx @@ -0,0 +1,890 @@ +--- +title: AssemblyScript API +--- + +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). + +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: + +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from Subgraph files by `graph codegen` + +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
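As a quick orientation, both kinds of APIs typically enter a project like this. This is a sketch only: it assumes `yarn` and `@graphprotocol/graph-cli` are available, and that a `subgraph.yaml` manifest, schema and ABIs already exist in the project.

```sh
# Add the Graph TypeScript library as a dev dependency
yarn add --dev @graphprotocol/graph-ts

# Generate AssemblyScript classes from the Subgraph manifest,
# GraphQL schema and contract ABIs (provided by the Graph CLI)
graph codegen
```

Generated code typically lands in a `generated/` directory, so mappings import from both `@graphprotocol/graph-ts` and that directory.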
+ +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). + +## API Reference + +The `@graphprotocol/graph-ts` library provides the following APIs: + +- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values. +- A `store` API to load and save entities from and to the Graph Node store. +- A `log` API to log messages to the Graph Node output and Graph Explorer. +- An `ipfs` API to load files from IPFS. +- A `json` API to parse JSON data. +- A `crypto` API to use cryptographic functions. +- Low-level primitives to translate between different type systems such as Ethereum, JSON, GraphQL and AssemblyScript. + +### Versions + +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. + +| Version | Release notes | +| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`ethereum.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | + +### Built-in Types + +Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html). + +The following additional types are provided by `@graphprotocol/graph-ts`. + +#### ByteArray + +```typescript +import { ByteArray } from '@graphprotocol/graph-ts' +``` + +`ByteArray` represents an array of `u8`. + +_Construction_ + +- `fromI32(x: i32): ByteArray` - Decomposes `x` into bytes. +- `fromHexString(hex: string): ByteArray` - Input length must be even. Prefixing with `0x` is optional. + +_Type conversions_ + +- `toHexString(): string` - Converts to a hex string prefixed with `0x`. +- `toString(): string` - Interprets the bytes as a UTF-8 string. +- `toBase58(): string` - Encodes the bytes into a base58 string. +- `toU32(): u32` - Interprets the bytes as a little-endian `u32`. Throws in case of overflow. +- `toI32(): i32` - Interprets the byte array as a little-endian `i32`. Throws in case of overflow. + +_Operators_ + +- `equals(y: ByteArray): bool` – can be written as `x == y`. +- `concat(other: ByteArray) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by `other` +- `concatI32(other: i32) : ByteArray` - return a new `ByteArray` consisting of `this` directly followed by the byte representation of `other` + +#### BigDecimal + +```typescript +import { BigDecimal } from '@graphprotocol/graph-ts' +``` + +`BigDecimal` is used to represent arbitrary precision decimals. + +> Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. 
This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent.
+
+_Construction_
+
+- `constructor(bigInt: BigInt)` – creates a `BigDecimal` from a `BigInt`.
+- `static fromString(s: string): BigDecimal` – parses from a decimal string.
+
+_Type conversions_
+
+- `toString(): string` – prints to a decimal string.
+
+_Math_
+
+- `plus(y: BigDecimal): BigDecimal` – can be written as `x + y`.
+- `minus(y: BigDecimal): BigDecimal` – can be written as `x - y`.
+- `times(y: BigDecimal): BigDecimal` – can be written as `x * y`.
+- `div(y: BigDecimal): BigDecimal` – can be written as `x / y`.
+- `equals(y: BigDecimal): bool` – can be written as `x == y`.
+- `notEqual(y: BigDecimal): bool` – can be written as `x != y`.
+- `lt(y: BigDecimal): bool` – can be written as `x < y`.
+- `le(y: BigDecimal): bool` – can be written as `x <= y`.
+- `gt(y: BigDecimal): bool` – can be written as `x > y`.
+- `ge(y: BigDecimal): bool` – can be written as `x >= y`.
+- `neg(): BigDecimal` - can be written as `-x`.
+
+#### BigInt
+
+```typescript
+import { BigInt } from '@graphprotocol/graph-ts'
+```
+
+`BigInt` is used to represent big integers. This includes Ethereum values of type `uint32` to `uint256` and `int64` to `int256`. Everything below `uint32`, such as `int32`, `uint24` or `int8`, is represented as `i32`.
+
+The `BigInt` class has the following API:
+
+_Construction_
+
+- `BigInt.fromI32(x: i32): BigInt` – creates a `BigInt` from an `i32`.
+
+- `BigInt.fromString(s: string): BigInt` – parses a `BigInt` from a string.
+
+- `BigInt.fromUnsignedBytes(x: Bytes): BigInt` – Interprets `bytes` as an unsigned, little-endian integer. If your input is big-endian, call `.reverse()` first.
+
+- `BigInt.fromSignedBytes(x: Bytes): BigInt` – Interprets `bytes` as a signed, little-endian integer.
If your input is big-endian, call `.reverse()` first. + + _Type conversions_ + +- `x.toHex(): string` – turns `BigInt` into a string of hexadecimal characters. + +- `x.toString(): string` – turns `BigInt` into a decimal number string. + +- `x.toI32(): i32` – returns the `BigInt` as an `i32`; fails if the value does not fit into `i32`. It's a good idea to first check `x.isI32()`. + +- `x.toBigDecimal(): BigDecimal` - converts into a decimal with no fractional part. + +_Math_ + +- `x.plus(y: BigInt): BigInt` – can be written as `x + y`. +- `x.minus(y: BigInt): BigInt` – can be written as `x - y`. +- `x.times(y: BigInt): BigInt` – can be written as `x * y`. +- `x.div(y: BigInt): BigInt` – can be written as `x / y`. +- `x.mod(y: BigInt): BigInt` – can be written as `x % y`. +- `x.equals(y: BigInt): bool` – can be written as `x == y`. +- `x.notEqual(y: BigInt): bool` – can be written as `x != y`. +- `x.lt(y: BigInt): bool` – can be written as `x < y`. +- `x.le(y: BigInt): bool` – can be written as `x <= y`. +- `x.gt(y: BigInt): bool` – can be written as `x > y`. +- `x.ge(y: BigInt): bool` – can be written as `x >= y`. +- `x.neg(): BigInt` – can be written as `-x`. +- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result. +- `x.isZero(): bool` – Convenience for checking if the number is zero. +- `x.isI32(): bool` – Check if the number fits in an `i32`. +- `x.abs(): BigInt` – Absolute value. +- `x.pow(exp: u8): BigInt` – Exponentiation. +- `bitOr(x: BigInt, y: BigInt): BigInt` – can be written as `x | y`. +- `bitAnd(x: BigInt, y: BigInt): BigInt` – can be written as `x & y`. +- `leftShift(x: BigInt, bits: u8): BigInt` – can be written as `x << y`. +- `rightShift(x: BigInt, bits: u8): BigInt` – can be written as `x >> y`. + +#### TypedMap + +```typescript +import { TypedMap } from '@graphprotocol/graph-ts' +``` + +`TypedMap` can be used to store key-value pairs. 
See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). + +The `TypedMap` class has the following API: + +- `new TypedMap()` – creates an empty map with keys of type `K` and values of type `V` +- `map.set(key: K, value: V): void` – sets the value of `key` to `value` +- `map.getEntry(key: K): TypedMapEntry | null` – returns the key-value pair for a `key` or `null` if the `key` does not exist in the map +- `map.get(key: K): V | null` – returns the value for a `key` or `null` if the `key` does not exist in the map +- `map.isSet(key: K): bool` – returns `true` if the `key` exists in the map and `false` if it does not + +#### Bytes + +```typescript +import { Bytes } from '@graphprotocol/graph-ts' +``` + +`Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32`, etc. + +The `Bytes` class extends AssemblyScript's [Uint8Array](https://github.com/AssemblyScript/assemblyscript/blob/3b1852bc376ae799d9ebca888e6413afac7b572f/std/assembly/typedarray.ts#L64) and this supports all the `Uint8Array` functionality, plus the following new methods: + +_Construction_ + +- `fromHexString(hex: string) : Bytes` - Convert the string `hex` which must consist of an even number of hexadecimal digits to a `ByteArray`. 
The string `hex` can optionally start with `0x`
+- `fromI32(i: i32) : Bytes` - Convert `i` to an array of bytes
+
+_Type conversions_
+
+- `b.toHex()` – returns a hexadecimal string representing the bytes in the array
+- `b.toString()` – converts the bytes in the array to a string of unicode characters
+- `b.toBase58()` – turns an Ethereum Bytes value to base58 encoding (used for IPFS hashes)
+
+_Operators_
+
+- `b.concat(other: Bytes) : Bytes` - returns a new `Bytes` consisting of `this` directly followed by `other`
+- `b.concatI32(other: i32) : ByteArray` - returns a new `Bytes` consisting of `this` directly followed by the byte representation of `other`
+
+#### Address
+
+```typescript
+import { Address } from '@graphprotocol/graph-ts'
+```
+
+`Address` extends `Bytes` to represent Ethereum `address` values.
+
+It adds the following methods on top of the `Bytes` API:
+
+- `Address.fromString(s: string): Address` – creates an `Address` from a hexadecimal string
+- `Address.fromBytes(b: Bytes): Address` – creates an `Address` from `b`, which must be exactly 20 bytes long. Passing in a value with fewer or more bytes will result in an error
+
+### Store API
+
+```typescript
+import { store } from '@graphprotocol/graph-ts'
+```
+
+The `store` API allows you to load, save and remove entities from and to the Graph Node store.
+
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+
+#### Creating entities
+
+The following is a common pattern for creating entities from Ethereum events.
+ +```typescript +// Import the Transfer event class generated from the ERC20 ABI +import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' + +// Import the Transfer entity type generated from the GraphQL schema +import { Transfer } from '../generated/schema' + +// Transfer event handler +export function handleTransfer(event: TransferEvent): void { + // Create a Transfer entity, using the transaction hash as the entity ID + let id = event.transaction.hash + let transfer = new Transfer(id) + + // Set properties on the entity, using the event parameters + transfer.from = event.params.from + transfer.to = event.params.to + transfer.amount = event.params.amount + + // Save the entity to the store + transfer.save() +} +``` + +When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. + +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. + +#### Loading entities from the store + +If an entity already exists, it can be loaded from the store with the following: + +```typescript +let id = event.transaction.hash // or however the ID is constructed +let transfer = Transfer.load(id) +if (transfer == null) { + transfer = new Transfer(id) +} + +// Use the Transfer entity as before +``` + +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. 
+
+> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities.
+
+#### Looking up entities created within a block
+
+As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0, the `loadInBlock` method is available on all entity types.
+
+The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
+
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
+
+```typescript
+let id = event.transaction.hash // or however the ID is constructed
+let transfer = Transfer.loadInBlock(id)
+if (transfer == null) {
+  transfer = new Transfer(id)
+}
+
+// Use the Transfer entity as before
+```
+
+> Note: If there is no entity created in the given block, `loadInBlock` will return `null` even if there is an entity with the given ID in the store.
+
+#### Looking up derived entities
+
+As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.31.0 and `@graphprotocol/graph-cli` v0.51.0, the `loadRelated` method is available.
+
+This enables loading derived entity fields from within an event handler. For example, given the following schema:
+
+```graphql
+type Token @entity {
+  id: ID!
+  holder: Holder!
+  color: String
+}
+
+type Holder @entity {
+  id: ID!
+  tokens: [Token!]!
@derivedFrom(field: "holder")
+}
+```
+
+The following code will load the `Token` entities derived from a given `Holder` entity:
+
+```typescript
+let holder = Holder.load('test-id')
+// Load the Token entities associated with a given holder
+let tokens = holder.tokens.load()
+```
+
+#### Updating existing entities
+
+There are two ways to update an existing entity:
+
+1. Load the entity with e.g. `Transfer.load(id)`, set properties on the entity, then `.save()` it back to the store.
+2. Simply create the entity with e.g. `new Transfer(id)`, set properties on the entity, then `.save()` it to the store. If the entity already exists, the changes are merged into it.
+
+Changing properties is straightforward in most cases, thanks to the generated property setters:
+
+```typescript
+let transfer = new Transfer(id)
+transfer.from = ...
+transfer.to = ...
+transfer.amount = ...
+```
+
+It is also possible to unset properties with one of the following two instructions:
+
+```typescript
+transfer.from.unset()
+transfer.from = null
+```
+
+This only works with optional properties, i.e. properties that are declared without a `!` in GraphQL. Two examples would be `owner: Bytes` or `amount: BigInt`.
+
+Updating array properties is a little more involved, as getting an array from an entity creates a copy of that array. This means array properties have to be set again explicitly after changing the array. The following assumes `entity` has a `numbers: [BigInt!]!` field.
+
+```typescript
+// This won't work
+entity.numbers.push(BigInt.fromI32(1))
+entity.save()
+
+// This will work
+let numbers = entity.numbers
+numbers.push(BigInt.fromI32(1))
+entity.numbers = numbers
+entity.save()
+```
+
+#### Removing entities from the store
+
+There is currently no way to remove an entity via the generated types.
Instead, removing an entity requires passing the name of the entity type and the entity ID to `store.remove`:
+
+```typescript
+import { store } from '@graphprotocol/graph-ts'
+...
+let id = event.transaction.hash
+store.remove('Transfer', id)
+```
+
+### Ethereum API
+
+The Ethereum API provides access to smart contracts, public state variables, contract functions, events, transactions, blocks, and the encoding and decoding of Ethereum data.
+
+#### Support for Ethereum Types
+
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
+
+The following example illustrates this. Given a Subgraph schema like
+
+```graphql
+type Transfer @entity {
+  id: Bytes!
+  from: Bytes!
+  to: Bytes!
+  amount: BigInt!
+}
+```
+
+and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity:
+
+```typescript
+let id = event.transaction.hash
+let transfer = new Transfer(id)
+transfer.from = event.params.from
+transfer.to = event.params.to
+transfer.amount = event.params.amount
+transfer.save()
+```
+
+#### Events and Block/Transaction Data
+
+Ethereum events passed to event handlers, such as the `Transfer` event in the previous examples, not only provide access to the event parameters but also to their parent transaction and the block they are part of.
The following data can be obtained from `event` instances (these classes are a part of the `ethereum` module in `graph-ts`): + +```typescript +class Event { + address: Address + logIndex: BigInt + transactionLogIndex: BigInt + logType: string | null + block: Block + transaction: Transaction + parameters: Array + receipt: TransactionReceipt | null +} + +class Block { + hash: Bytes + parentHash: Bytes + unclesHash: Bytes + author: Address + stateRoot: Bytes + transactionsRoot: Bytes + receiptsRoot: Bytes + number: BigInt + gasUsed: BigInt + gasLimit: BigInt + timestamp: BigInt + difficulty: BigInt + totalDifficulty: BigInt + size: BigInt | null + baseFeePerGas: BigInt | null +} + +class Transaction { + hash: Bytes + index: BigInt + from: Address + to: Address | null + value: BigInt + gasLimit: BigInt + gasPrice: BigInt + input: Bytes + nonce: BigInt +} + +class TransactionReceipt { + transactionHash: Bytes + transactionIndex: BigInt + blockHash: Bytes + blockNumber: BigInt + cumulativeGasUsed: BigInt + gasUsed: BigInt + contractAddress: Address + logs: Array + status: BigInt + root: Bytes + logsBloom: Bytes +} + +class Log { + address: Address + topics: Array + data: Bytes + blockHash: Bytes + blockNumber: Bytes + transactionHash: Bytes + transactionIndex: BigInt + logIndex: BigInt + transactionLogIndex: BigInt + logType: string + removed: bool | null +} +``` + +#### Access to Smart Contract State + +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. + +A common pattern is to access the contract from which an event originates. 
This is achieved with the following code: + +```typescript +// Import the generated contract class and generated Transfer event class +import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract' +// Import the generated entity class +import { Transfer } from '../generated/schema' + +export function handleTransfer(event: TransferEvent) { + // Bind the contract to the address that emitted the event + let contract = ERC20Contract.bind(event.address) + + // Access state variables and functions by calling them + let erc20Symbol = contract.symbol() +} +``` + +`Transfer` is aliased to `TransferEvent` here to avoid a naming conflict with the entity type + +As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. + +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. + +#### Handling Reverted Calls + +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: + +```typescript +let gravity = Gravity.bind(event.address) +let callResult = gravity.try_gravatarToOwner(gravatar) +if (callResult.reverted) { + log.info('getGravatar reverted', []) +} else { + let owner = callResult.value +} +``` + +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. + +#### Encoding/Decoding ABI + +Data can be encoded and decoded according to Ethereum's ABI encoding format using the `encode` and `decode` functions in the `ethereum` module. 
+
+```typescript
+import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts'
+
+let tupleArray: Array<ethereum.Value> = [
+  ethereum.Value.fromAddress(Address.fromString('0x0000000000000000000000000000000000000420')),
+  ethereum.Value.fromUnsignedBigInt(BigInt.fromI32(62)),
+]
+
+let tuple = tupleArray as ethereum.Tuple
+
+let encoded = ethereum.encode(ethereum.Value.fromTuple(tuple))!
+
+let decoded = ethereum.decode('(address,uint256)', encoded)
+```
+
+For more information:
+
+- [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types)
+- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi)
+- More [complex example](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86).
+
+#### Balance of an Address
+
+The native token balance of an address can be retrieved using the `ethereum` module. This feature is available from `apiVersion: 0.0.9`, which is defined in `subgraph.yaml`. The `getBalance()` function retrieves the balance of the specified address as of the end of the block in which the event is triggered.
+
+```typescript
+import { Address, ethereum } from '@graphprotocol/graph-ts'
+
+let address = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045')
+let balance = ethereum.getBalance(address) // returns balance in BigInt
+```
+
+#### Check if an Address is a Contract or EOA
+
+To check whether an address is a smart contract address or an externally owned address (EOA), use the `hasCode()` function from the `ethereum` module, which will return `boolean`. This feature is available from `apiVersion: 0.0.9`, which is defined in `subgraph.yaml`.
+
+```typescript
+import { Address, ethereum } from '@graphprotocol/graph-ts'
+
+let contractAddr = Address.fromString('0x2E645469f354BB4F5c8a05B3b30A929361cf77eC')
+let isContract = ethereum.hasCode(contractAddr).inner // returns true
+
+let eoa = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045')
+let isEoaContract = ethereum.hasCode(eoa).inner // returns false
+```
+
+### Logging API
+
+```typescript
+import { log } from '@graphprotocol/graph-ts'
+```
+
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
+
+The `log` API includes the following functions:
+
+- `log.debug(fmt: string, args: Array<string>): void` - logs a debug message.
+- `log.info(fmt: string, args: Array<string>): void` - logs an informational message.
+- `log.warning(fmt: string, args: Array<string>): void` - logs a warning.
+- `log.error(fmt: string, args: Array<string>): void` - logs an error message.
+- `log.critical(fmt: string, args: Array<string>): void` – logs a critical message _and_ terminates the Subgraph.
+
+The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
+
+```typescript
+log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string'])
+```
+
+#### Logging one or more values
+
+##### Logging a single value
+
+In the example below, the string value "A" is passed into an array to become `['A']` before being logged:
+
+```typescript
+let myValue = 'A'
+
+export function handleSomeEvent(event: SomeEvent): void {
+  // Displays : "My value is: A"
+  log.info('My value is: {}', [myValue])
+}
+```
+
+##### Logging a single entry from an existing array
+
+In the example below, only the first value of the argument array is logged, despite the array containing three values.
+
+```typescript
+let myArray = ['A', 'B', 'C']
+
+export function handleSomeEvent(event: SomeEvent): void {
+  // Displays : "My value is: A" (Even though three values are passed to `log.info`)
+  log.info('My value is: {}', myArray)
+}
+```
+
+##### Logging multiple entries from an existing array
+
+Each entry in the arguments array requires its own placeholder `{}` in the log message string. The example below contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged.
+
+```typescript
+let myArray = ['A', 'B', 'C']
+
+export function handleSomeEvent(event: SomeEvent): void {
+  // Displays : "My first value is: A, second value is: B, third value is: C"
+  log.info('My first value is: {}, second value is: {}, third value is: {}', myArray)
+}
+```
+
+##### Logging a specific entry from an existing array
+
+To display a specific value in the array, the indexed value must be provided.
+
+```typescript
+export function handleSomeEvent(event: SomeEvent): void {
+  // Displays : "My third value is: C"
+  log.info('My third value is: {}', [myArray[2]])
+}
+```
+
+##### Logging event information
+
+The example below logs the block number, block hash and transaction hash from an event:
+
+```typescript
+import { log } from '@graphprotocol/graph-ts'
+
+export function handleSomeEvent(event: SomeEvent): void {
+  log.debug('Block number: {}, block hash: {}, transaction hash: {}', [
+    event.block.number.toString(), // "47596000"
+    event.block.hash.toHexString(), // "0x..."
+    event.transaction.hash.toHexString(), // "0x..."
+  ])
+}
+```
+
+### IPFS API
+
+```typescript
+import { ipfs } from '@graphprotocol/graph-ts'
+```
+
+Smart contracts occasionally anchor IPFS files onchain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page.
+
+Given an IPFS hash or path, reading a file from IPFS is done as follows:
+
+```typescript
+// Put this inside an event handler in the mapping
+let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D'
+let data = ipfs.cat(hash)
+
+// Paths like `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile`
+// that include files in directories are also supported
+let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile'
+let fileData = ipfs.cat(path)
+```
+
+**Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`.
+
+It is also possible to process larger files in a streaming fashion with `ipfs.map`.
The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior:
+
+```typescript
+import { ipfs, JSONValue, Value } from '@graphprotocol/graph-ts'
+
+export function processItem(value: JSONValue, userData: Value): void {
+  // See the JSONValue documentation for details on dealing
+  // with JSON values
+  let obj = value.toObject()
+  let id = obj.get('id')
+  let title = obj.get('title')
+
+  if (!id || !title) {
+    return
+  }
+
+  // Callbacks can also create entities
+  let newItem = new Item(id.toString())
+  newItem.title = title.toString()
+  newItem.parent = userData.toString() // Set parent to "parentId"
+  newItem.save()
+}
+
+// Put this inside an event handler in the mapping
+ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json'])
+
+// Alternatively, use `ipfs.mapJSON`
+ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
+```
+
+The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
+
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
+
+### Crypto API
+
+```typescript
+import { crypto } from '@graphprotocol/graph-ts'
+```
+
+The `crypto` API makes cryptographic functions available for use in mappings. 
Right now, there is only one: + +- `crypto.keccak256(input: ByteArray): ByteArray` + +### JSON API + +```typescript +import { json, JSONValueKind } from '@graphprotocol/graph-ts' +``` + +JSON data can be parsed using the `json` API: + +- `json.fromBytes(data: Bytes): JSONValue` – parses JSON data from a `Bytes` array interpreted as a valid UTF-8 sequence +- `json.try_fromBytes(data: Bytes): Result` – safe version of `json.fromBytes`, it returns an error variant if the parsing failed +- `json.fromString(data: string): JSONValue` – parses JSON data from a valid UTF-8 `String` +- `json.try_fromString(data: string): Result` – safe version of `json.fromString`, it returns an error variant if the parsing failed + +The `JSONValue` class provides a way to pull values out of an arbitrary JSON document. Since JSON values can be booleans, numbers, arrays and more, `JSONValue` comes with a `kind` property to check the type of a value: + +```typescript +let value = json.fromBytes(...) +if (value.kind == JSONValueKind.BOOL) { + ... 
+} +``` + +In addition, there is a method to check if the value is `null`: + +- `value.isNull(): boolean` + +When the type of a value is certain, it can be converted to a [built-in type](#built-in-types) using one of the following methods: + +- `value.toBool(): boolean` +- `value.toI64(): i64` +- `value.toF64(): f64` +- `value.toBigInt(): BigInt` +- `value.toString(): string` +- `value.toArray(): Array` - (and then convert `JSONValue` with one of the 5 methods above) + +### Type Conversions Reference + +| Source(s) | Destination | Conversion function | +| -------------------- | -------------------- | ---------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | 
ByteArray.fromHexString(s) |
+| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
+
+### Data Source Metadata
+
+You can inspect the contract address, network and context of the data source that invoked the handler through the `dataSource` namespace:
+
+- `dataSource.address(): Address`
+- `dataSource.network(): string`
+- `dataSource.context(): DataSourceContext`
+
+### Entity and DataSourceContext
+
+The base `Entity` class and the child `DataSourceContext` class have helpers to dynamically set and get fields:
+
+- `setString(key: string, value: string): void`
+- `setI32(key: string, value: i32): void`
+- `setBigInt(key: string, value: BigInt): void`
+- `setBytes(key: string, value: Bytes): void`
+- `setBoolean(key: string, value: bool): void`
+- `setBigDecimal(key: string, value: BigDecimal): void`
+- `getString(key: string): string`
+- `getI32(key: string): i32`
+- `getBigInt(key: string): BigInt`
+- `getBytes(key: string): Bytes`
+- `getBoolean(key: string): boolean`
+- `getBigDecimal(key: string): BigDecimal`
+
+### DataSourceContext in Manifest
+
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. 
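+
+In mappings, these context values can be read back through `dataSource.context()` and the getters listed above. A minimal sketch (the `my_key` context key and `SomeEvent` type are illustrative and must match what your manifest and ABI actually define):
+
+```typescript
+import { dataSource } from '@graphprotocol/graph-ts'
+
+export function handleSomeEvent(event: SomeEvent): void {
+  let context = dataSource.context()
+  // `my_key` is a hypothetical key declared under `context` in the manifest
+  let value = context.getString('my_key')
+}
+```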
+ +Here is a YAML example illustrating the usage of various types in the `context` section: + +```yaml +dataSources: + - kind: ethereum/contract + name: ContractName + network: mainnet + context: + bool_example: + type: Bool + data: true + string_example: + type: String + data: 'hello' + int_example: + type: Int + data: 42 + int8_example: + type: Int8 + data: 127 + big_decimal_example: + type: BigDecimal + data: '10.99' + bytes_example: + type: Bytes + data: '0x68656c6c6f' + list_example: + type: List + data: + - type: Int + data: 1 + - type: Int + data: 2 + - type: Int + data: 3 + big_int_example: + type: BigInt + data: '1000000000000000000000000' +``` + +- `Bool`: Specifies a Boolean value (`true` or `false`). +- `String`: Specifies a String value. +- `Int`: Specifies a 32-bit integer. +- `Int8`: Specifies an 8-bit integer. +- `BigDecimal`: Specifies a decimal number. Must be quoted. +- `Bytes`: Specifies a hexadecimal string. +- `List`: Specifies a list of items. Each item needs to specify its type and data. +- `BigInt`: Specifies a large integer value. Must be quoted due to its large size. + +This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/sw/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/common-issues.mdx new file mode 100644 index 000000000000..65e8e3d4a8a3 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -0,0 +1,8 @@ +--- +title: Common AssemblyScript Issues +--- + +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. 
The following is a non-exhaustive list of these issues: + +- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. +- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/sw/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/sw/subgraphs/developing/creating/install-the-cli.mdx new file mode 100644 index 000000000000..c9d6966ef5fe --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/install-the-cli.mdx @@ -0,0 +1,105 @@ +--- +title: Install the Graph CLI +--- + +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). + +## Overview + +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. + +## Getting Started + +### Install the Graph CLI + +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+On your local machine, run one of the following commands:
+
+#### Using [npm](https://www.npmjs.com/)
+
+```bash
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+#### Using [yarn](https://yarnpkg.com/)
+
+```bash
+yarn global add @graphprotocol/graph-cli
+```
+
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
+
+## Create a Subgraph
+
+### From an Existing Contract
+
+The following command creates a Subgraph that indexes all events of an existing contract:
+
+```sh
+graph init \
+  --product subgraph-studio \
+  --from-contract <CONTRACT_ADDRESS> \
+  [--network <NETWORK>] \
+  [--abi <FILE>] \
+  [<SUBGRAPH_SLUG>]
+```
+
+- The command tries to retrieve the contract ABI from Etherscan.
+
+  - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI.
+
+- If any of the optional arguments are missing, it guides you through an interactive form.
+
+- The `<SUBGRAPH_SLUG>` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
+
+### From an Example Subgraph
+
+The following command initializes a new project from an example Subgraph:
+
+```sh
+graph init --from-example=example-subgraph
+```
+
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. 
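+
+As a sketch, a handler for the `NewGravatar` event in this example can look like the following (the import paths for the generated event and entity classes are assumptions that depend on your `graph codegen` output and project layout):
+
+```typescript
+// Generated by `graph codegen` from the ABI and schema; paths are illustrative
+import { NewGravatar } from '../generated/Gravity/Gravity'
+import { Gravatar } from '../generated/schema'
+
+export function handleNewGravatar(event: NewGravatar): void {
+  // Key the entity by the gravatar id emitted in the event
+  let gravatar = new Gravatar(event.params.id.toHex())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```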
+
+### Add New `dataSources` to an Existing Subgraph
+
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:
+
+```sh
+graph add
 <address> [<subgraph-path>]
+
+Options:
+
+      --abi <path>              Path to the contract ABI (default: download from Etherscan)
+      --contract-name           Name of the contract (default: Contract)
+      --merge-entities          Whether to merge entities with the same name (default: false)
+      --network-file <path>     Networks config file path (default: "./networks.json")
+```
+
+#### Specifics
+
+The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and create a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts.
+
+- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+
+  - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
+
+  - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`.
+
+- The contract `address` will be written to the `networks.json` for the relevant network.
+
+> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
+
+### Getting The ABIs
+
+The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:
+
+- If you are building your own project, you will likely have access to your most current ABIs.
+- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. 
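+
+Once obtained, the ABI file is referenced from the `abis` section of the Subgraph manifest so the mappings can decode events. The contract name, address, and file path below are illustrative:
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      abi: Gravity
+    mapping:
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+```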
diff --git a/website/src/pages/sw/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/sw/subgraphs/developing/creating/ql-schema.mdx new file mode 100644 index 000000000000..2eb805320753 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/ql-schema.mdx @@ -0,0 +1,324 @@ +--- +title: The Graph QL Schema +--- + +## Overview + +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. + +> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. + +### Defining Entities + +Before defining entities, it is important to take a step back and think about how your data is structured and linked. + +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. +- It may be useful to imagine entities as "objects containing data", rather than as events or functions. +- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. +- Each type that should be an entity is required to be annotated with an `@entity` directive. +- By default, entities are mutable, meaning that mappings can load existing entities, modify them and store a new version of that entity. + - Mutability comes at a price, so for entity types that will never be modified, such as those containing data extracted verbatim from the chain, it is recommended to mark them as immutable with `@entity(immutable: true)`. + - If changes happen in the same block in which the entity was created, then mappings can make changes to immutable entities. 
Immutable entities are much faster to write and to query, so they should be used whenever possible.
+
+#### Good Example
+
+The following `Gravatar` entity is structured around a Gravatar object and is a good example of how an entity could be defined.
+
+```graphql
+type Gravatar @entity(immutable: true) {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+  accepted: Boolean
+}
+```
+
+#### Bad Example
+
+The following example `GravatarAccepted` and `GravatarDeclined` entities are based around events. It is not recommended to map events or function calls to entities 1:1.
+
+```graphql
+type GravatarAccepted @entity {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+}
+
+type GravatarDeclined @entity {
+  id: Bytes!
+  owner: Bytes
+  displayName: String
+  imageUrl: String
+}
+```
+
+#### Optional and Required Fields
+
+Entity fields can be defined as required or optional. Required fields are indicated by the `!` in the schema. If the field is a scalar field, you get an error when you try to store the entity. If the field references another entity then you get this error:
+
+```
+Null value resolved for non-null field 'name'
+```
+
+Each entity must have an `id` field, which must be of type `Bytes!` or `String!`. It is generally recommended to use `Bytes!`, unless the `id` contains human-readable text, since entities with `Bytes!` ids will be faster to write and query than those with a `String!` `id`. The `id` field serves as the primary key, and needs to be unique among all entities of the same type. For historical reasons, the type `ID!` is also accepted and is a synonym for `String!`.
+
+For some entity types, the `id` for `Bytes!` is constructed from the ids of two other entities; that is possible using `concat`, e.g., `let id = left.id.concat(right.id)` to form the id from the ids of `left` and `right`. 
Similarly, to construct an id from the id of an existing entity and a counter `count`, `let id = left.id.concatI32(count)` can be used. The concatenation is guaranteed to produce unique ids as long as the length of `left` is the same for all such entities, for example, because `left.id` is an `Address`.
+
+### Built-In Scalar Types
+
+#### GraphQL Supported Scalars
+
+The following scalars are supported in the GraphQL API:
+
+| Type | Description |
+| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. |
+| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
+| `Boolean` | Scalar for `boolean` values. |
+| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
+| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from Ethereum. |
+| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
+| `BigDecimal` | High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| `Timestamp` | An `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
+
+### Enums
+
+You can also create enums within a schema. 
Enums have the following syntax:
+
+```graphql
+enum TokenStatus {
+  OriginalOwner
+  SecondOwner
+  ThirdOwner
+}
+```
+
+Once the enum is defined in the schema, you can use the string representation of the enum value to set an enum field on an entity. For example, you can set the `tokenStatus` to `SecondOwner` by first defining your entity and subsequently setting the field with `entity.tokenStatus = "SecondOwner"`.
+
+More detail on writing enums can be found in the [GraphQL documentation](https://graphql.org/learn/schema/).
+
+### Entity Relationships
+
+An entity may have a relationship to one or more other entities in your schema. These relationships may be traversed in your queries. Relationships in The Graph are unidirectional. It is possible to simulate bidirectional relationships by defining a unidirectional relationship on either "end" of the relationship.
+
+Relationships are defined on entities just like any other field except that the type specified is that of another entity.
+
+#### One-To-One Relationships
+
+Define a `Transaction` entity type with an optional one-to-one relationship with a `TransactionReceipt` entity type:
+
+```graphql
+type Transaction @entity(immutable: true) {
+  id: Bytes!
+  transactionReceipt: TransactionReceipt
+}
+
+type TransactionReceipt @entity(immutable: true) {
+  id: Bytes!
+  transaction: Transaction
+}
+```
+
+#### One-To-Many Relationships
+
+Define a `TokenBalance` entity type with a required one-to-many relationship with a Token entity type:
+
+```graphql
+type Token @entity(immutable: true) {
+  id: Bytes!
+}
+
+type TokenBalance @entity {
+  id: Bytes!
+  amount: Int!
+  token: Token!
+}
+```
+
+### Reverse Lookups
+
+Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. 
Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. + +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. + +#### Example + +We can make the balances for a token accessible from the token by deriving a `tokenBalances` field: + +```graphql +type Token @entity(immutable: true) { + id: Bytes! + tokenBalances: [TokenBalance!]! @derivedFrom(field: "token") +} + +type TokenBalance @entity { + id: Bytes! + amount: Int! + token: Token! +} +``` + +Here is an example of how to write a mapping for a Subgraph with reverse lookups: + +```typescript +let token = new Token(event.address) // Create Token +token.save() // tokenBalances is derived automatically + +let tokenBalance = new TokenBalance(event.address) +tokenBalance.amount = BigInt.fromI32(0) +tokenBalance.token = token.id // Reference stored here +tokenBalance.save() +``` + +#### Many-To-Many Relationships + +For many-to-many relationships, such as users that each may belong to any number of organizations, the most straightforward, but generally not the most performant, way to model the relationship is as an array in each of the two entities involved. If the relationship is symmetric, only one side of the relationship needs to be stored and the other side can be derived. + +#### Example + +Define a reverse lookup from a `User` entity type to an `Organization` entity type. 
In the example below, this is achieved by looking up the `members` attribute from within the `Organization` entity. In queries, the `organizations` field on `User` will be resolved by finding all `Organization` entities that include the user's ID.
+
+```graphql
+type Organization @entity {
+  id: Bytes!
+  name: String!
+  members: [User!]!
+}
+
+type User @entity {
+  id: Bytes!
+  name: String!
+  organizations: [Organization!]! @derivedFrom(field: "members")
+}
+```
+
+A more performant way to store this relationship is through a mapping table that has one entry for each `User` / `Organization` pair with a schema like
+
+```graphql
+type Organization @entity {
+  id: Bytes!
+  name: String!
+  members: [UserOrganization!]! @derivedFrom(field: "organization")
+}
+
+type User @entity {
+  id: Bytes!
+  name: String!
+  organizations: [UserOrganization!] @derivedFrom(field: "user")
+}
+
+type UserOrganization @entity {
+  id: Bytes! # Set to `user.id.concat(organization.id)`
+  user: User!
+  organization: Organization!
+}
+```
+
+This approach requires that queries descend into one additional level to retrieve, for example, the organizations for users:
+
+```graphql
+query usersWithOrganizations {
+  users {
+    organizations {
+      # this is a UserOrganization entity
+      organization {
+        name
+      }
+    }
+  }
+}
+```
+
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore a Subgraph that is often dramatically faster to index and to query.
+
+### Adding comments to the schema
+
+As per the GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below:
+
+```graphql
+type MyFirstEntity @entity {
+  # unique identifier and primary key of the entity
+  id: Bytes!
+  address: Bytes!
+}
+```
+
+## Defining Fulltext Search Fields
+
+Fulltext search queries filter and rank entities based on a text search input. 
Fulltext queries are able to return matches for similar words by processing the query text input into stems before comparing them to the indexed text data. + +A fulltext query definition includes the query name, the language dictionary used to process the text fields, the ranking algorithm used to order the results, and the fields included in the search. Each fulltext query may span multiple fields, but all included fields must be from a single entity type. + +To add a fulltext query, include a `_Schema_` type with a fulltext directive in the GraphQL schema. + +```graphql +type _Schema_ + @fulltext( + name: "bandSearch" + language: en + algorithm: rank + include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + ) + +type Band @entity { + id: Bytes! + name: String! + description: String! + bio: String + wallet: Address + labels: [Label!]! + discography: [Album!]! + members: [Musician!]! +} +``` + +The example `bandSearch` field can be used in queries to filter `Band` entities based on the text documents in the `name`, `description`, and `bio` fields. Jump to [GraphQL API - Queries](/subgraphs/querying/graphql-api/#queries) for a description of the fulltext search API and more example usage. + +```graphql +query { + bandSearch(text: "breaks & electro & detroit") { + id + name + description + wallet + } +} +``` + +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. + +## Languages supported + +Choosing a different language will have a definitive, though sometimes subtle, effect on the fulltext search API. Fields covered by a fulltext query field are examined in the context of the chosen language, so the lexemes produced by analysis and search queries vary from language to language. 
For example: when using the supported Turkish dictionary "token" is stemmed to "toke" while, of course, the English dictionary will stem it to "token". + +Supported language dictionaries: + +| Code | Dictionary | +| ------ | ---------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | Portuguese | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | + +### Ranking Algorithms + +Supported algorithms for ordering results: + +| Algorithm | Description | +| ------------- | ----------------------------------------------------------------------- | +| rank | Use the match quality (0-1) of the fulltext query to order the results. | +| proximityRank | Similar to rank but also includes the proximity of the matches. | diff --git a/website/src/pages/sw/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/sw/subgraphs/developing/creating/starting-your-subgraph.mdx new file mode 100644 index 000000000000..4931e6b1fd34 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -0,0 +1,35 @@ +--- +title: Starting Your Subgraph +--- + +## Overview + +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. + +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. + +Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. + +### Start Building + +Start the process and build a Subgraph that matches your needs: + +1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure +2. 
[Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component +3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema +4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features + +Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). + +| Version | Release notes | +| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. 
 |
diff --git a/website/src/pages/sw/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/sw/subgraphs/developing/creating/subgraph-manifest.mdx
new file mode 100644
index 000000000000..085eaf2fb533
--- /dev/null
+++ b/website/src/pages/sw/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -0,0 +1,549 @@
+---
+title: Subgraph Manifest
+---
+
+## Overview
+
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
+
+The **Subgraph definition** consists of the following files:
+
+- `subgraph.yaml`: Contains the Subgraph manifest
+
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
+
+- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
+
+### Subgraph Capabilities
+
+A single Subgraph can:
+
+- Index data from multiple smart contracts (but not multiple networks).
+
+- Index data from IPFS files using File Data Sources.
+
+- Add an entry for each contract that requires indexing to the `dataSources` array.
+
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example Subgraph listed above, `subgraph.yaml` is: + +```yaml +specVersion: 1.3.0 +description: Gravatar for Ethereum +repository: https://github.com/graphprotocol/graph-tooling +schema: + file: ./schema.graphql +indexerHints: + prune: auto +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + abi: Gravity + startBlock: 6175244 + endBlock: 7175245 + context: + foo: + type: Bool + data: true + bar: + type: String + data: 'bar' + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + - event: UpdatedGravatar(uint256,address,string,string) + handler: handleUpdatedGravatar + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCall + filter: + kind: call + file: ./src/mapping.ts +``` + +## Subgraph Entries + +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). + +The important entries to update for the manifest are: + +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. + +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. + +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. + +- `features`: a list of all used [feature](#experimental-features) names. 
+ +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in the [indexerHints](#indexer-hints) section. + +- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts. + +- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. + +- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. + +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. + +- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the `schema.graphql` file. + +- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. + +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. + +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+ +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. + +A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. + +## Event Handlers + +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. + +### Defining an Event Handler + +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + eventHandlers: + - event: Approval(address,address,uint256) + handler: handleApproval + - event: Transfer(address,address,uint256) + handler: handleTransfer + topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic. +``` + +## Call Handlers + +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. 
In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. + +Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. + +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network. + +### Defining a Call Handler + +To define a call handler in your manifest, simply add a `callHandlers` array under the data source you would like to subscribe to. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + callHandlers: + - function: createGravatar(string,string) + handler: handleCreateGravatar +``` + +The `function` is the normalized function signature to filter calls by.
The `handler` property is the name of the function in your mapping you would like to execute when the target function is called in the data source contract. + +### Mapping Function + +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: + +```typescript +import { CreateGravatarCall } from '../generated/Gravity/Gravity' +import { Transaction } from '../generated/schema' + +export function handleCreateGravatar(call: CreateGravatarCall): void { + let id = call.transaction.hash + let transaction = new Transaction(id) + transaction.displayName = call.inputs._displayName + transaction.imageUrl = call.inputs._imageUrl + transaction.save() +} +``` + +The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a subclass of `ethereum.Call`, provided by `@graphprotocol/graph-ts`, that includes the typed inputs and outputs of the call. The `CreateGravatarCall` type is generated for you when you run `graph codegen`. + +## Block Handlers + +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter. + +### Supported Filters + +#### Call Filter + +```yaml +filter: + kind: call +``` + +_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ + +> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
+ +The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. + +```yaml +dataSources: + - kind: ethereum/contract + name: Gravity + network: dev + source: + address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + abi: Gravity + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + entities: + - Gravatar + - Transaction + abis: + - name: Gravity + file: ./abis/Gravity.json + blockHandlers: + - handler: handleBlock + - handler: handleBlockWithCallToContract + filter: + kind: call +``` + +#### Polling Filter + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Polling filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleBlock + filter: + kind: polling + every: 10 +``` + +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. + +#### Once Filter + +> **Requires `specVersion` >= 0.0.8** +> +> **Note:** Once filters are only available on dataSources of `kind: ethereum`. + +```yaml +blockHandlers: + - handler: handleOnce + filter: + kind: once +``` + +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. + +```ts +export function handleOnce(block: ethereum.Block): void { + let data = new InitialData(Bytes.fromUTF8('initial')) + data.data = 'Setup data here' + data.save() +} +``` + +### Mapping Function + +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. 
+ +```typescript +import { ethereum } from '@graphprotocol/graph-ts' + +export function handleBlock(block: ethereum.Block): void { + let id = block.hash + let entity = new Block(id) + entity.save() +} +``` + +## Anonymous Events + +If you need to process anonymous events in Solidity, that can be achieved by providing the topic 0 of the event, as in the example: + +```yaml +eventHandlers: + - event: LogNote(bytes4,address,bytes32,bytes32,uint256,bytes) + topic0: '0x644843f351d3fba4abcd60109eaff9f54bac8fb8ccf0bab941009c21df21cf31' + handler: handleGive +``` + +An event will only be triggered when both the signature and topic 0 match. By default, `topic0` is equal to the hash of the event signature. + +## Transaction Receipts in Event Handlers + +Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. + +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. + +```yaml +eventHandlers: + - event: NewGravatar(uint256,address,string,string) + handler: handleNewGravatar + receipt: true +``` + +Inside the handler function, the receipt can be accessed in the `Event.receipt` field. When the `receipt` key is set to `false` or omitted in the manifest, a `null` value will be returned instead. + +## Order of Triggering Handlers + +The triggers for a data source within a block are ordered using the following process: + +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. + +These ordering rules are subject to change. 
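The ordering rules above can be sketched as a plain TypeScript comparator. This is an illustrative model only, not Graph Node's actual implementation; the `Trigger` shape and field names are assumptions made for the sketch:

```typescript
// Illustrative model of trigger ordering within a block.
type Trigger = {
  kind: 'event' | 'call' | 'block'
  txIndex: number // transaction index within the block (unused for block triggers)
  manifestIndex: number // order the handler is defined in the manifest
}

// Events sort before calls; block triggers always come last.
const kindRank = { event: 0, call: 1, block: 2 }

function orderTriggers(triggers: Trigger[]): Trigger[] {
  return [...triggers].sort((a, b) => {
    // 3. Block triggers run after all event and call triggers
    if (a.kind === 'block' || b.kind === 'block') {
      if (kindRank[a.kind] !== kindRank[b.kind]) return kindRank[a.kind] - kindRank[b.kind]
      return a.manifestIndex - b.manifestIndex
    }
    // 1. Order by transaction index within the block
    if (a.txIndex !== b.txIndex) return a.txIndex - b.txIndex
    // 2. Within the same transaction: events first, then calls
    if (kindRank[a.kind] !== kindRank[b.kind]) return kindRank[a.kind] - kindRank[b.kind]
    // Ties respect manifest definition order
    return a.manifestIndex - b.manifestIndex
  })
}
```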
+ +> **Note:** When new [dynamic data sources](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. + +## Data Source Templates + +A common pattern in EVM-compatible smart contracts is the use of registry or factory contracts, where one contract creates, manages, or references an arbitrary number of other contracts that each have their own state and events. + +The addresses of these sub-contracts may or may not be known upfront and many of these contracts may be created and/or added over time. This is why, in such cases, defining a single data source or a fixed number of data sources is impossible and a more dynamic approach is needed: _data source templates_. + +### Data Source for the Main Contract + +First, you define a regular data source for the main contract. The snippet below shows a simplified example data source for the [Uniswap](https://uniswap.org) exchange factory contract. Note the `NewExchange(address,address)` event handler. This is emitted when a new exchange contract is created onchain by the factory contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: Factory + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - Directory + abis: + - name: Factory + file: ./abis/factory.json + eventHandlers: + - event: NewExchange(address,address) + handler: handleNewExchange +``` + +### Data Source Templates for Dynamically Created Contracts + +Then, you add _data source templates_ to the manifest. These are identical to regular data sources, except that they lack a pre-defined contract address under `source`.
Typically, you would define one template for each type of sub-contract managed or referenced by the parent contract. + +```yaml +dataSources: + - kind: ethereum/contract + name: Factory + # ... other source fields for the main contract ... +templates: + - name: Exchange + kind: ethereum/contract + network: mainnet + source: + abi: Exchange + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/exchange.ts + entities: + - Exchange + abis: + - name: Exchange + file: ./abis/exchange.json + eventHandlers: + - event: TokenPurchase(address,uint256,uint256) + handler: handleTokenPurchase + - event: EthPurchase(address,uint256,uint256) + handler: handleEthPurchase + - event: AddLiquidity(address,uint256,uint256) + handler: handleAddLiquidity + - event: RemoveLiquidity(address,uint256,uint256) + handler: handleRemoveLiquidity +``` + +### Instantiating a Data Source Template + +In the final step, you update your main contract mapping to create a dynamic data source instance from one of the templates. In this example, you would change the main contract mapping to import the `Exchange` template and call the `Exchange.create(address)` method on it to start indexing the new exchange contract. + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + // Start indexing the exchange; `event.params.exchange` is the + // address of the new exchange contract + Exchange.create(event.params.exchange) +} +``` + +> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. +> +> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. 
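Following the note above, a mapping can capture the contract's pre-existing state at the moment a template is instantiated, since prior blocks will not be processed. The sketch below assumes generated binding names and a hypothetical `tokenAddress()` view function on the exchange contract:

```typescript
// Sketch only: binding names and `tokenAddress()` are assumptions.
import { NewExchange } from '../generated/Factory/Factory'
import { Exchange } from '../generated/schema'
import { Exchange as ExchangeTemplate } from '../generated/templates'
import { Exchange as ExchangeContract } from '../generated/templates/Exchange/Exchange'

export function handleNewExchange(event: NewExchange): void {
  // Read the contract's current state, since historical blocks are skipped
  let contract = ExchangeContract.bind(event.params.exchange)
  let entity = new Exchange(event.params.exchange.toHex())
  entity.token = contract.tokenAddress() // hypothetical view function
  entity.save()

  // Start indexing the new exchange from this block onward
  ExchangeTemplate.create(event.params.exchange)
}
```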
+ +### Data Source Context + +Data source contexts allow passing extra configuration when instantiating a template. In our example, let's say exchanges are associated with a particular trading pair, which is included in the `NewExchange` event. That information can be passed into the instantiated data source, like so: + +```typescript +import { Exchange } from '../generated/templates' + +export function handleNewExchange(event: NewExchange): void { + let context = new DataSourceContext() + context.setString('tradingPair', event.params.tradingPair) + Exchange.createWithContext(event.params.exchange, context) +} +``` + +Inside a mapping of the `Exchange` template, the context can then be accessed: + +```typescript +import { dataSource } from '@graphprotocol/graph-ts' + +let context = dataSource.context() +let tradingPair = context.getString('tradingPair') +``` + +There are setters and getters like `setString` and `getString` for all value types. + +## Start Blocks + +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. + +```yaml +dataSources: + - kind: ethereum/contract + name: ExampleSource + network: mainnet + source: + address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + abi: ExampleContract + startBlock: 6627917 + mapping: + kind: ethereum/events + apiVersion: 0.0.9 + language: wasm/assemblyscript + file: ./src/mappings/factory.ts + entities: + - User + abis: + - name: ExampleContract + file: ./abis/ExampleContract.json + eventHandlers: + - event: NewEvent(address,address) + handler: handleNewEvent +``` + +> **Note:** The contract creation block can be quickly looked up on Etherscan: +> +> 1. 
Search for the contract by entering its address in the search bar. +> 2. Click on the creation transaction hash in the `Contract Creator` section. +> 3. Load the transaction details page where you'll find the start block for that contract. + +## Indexer Hints + +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. + +> This feature is available from `specVersion: 1.0.0` + +### Prune + +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: + +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. + +```yaml +indexerHints: + prune: auto +``` + +> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. + +History as of a given block is required for: + +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block + +If historical data as of the block has been pruned, the above capabilities will not be available. + +> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
+ +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: + +To retain a specific amount of historical data: + +```yaml +indexerHints: + prune: 1000 # Replace 1000 with the desired number of blocks to retain +``` + +To preserve the complete history of entity states: + +```yaml +indexerHints: + prune: never +``` + +## SpecVersion Releases + +| Version | Release notes | +| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) | +| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features.
| diff --git a/website/src/pages/sw/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/sw/subgraphs/developing/creating/unit-testing-framework.mdx new file mode 100644 index 000000000000..e56e1109bc04 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/creating/unit-testing-framework.mdx @@ -0,0 +1,1402 @@ +--- +title: Unit Testing Framework +--- + +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. + +## Benefits of Using Matchstick + +- It's written in Rust and optimized for high performance. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. + +## Getting Started + +### Install Dependencies + +In order to use the test helper methods and run tests, you need to install the following dependencies: + +```sh +yarn add --dev matchstick-as +``` + +### Install PostgreSQL + +`graph-node` depends on PostgreSQL, so if you don't already have it, then you will need to install it. + +> Note: It's highly recommended to use the commands below to avoid unexpected errors. + +#### Using MacOS + +Installation command: + +```sh +brew install postgresql +``` + +Create a symlink to the latest `libpq.5.dylib`. _You may need to create this directory first:_ `/usr/local/opt/postgresql/lib/` + +```sh +ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib +``` + +#### Using Linux + +Installation command (depends on your distro): + +```sh +sudo apt install postgresql +``` + +### Using WSL (Windows Subsystem for Linux) + +You can use Matchstick on WSL using either the Docker approach or the binary approach.
As WSL can be a bit tricky, here are a few tips in case you encounter issues like + +``` +static BYTES = Symbol("Bytes") SyntaxError: Unexpected token = +``` + +or + +``` +/node_modules/gluegun/build/index.js:13 throw up; +``` + +Please make sure you're on a newer version of Node.js; graph-cli doesn't support **v10.19.0** anymore, and that is still the default version for new Ubuntu images on WSL. For instance, Matchstick is confirmed to be working on WSL with **v18.1.0**; you can switch to it either via **nvm** or by updating your global Node.js. Don't forget to delete `node_modules` and to run `npm install` again after updating your Node.js! Then, make sure you have **libpq** installed; you can do that by running + +``` +sudo apt-get install libpq-dev +``` + +And finally, do not use `graph test` (which uses your global installation of graph-cli, and for some reason that currently looks like it's broken on WSL); instead use `yarn test` or `npm run test` (which will use the local, project-level instance of graph-cli, which works like a charm). For that you would of course need to have a `"test"` script in your `package.json` file, which can be something as simple as + +```json +{ + "name": "demo-subgraph", + "version": "0.1.0", + "scripts": { + "test": "graph test", + ... + }, + "dependencies": { + "@graphprotocol/graph-cli": "^0.56.0", + "@graphprotocol/graph-ts": "^0.31.0", + "matchstick-as": "^0.6.0" + } +} +``` + +### Using Matchstick + +To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project, and run `graph test [options]` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+ +### CLI options + +This will run all tests in the test folder: + +```sh +graph test +``` + +This will run a test named `gravity.test.ts` and/or all tests inside of a folder named gravity: + +```sh +graph test gravity +``` + +This will run only that specific test file: + +```sh +graph test path/to/file.test.ts +``` + +**Options:** + +```sh +-c, --coverage Run the tests in coverage mode +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) +-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. +-h, --help Show usage information +-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) +-r, --recompile Forces tests to be recompiled +-v, --version Choose the version of the rust binary that you want to be downloaded/used +``` + +### Docker + +From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. Alternatively, you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually. + +❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI).
+ +❗ If you have previously run `graph test`, you may encounter the following error during docker build: + +```sh + error from sender: failed to xattr node_modules/binary-install-raw/bin/binary-: permission denied +``` + +In this case, create a `.dockerignore` in the root folder and add `node_modules/binary-install-raw/bin` to it. + +### Configuration + +Matchstick can be configured to use custom tests, libs, and manifest paths via a `matchstick.yaml` config file: + +```yaml +testsFolder: path/to/tests +libsFolder: path/to/libs +manifestPath: path/to/subgraph.yaml +``` + +### Demo Subgraph + +You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph). + +### Video tutorials + +You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h). + +## Tests structure + +_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_ + +### describe() + +`describe(name: String, () => {})` - Defines a test group. + +**_Notes:_** + +- _Describes are not mandatory. You can still use test() the old way, outside of the describe() blocks_ + +Example: + +```typescript +import { describe, test } from "matchstick-as/assembly/index" +import { handleNewGravatar } from "../../src/gravity" + +describe("handleNewGravatar()", () => { + test("Should create a new Gravatar entity", () => { + ... + }) +}) +``` + +Nested `describe()` example: + +```typescript +import { describe, test } from "matchstick-as/assembly/index" +import { handleUpdatedGravatar } from "../../src/gravity" + +describe("handleUpdatedGravatar()", () => { + describe("When entity exists", () => { + test("updates the entity", () => { + ... + }) + }) + + describe("When entity does not exist", () => { + test("it creates a new entity", () => { + ...
+ }) + }) +}) +``` + +--- + +### test() + +`test(name: String, () => {}, should_fail: bool)` - Defines a test case. You can use test() inside of describe() blocks or independently. + +Example: + +```typescript +import { describe, test } from "matchstick-as/assembly/index" +import { handleNewGravatar } from "../../src/gravity" + +describe("handleNewGravatar()", () => { + test("Should create a new Entity", () => { + ... + }) +}) +``` + +or + +```typescript +test("handleNewGravatar() should create a new entity", () => { + ... +}) +``` + +--- + +### beforeAll() + +Runs a code block before any of the tests in the file. If `beforeAll` is declared inside of a `describe` block, it runs at the beginning of that `describe` block. + +Examples: + +Code inside `beforeAll` will execute once before _all_ tests in the file. + +```typescript +import { describe, test, beforeAll } from "matchstick-as/assembly/index" +import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" +import { Gravatar } from "../../generated/schema" + +beforeAll(() => { + let gravatar = new Gravatar("0x0") + gravatar.displayName = "First Gravatar" + gravatar.save() + ... +}) + +describe("When the entity does not exist", () => { + test("it should create a new Gravatar with id 0x1", () => { + ... + }) +}) + +describe("When entity already exists", () => { + test("it should update the Gravatar with id 0x0", () => { + ... + }) +}) +``` + +Code inside `beforeAll` will execute once before all tests in the first describe block + +```typescript +import { describe, test, beforeAll } from "matchstick-as/assembly/index" +import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity" +import { Gravatar } from "../../generated/schema" + +describe("handleUpdatedGravatar()", () => { + beforeAll(() => { + let gravatar = new Gravatar("0x0") + gravatar.displayName = "First Gravatar" + gravatar.save() + ... + }) + + test("updates Gravatar with id 0x0", () => { + ...
+  })
+
+  test("creates new Gravatar with id 0x1", () => {
+    ...
+  })
+})
+```
+
+---
+
+### afterAll()
+
+Runs a code block after all of the tests in the file. If `afterAll` is declared inside of a `describe` block, it runs at the end of that `describe` block.
+
+Example:
+
+Code inside `afterAll` will execute once after _all_ tests in the file.
+
+```typescript
+import { describe, test, afterAll } from "matchstick-as/assembly/index"
+import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity"
+import { store } from "@graphprotocol/graph-ts"
+
+afterAll(() => {
+  store.remove("Gravatar", "0x0")
+  ...
+})
+
+describe("handleNewGravatar", () => {
+  test("creates Gravatar with id 0x0", () => {
+    ...
+  })
+})
+
+describe("handleUpdatedGravatar", () => {
+  test("updates Gravatar with id 0x0", () => {
+    ...
+  })
+})
+```
+
+Code inside `afterAll` will execute once after all tests in the first describe block.
+
+```typescript
+import { describe, test, afterAll } from "matchstick-as/assembly/index"
+import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity"
+import { store } from "@graphprotocol/graph-ts"
+
+describe("handleNewGravatar", () => {
+  afterAll(() => {
+    store.remove("Gravatar", "0x1")
+    ...
+  })
+
+  test("It creates a new entity with Id 0x0", () => {
+    ...
+  })
+
+  test("It creates a new entity with Id 0x1", () => {
+    ...
+  })
+})
+
+describe("handleUpdatedGravatar", () => {
+  test("updates Gravatar with id 0x0", () => {
+    ...
+  })
+})
+```
+
+---
+
+### beforeEach()
+
+Runs a code block before every test. If `beforeEach` is declared inside of a `describe` block, it runs before each test in that `describe` block.
+
+Examples:
+
+Code inside `beforeEach` will execute before each test.
+
+```typescript
+import { describe, test, beforeEach, clearStore } from "matchstick-as/assembly/index"
+import { handleNewGravatars } from "./utils"
+
+beforeEach(() => {
+  clearStore() // <-- clear the store before each test in the file
+})
+
+describe("handleNewGravatars", () => {
+  test("A test that requires a clean store", () => {
+    ...
+  })
+
+  test("Second test that requires a clean store", () => {
+    ...
+  })
+})
+
+...
+```
+
+Code inside `beforeEach` will execute only before each test in that describe block.
+
+```typescript
+import { describe, test, beforeEach, assert } from 'matchstick-as/assembly/index'
+import { handleUpdatedGravatar, handleNewGravatar } from '../../src/gravity'
+import { Gravatar } from '../../generated/schema'
+import { store } from '@graphprotocol/graph-ts'
+
+describe('handleUpdatedGravatars', () => {
+  beforeEach(() => {
+    let gravatar = new Gravatar('0x0')
+    gravatar.displayName = 'First Gravatar'
+    gravatar.imageUrl = ''
+    gravatar.save()
+  })
+
+  test('Updates the displayName', () => {
+    assert.fieldEquals('Gravatar', '0x0', 'displayName', 'First Gravatar')
+
+    // code that should update the displayName to 1st Gravatar
+
+    assert.fieldEquals('Gravatar', '0x0', 'displayName', '1st Gravatar')
+    store.remove('Gravatar', '0x0')
+  })
+
+  test('Updates the imageUrl', () => {
+    assert.fieldEquals('Gravatar', '0x0', 'imageUrl', '')
+
+    // code that should change the imageUrl to https://www.gravatar.com/avatar/0x0
+
+    assert.fieldEquals('Gravatar', '0x0', 'imageUrl', 'https://www.gravatar.com/avatar/0x0')
+    store.remove('Gravatar', '0x0')
+  })
+})
+```
+
+---
+
+### afterEach()
+
+Runs a code block after every test. If `afterEach` is declared inside of a `describe` block, it runs after each test in that `describe` block.
+
+Examples:
+
+Code inside `afterEach` will execute after every test.
+
+```typescript
+import { describe, test, beforeEach, afterEach, assert } from "matchstick-as/assembly/index"
+import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity"
+import { Gravatar } from "../../generated/schema"
+import { store } from "@graphprotocol/graph-ts"
+
+beforeEach(() => {
+  let gravatar = new Gravatar("0x0")
+  gravatar.displayName = "First Gravatar"
+  gravatar.save()
+})
+
+afterEach(() => {
+  store.remove("Gravatar", "0x0")
+})
+
+describe("handleNewGravatar", () => {
+  ...
+})
+
+describe("handleUpdatedGravatar", () => {
+  test("Updates the displayName", () => {
+    assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar")
+
+    // code that should update the displayName to 1st Gravatar
+
+    assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar")
+  })
+
+  test("Updates the imageUrl", () => {
+    assert.fieldEquals("Gravatar", "0x0", "imageUrl", "")
+
+    // code that should change the imageUrl to https://www.gravatar.com/avatar/0x0
+
+    assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0")
+  })
+})
+```
+
+Code inside `afterEach` will execute after each test in that describe block.
+
+```typescript
+import { describe, test, beforeEach, afterEach, assert } from "matchstick-as/assembly/index"
+import { handleUpdatedGravatar, handleNewGravatar } from "../../src/gravity"
+import { Gravatar } from "../../generated/schema"
+import { store } from "@graphprotocol/graph-ts"
+
+describe("handleNewGravatar", () => {
+  ...
+})
+
+describe("handleUpdatedGravatar", () => {
+  beforeEach(() => {
+    let gravatar = new Gravatar("0x0")
+    gravatar.displayName = "First Gravatar"
+    gravatar.imageUrl = ""
+    gravatar.save()
+  })
+
+  afterEach(() => {
+    store.remove("Gravatar", "0x0")
+  })
+
+  test("Updates the displayName", () => {
+    assert.fieldEquals("Gravatar", "0x0", "displayName", "First Gravatar")
+
+    // code that should update the displayName to 1st Gravatar
+
+    assert.fieldEquals("Gravatar", "0x0", "displayName", "1st Gravatar")
+  })
+
+  test("Updates the imageUrl", () => {
+    assert.fieldEquals("Gravatar", "0x0", "imageUrl", "")
+
+    // code that should change the imageUrl to https://www.gravatar.com/avatar/0x0
+
+    assert.fieldEquals("Gravatar", "0x0", "imageUrl", "https://www.gravatar.com/avatar/0x0")
+  })
+})
+```
+
+## Asserts
+
+```typescript
+fieldEquals(entityType: string, id: string, fieldName: string, expectedVal: string)
+
+equals(expected: ethereum.Value, actual: ethereum.Value)
+
+notInStore(entityType: string, id: string)
+
+addressEquals(address1: Address, address2: Address)
+
+bytesEquals(bytes1: Bytes, bytes2: Bytes)
+
+i32Equals(number1: i32, number2: i32)
+
+bigIntEquals(bigInt1: BigInt, bigInt2: BigInt)
+
+booleanEquals(bool1: boolean, bool2: boolean)
+
+stringEquals(string1: string, string2: string)
+
+arrayEquals(array1: Array<ethereum.Value>, array2: Array<ethereum.Value>)
+
+tupleEquals(tuple1: ethereum.Tuple, tuple2: ethereum.Tuple)
+
+assertTrue(value: boolean)
+
+assertNull(value: T)
+
+assertNotNull(value: T)
+
+entityCount(entityType: string, expectedCount: i32)
+```
+
+As of version 0.6.0, asserts support custom error messages as well:
+
+```typescript
+assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123')
+assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Value should equal 1')
+assert.notInStore('Gravatar', '0x124', 'Gravatar should not be in store')
+assert.addressEquals(Address.zero(), Address.zero(), 'Address should be zero')
+assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes should be equal')
+assert.i32Equals(2, 2, 'I32 should equal 2')
+assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt should equal 1')
+assert.booleanEquals(true, true, 'Boolean should be true')
+assert.stringEquals('1', '1', 'String should equal 1')
+assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arrays should be equal')
+assert.tupleEquals(
+  changetype<ethereum.Tuple>([ethereum.Value.fromI32(1)]),
+  changetype<ethereum.Tuple>([ethereum.Value.fromI32(1)]),
+  'Tuples should be equal',
+)
+assert.assertTrue(true, 'Should be true')
+assert.assertNull(null, 'Should be null')
+assert.assertNotNull('not null', 'Should be not null')
+assert.entityCount('Gravatar', 1, 'There should be 1 gravatar')
+assert.dataSourceCount('GraphTokenLockWallet', 1, 'GraphTokenLockWallet template should have one data source')
+assert.dataSourceExists(
+  'GraphTokenLockWallet',
+  Address.zero().toHexString(),
+  'GraphTokenLockWallet should have a data source for zero address',
+)
+```
+
+## Write a Unit Test
+
+Let's see what a simple unit test looks like, using the Gravatar examples in the [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts).
+ +Assuming we have the following handler function (along with two helper functions to make our life easier): + +```typescript +export function handleNewGravatar(event: NewGravatar): void { + let gravatar = new Gravatar(event.params.id.toHex()) + gravatar.owner = event.params.owner + gravatar.displayName = event.params.displayName + gravatar.imageUrl = event.params.imageUrl + gravatar.save() +} + +export function handleNewGravatars(events: NewGravatar[]): void { + events.forEach((event) => { + handleNewGravatar(event) + }) +} + +export function createNewGravatarEvent( + id: i32, + ownerAddress: string, + displayName: string, + imageUrl: string, +): NewGravatar { + let mockEvent = newMockEvent() + let newGravatarEvent = new NewGravatar( + mockEvent.address, + mockEvent.logIndex, + mockEvent.transactionLogIndex, + mockEvent.logType, + mockEvent.block, + mockEvent.transaction, + mockEvent.parameters, + ) + newGravatarEvent.parameters = new Array() + let idParam = new ethereum.EventParam('id', ethereum.Value.fromI32(id)) + let addressParam = new ethereum.EventParam( + 'ownerAddress', + ethereum.Value.fromAddress(Address.fromString(ownerAddress)), + ) + let displayNameParam = new ethereum.EventParam('displayName', ethereum.Value.fromString(displayName)) + let imageUrlParam = new ethereum.EventParam('imageUrl', ethereum.Value.fromString(imageUrl)) + + newGravatarEvent.parameters.push(idParam) + newGravatarEvent.parameters.push(addressParam) + newGravatarEvent.parameters.push(displayNameParam) + newGravatarEvent.parameters.push(imageUrlParam) + + return newGravatarEvent +} +``` + +We first have to create a test file in our project. 
Here is an example of how that might look:
+
+```typescript
+import { clearStore, test, assert } from 'matchstick-as/assembly/index'
+import { Gravatar } from '../../generated/schema'
+import { NewGravatar } from '../../generated/Gravity/Gravity'
+import { createNewGravatarEvent, handleNewGravatars } from '../mappings/gravity'
+
+test('Can call mappings with custom events', () => {
+  // Create a test entity and save it in the store as initial state (optional)
+  let gravatar = new Gravatar('gravatarId0')
+  gravatar.save()
+
+  // Create mock events
+  let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+  let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+  // Call mapping functions passing the events we just created
+  handleNewGravatars([newGravatarEvent, anotherGravatarEvent])
+
+  // Assert the state of the store
+  assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')
+  assert.fieldEquals('Gravatar', '12345', 'owner', '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
+  assert.fieldEquals('Gravatar', '3546', 'displayName', 'cap')
+
+  // Clear the store in order to start the next test off on a clean slate
+  clearStore()
+})
+
+test('Next test', () => {
+  //...
+})
+```
+
+That's a lot to unpack! First off, an important thing to notice is that we're importing things from `matchstick-as`, our AssemblyScript helper library (distributed as an npm module). You can find the repository [here](https://github.com/LimeChain/matchstick-as). `matchstick-as` provides us with useful testing methods and also defines the `test()` function which we will use to build our test blocks.
The rest of it is pretty straightforward - here's what happens:
+
+- We're setting up our initial state and adding one custom Gravatar entity;
+- We define two `NewGravatar` event objects along with their data, using the `createNewGravatarEvent()` function;
+- We're calling our handler method for those events - `handleNewGravatars()` - and passing in the list of our custom events;
+- We assert the state of the store. How does that work? - We're passing a unique combination of Entity type and id. Then we check a specific field on that Entity and assert that it has the value we expect it to have. We're doing this both for the initial Gravatar Entity we added to the store, as well as for the two Gravatar entities that get added when the handler function is called;
+- And lastly - we're cleaning the store using `clearStore()` so that our next test can start with a fresh and empty store object. We can define as many test blocks as we want.
+
+There we go - we've created our first test! 👏
+
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:
+
+`graph test Gravity`
+
+And if all goes well you should be greeted with the following:
+
+![Matchstick saying “All tests passed!”](/img/matchstick-tests-passed.png)
+
+## Common test scenarios
+
+### Hydrating the store with a certain state
+
+Users are able to hydrate the store with a known set of entities.
Here's an example to initialise the store with a Gravatar entity:
+
+```typescript
+let gravatar = new Gravatar('entryId')
+gravatar.save()
+```
+
+### Calling a mapping function with an event
+
+A user can create a custom event and pass it to a mapping function that is bound to the store:
+
+```typescript
+import { store } from 'matchstick-as/assembly/store'
+import { NewGravatar } from '../../generated/Gravity/Gravity'
+import { handleNewGravatar, createNewGravatarEvent } from './mapping'
+
+let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+handleNewGravatar(newGravatarEvent)
+```
+
+### Calling all of the mappings with event fixtures
+
+Users can call the mappings with test fixtures.
+
+```typescript
+import { NewGravatar } from '../../generated/Gravity/Gravity'
+import { store } from 'matchstick-as/assembly/store'
+import { handleNewGravatars, createNewGravatarEvent } from './mapping'
+
+let newGravatarEvent = createNewGravatarEvent(12345, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+let anotherGravatarEvent = createNewGravatarEvent(3546, '0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7', 'cap', 'pac')
+
+handleNewGravatars([newGravatarEvent, anotherGravatarEvent])
+```
+
+```typescript
+export function handleNewGravatars(events: NewGravatar[]): void {
+  events.forEach(event => {
+    handleNewGravatar(event)
+  })
+}
+```
+
+### Mocking contract calls
+
+Users can mock contract calls:
+
+```typescript
+import { addMetadata, assert, createMockedFunction, clearStore, test } from 'matchstick-as/assembly/index'
+import { Gravity } from '../../generated/Gravity/Gravity'
+import { Address, BigInt, ethereum } from '@graphprotocol/graph-ts'
+
+let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
+let expectedResult = Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947')
+let bigIntParam = BigInt.fromString('1234')
+createMockedFunction(contractAddress, 
'gravatarToOwner', 'gravatarToOwner(uint256):(address)')
+  .withArgs([ethereum.Value.fromSignedBigInt(bigIntParam)])
+  .returns([ethereum.Value.fromAddress(Address.fromString('0x90cBa2Bbb19ecc291A12066Fd8329D65FA1f1947'))])
+
+let gravity = Gravity.bind(contractAddress)
+let result = gravity.gravatarToOwner(bigIntParam)
+
+assert.equals(ethereum.Value.fromAddress(expectedResult), ethereum.Value.fromAddress(result))
+```
+
+As demonstrated, in order to mock a contract call and hardcode a return value, the user must provide a contract address, function name, function signature, an array of arguments, and of course - the return value.
+
+Users can also mock function reverts:
+
+```typescript
+let contractAddress = Address.fromString('0x89205A3A3b2A69De6Dbf7f01ED13B2108B2c43e7')
+createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(string,string)')
+  .withArgs([ethereum.Value.fromAddress(contractAddress)])
+  .reverts()
+```
+
+### Mocking IPFS files (from matchstick 0.4.1)
+
+Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments: the first one is the IPFS file hash/path and the second one is the path to a local file.
+
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:
+
+`.test.ts` file:
+
+```typescript
+import { assert, clearStore, test, mockIpfsFile } from 'matchstick-as/assembly/index'
+import { ipfs, Value } from '@graphprotocol/graph-ts'
+import { gravatarFromIpfs } from './utils'
+
+// Export ipfs.map() callback in order for matchstick to detect it
+export { processGravatar } from './utils'
+
+test('ipfs.cat', () => {
+  mockIpfsFile('ipfsCatfileHash', 'tests/ipfs/cat.json')
+
+  assert.entityCount(GRAVATAR_ENTITY_TYPE, 0)
+
+  gravatarFromIpfs()
+
+  assert.entityCount(GRAVATAR_ENTITY_TYPE, 1)
+  assert.fieldEquals(GRAVATAR_ENTITY_TYPE, '1', 'imageUrl', 'https://i.ytimg.com/vi/MELP46s8Cic/maxresdefault.jpg')
+
+  clearStore()
+})
+
+test('ipfs.map', () => {
+  mockIpfsFile('ipfsMapfileHash', 'tests/ipfs/map.json')
+
+  assert.entityCount(GRAVATAR_ENTITY_TYPE, 0)
+
+  ipfs.map('ipfsMapfileHash', 'processGravatar', Value.fromString('Gravatar'), ['json'])
+
+  assert.entityCount(GRAVATAR_ENTITY_TYPE, 3)
+  assert.fieldEquals(GRAVATAR_ENTITY_TYPE, '1', 'displayName', 'Gravatar1')
+  assert.fieldEquals(GRAVATAR_ENTITY_TYPE, '2', 'displayName', 'Gravatar2')
+  assert.fieldEquals(GRAVATAR_ENTITY_TYPE, '3', 'displayName', 'Gravatar3')
+})
+```
+
+`utils.ts` file:
+
+```typescript
+import { Address, ethereum, JSONValue, Value, ipfs, json, Bytes } from "@graphprotocol/graph-ts"
+import { Gravatar } from "../../generated/schema"
+
+...
+
+// ipfs.map callback
+export function processGravatar(value: JSONValue, userData: Value): void {
+  // See the JSONValue documentation for details on dealing
+  // with JSON values
+  let obj = value.toObject()
+  let id = obj.get('id')
+
+  if (!id) {
+    return
+  }
+
+  // Callbacks can also create entities
+  let gravatar = new Gravatar(id.toString())
+  gravatar.displayName = userData.toString() + id.toString()
+  gravatar.save()
+}
+
+// function that calls ipfs.cat
+export function gravatarFromIpfs(): void {
+  let rawData = ipfs.cat('ipfsCatfileHash')
+
+  if (!rawData) {
+    return
+  }
+
+  let jsonData = json.fromBytes(rawData as Bytes).toObject()
+
+  let id = jsonData.get('id')
+  let url = jsonData.get('imageUrl')
+
+  if (!id || !url) {
+    return
+  }
+
+  let gravatar = new Gravatar(id.toString())
+  gravatar.imageUrl = url.toString()
+  gravatar.save()
+}
+```
+
+### Asserting the state of the store
+
+Users are able to assert the final (or midway) state of the store through asserting entities. In order to do this, the user has to supply an Entity type, the specific ID of an Entity, the name of a field on that Entity, and the expected value of the field. Here's a quick example:
+
+```typescript
+import { assert } from 'matchstick-as/assembly/index'
+import { Gravatar } from '../generated/schema'
+
+let gravatar = new Gravatar('gravatarId0')
+gravatar.save()
+
+assert.fieldEquals('Gravatar', 'gravatarId0', 'id', 'gravatarId0')
+```
+
+Running the assert.fieldEquals() function will check for equality of the given field against the given expected value. The test will fail and an error message will be outputted if the values are **NOT** equal. Otherwise the test will pass successfully.
+
+### Interacting with Event metadata
+
+Users can use default transaction metadata, which could be returned as an ethereum.Event by using the `newMockEvent()` function.
The following example shows how you can read/write to those fields on the Event object:
+
+```typescript
+// Read
+let logType = newGravatarEvent.logType
+
+// Write
+let UPDATED_ADDRESS = '0xB16081F360e3847006dB660bae1c6d1b2e17eC2A'
+newGravatarEvent.address = Address.fromString(UPDATED_ADDRESS)
+```
+
+### Asserting variable equality
+
+```typescript
+assert.equals(ethereum.Value.fromString('hello'), ethereum.Value.fromString('hello'))
+```
+
+### Asserting that an Entity is **not** in the store
+
+Users can assert that an entity does not exist in the store. The function takes an entity type and an id. If the entity is in fact in the store, the test will fail with a relevant error message. Here's a quick example of how to use this functionality:
+
+```typescript
+assert.notInStore('Gravatar', '23')
+```
+
+### Printing the whole store, or single entities from it (for debug purposes)
+
+You can print the whole store to the console using this helper function:
+
+```typescript
+import { logStore } from 'matchstick-as/assembly/store'
+
+logStore()
+```
+
+As of version 0.6.0, `logStore` no longer prints derived fields; instead, users can use the new `logEntity` function. Of course, `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, entity id and a `showRelated` flag to indicate if users want to print the related derived entities.
+
+```typescript
+import { logEntity } from 'matchstick-as/assembly/store'
+
+logEntity('Gravatar', '23', true)
+```
+
+### Expected failure
+
+Users can have expected test failures, using the `shouldFail` flag on the `test()` functions:
+
+```typescript
+test(
+  'Should throw an error',
+  () => {
+    throw new Error()
+  },
+  true,
+)
+```
+
+If the test is marked with shouldFail = true but DOES NOT fail, that will show up as an error in the logs and the test block will fail. Also, if a test marked with shouldFail = false (the default state) throws, the test executor will crash.
+
+### Logging
+
+Having custom logs in the unit tests is exactly the same as logging in the mappings. The difference is that the log object needs to be imported from matchstick-as rather than graph-ts. Here's a simple example with all non-critical log types:
+
+```typescript
+import { test } from "matchstick-as/assembly/index";
+import { log } from "matchstick-as/assembly/log";
+
+test("Success", () => {
+  log.success("Success!", []);
+});
+test("Error", () => {
+  log.error("Error :( ", []);
+});
+test("Debug", () => {
+  log.debug("Debugging...", []);
+});
+test("Info", () => {
+  log.info("Info!", []);
+});
+test("Warning", () => {
+  log.warning("Warning!", []);
+});
+```
+
+Users can also simulate a critical failure, like so:
+
+```typescript
+test('Blow everything up', () => {
+  log.critical('Boom!')
+})
+```
+
+Logging critical errors will stop the execution of the tests and blow everything up. After all - we want to make sure your code doesn't have critical logs in deployment, and you should notice right away if that were to happen.
+
+### Testing derived fields
+
+Testing derived fields is a feature which allows users to set a field on a certain entity and have another entity be updated automatically if it derives one of its fields from the first entity.
+
+Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so:
+
+```typescript
+let entity = ExampleEntity.load('id')
+let derivedEntity = entity.derived_entity
+```
+
+As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node; the derived entities can be accessed the same way as in the handlers.
+
+```typescript
+test('Derived fields example test', () => {
+  let mainAccount = GraphAccount.load('12')!
+
+  assert.assertNull(mainAccount.get('nameSignalTransactions'))
+  assert.assertNull(mainAccount.get('operatorOf'))
+
+  let operatedAccount = GraphAccount.load('1')!
+  operatedAccount.operators = [mainAccount.id]
+  operatedAccount.save()
+
+  mockNameSignalTransaction('1234', mainAccount.id)
+  mockNameSignalTransaction('2', mainAccount.id)
+
+  mainAccount = GraphAccount.load('12')!
+
+  assert.assertNull(mainAccount.get('nameSignalTransactions'))
+  assert.assertNull(mainAccount.get('operatorOf'))
+
+  const nameSignalTransactions = mainAccount.nameSignalTransactions.load()
+  const operatorsOfMainAccount = mainAccount.operatorOf.load()
+
+  assert.i32Equals(2, nameSignalTransactions.length)
+  assert.i32Equals(1, operatorsOfMainAccount.length)
+
+  assert.stringEquals('1', operatorsOfMainAccount[0].id)
+
+  mockNameSignalTransaction('2345', mainAccount.id)
+
+  let nst = NameSignalTransaction.load('1234')!
+  nst.signer = '11'
+  nst.save()
+
+  store.remove('NameSignalTransaction', '2')
+
+  mainAccount = GraphAccount.load('12')!
+  assert.i32Equals(1, mainAccount.nameSignalTransactions.load().length)
+})
+```
+
+### Testing `loadInBlock`
+
+As of version `0.6.0`, users can test `loadInBlock` by using `mockInBlockStore`, which allows mocking entities in the block cache.
+
+```typescript
+import { afterAll, assert, beforeAll, clearInBlockStore, describe, mockInBlockStore, test } from 'matchstick-as'
+import { Gravatar } from '../../generated/schema'
+
+describe('loadInBlock', () => {
+  beforeAll(() => {
+    let gravatar = new Gravatar('gravatarId0')
+    mockInBlockStore('Gravatar', 'gravatarId0', gravatar)
+  })
+
+  afterAll(() => {
+    clearInBlockStore()
+  })
+
+  test('Can use entity.loadInBlock() to retrieve entity from cache store in the current block', () => {
+    let retrievedGravatar = Gravatar.loadInBlock('gravatarId0')
+    assert.stringEquals('gravatarId0', retrievedGravatar!.get('id')!.toString())
+  })
+
+  test("Returns null when calling entity.loadInBlock() if an entity doesn't exist in the current block", () => {
+    let retrievedGravatar = Gravatar.loadInBlock('IDoNotExist')
+    assert.assertNull(retrievedGravatar)
+  })
+})
+```
+
+### Testing dynamic data sources
+
+Testing dynamic data sources can be done by mocking the return value of the `context()`, `address()` and `network()` functions of the dataSource namespace. These functions currently return the following: `context()` - returns an empty entity (DataSourceContext), `address()` - returns `0x0000000000000000000000000000000000000000`, `network()` - returns `mainnet`. The `create(...)` and `createWithContext(...)` functions are mocked to do nothing so they don't need to be called in the tests at all. Changes to the return values can be done through the functions of the `dataSourceMock` namespace in `matchstick-as` (version 0.3.0+).
+
+Example below:
+
+First we have the following event handler (which has been intentionally repurposed to showcase datasource mocking):
+
+```typescript
+export function handleApproveTokenDestinations(event: ApproveTokenDestinations): void {
+  let tokenLockWallet = TokenLockWallet.load(dataSource.address().toHexString())!
+  if (dataSource.network() == 'rinkeby') {
+    tokenLockWallet.tokenDestinationsApproved = true
+  }
+  let context = dataSource.context()
+  if (context.get('contextVal')!.toI32() > 0) {
+    tokenLockWallet.setBigInt('tokensReleased', BigInt.fromI32(context.get('contextVal')!.toI32()))
+  }
+  tokenLockWallet.save()
+}
+```
+
+And then we have the test using one of the methods in the dataSourceMock namespace to set a new return value for all of the dataSource functions:
+
+```typescript
+import { assert, test, newMockEvent, dataSourceMock } from 'matchstick-as/assembly/index'
+import { Address, BigInt, DataSourceContext, Value } from '@graphprotocol/graph-ts'
+
+import { handleApproveTokenDestinations } from '../../src/token-lock-wallet'
+import { ApproveTokenDestinations } from '../../generated/templates/GraphTokenLockWallet/GraphTokenLockWallet'
+import { TokenLockWallet } from '../../generated/schema'
+
+test('Data source simple mocking example', () => {
+  let addressString = '0xA16081F360e3847006dB660bae1c6d1b2e17eC2A'
+  let address = Address.fromString(addressString)
+
+  let wallet = new TokenLockWallet(address.toHexString())
+  wallet.save()
+  let context = new DataSourceContext()
+  context.set('contextVal', Value.fromI32(325))
+  dataSourceMock.setReturnValues(addressString, 'rinkeby', context)
+  let event = changetype<ApproveTokenDestinations>(newMockEvent())
+
+  assert.assertTrue(!wallet.tokenDestinationsApproved)
+
+  handleApproveTokenDestinations(event)
+
+  wallet = TokenLockWallet.load(address.toHexString())!
+  assert.assertTrue(wallet.tokenDestinationsApproved)
+  assert.bigIntEquals(wallet.tokensReleased, BigInt.fromI32(325))
+
+  dataSourceMock.resetValues()
+})
+```
+
+Notice that `dataSourceMock.resetValues()` is called at the end. That's because the values are remembered when they are changed and need to be reset if you want to go back to the default values.
+ +### Testing dynamic data source creation + +As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this: + +- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template +- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created +- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes +- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes + +#### Testing `ethereum/contract` templates + +```typescript +test('ethereum/contract dataSource creation example', () => { + // Assert there are no dataSources created from GraphTokenLockWallet template + assert.dataSourceCount('GraphTokenLockWallet', 0) + + // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A + GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) + + // Assert the dataSource has been created + assert.dataSourceCount('GraphTokenLockWallet', 1) + + // Add a second dataSource with context + let context = new DataSourceContext() + context.set('contextVal', Value.fromI32(325)) + + GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context) + + // Assert there are now 2 dataSources + assert.dataSourceCount('GraphTokenLockWallet', 2) + + // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created + // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists + 
assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase())
+
+  logDataSources('GraphTokenLockWallet')
+})
+```
+
+##### Example `logDataSource` output
+
+```bash
+🛠 {
+  "0xa16081f360e3847006db660bae1c6d1b2e17ec2a": {
+    "kind": "ethereum/contract",
+    "name": "GraphTokenLockWallet",
+    "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a",
+    "context": null
+  },
+  "0xa16081f360e3847006db660bae1c6d1b2e17ec2b": {
+    "kind": "ethereum/contract",
+    "name": "GraphTokenLockWallet",
+    "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b",
+    "context": {
+      "contextVal": {
+        "type": "Int",
+        "data": 325
+      }
+    }
+  }
+}
+```
+
+#### Testing `file/ipfs` templates
+
+Similarly to contract dynamic data sources, users can test file data sources and their handlers.
+
+##### Example `subgraph.yaml`
+
+```yaml
+...
+templates:
+  - kind: file/ipfs
+    name: GraphTokenLockMetadata
+    network: mainnet
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      file: ./src/token-lock-wallet.ts
+      handler: handleMetadata
+      entities:
+        - TokenLockMetadata
+      abis:
+        - name: GraphTokenLockWallet
+          file: ./abis/GraphTokenLockWallet.json
+```
+
+##### Example `schema.graphql`
+
+```graphql
+"""
+Token Lock Wallets which hold locked GRT
+"""
+type TokenLockMetadata @entity {
+  "The address of the token lock wallet"
+  id: ID!
+  "Start time of the release schedule"
+  startTime: BigInt!
+  "End time of the release schedule"
+  endTime: BigInt!
+  "Number of periods between start time and end time"
+  periods: BigInt!
+  "Time when the releases start"
+  releaseStartTime: BigInt!
+}
+```
+
+##### Example `metadata.json`
+
+```json
+{
+  "startTime": 1,
+  "endTime": 1,
+  "periods": 1,
+  "releaseStartTime": 1
+}
+```
+
+##### Example handler
+
+```typescript
+export function handleMetadata(content: Bytes): void {
+  // dataSource.stringParam() returns the File DataSource CID
+  // stringParam() will be mocked in the handler test
+  // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files
+  let tokenMetadata = new TokenLockMetadata(dataSource.stringParam())
+  const value = json.fromBytes(content).toObject()
+
+  if (value) {
+    const startTime = value.get('startTime')
+    const endTime = value.get('endTime')
+    const periods = value.get('periods')
+    const releaseStartTime = value.get('releaseStartTime')
+
+    if (startTime && endTime && periods && releaseStartTime) {
+      tokenMetadata.startTime = startTime.toBigInt()
+      tokenMetadata.endTime = endTime.toBigInt()
+      tokenMetadata.periods = periods.toBigInt()
+      tokenMetadata.releaseStartTime = releaseStartTime.toBigInt()
+    }
+
+    tokenMetadata.save()
+  }
+}
+```
+
+##### Example test
+
+```typescript
+import { assert, test, dataSourceMock, readFile } from 'matchstick-as'
+import { Address, BigInt, Bytes, DataSourceContext, ipfs, json, store, Value } from '@graphprotocol/graph-ts'
+
+import { handleMetadata } from '../../src/token-lock-wallet'
+import { TokenLockMetadata } from '../../generated/schema'
+import { GraphTokenLockMetadata } from '../../generated/templates'
+
+test('file/ipfs dataSource creation example', () => {
+  // Generate the dataSource CID from the ipfsHash + ipfs path file
+  // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json
+  const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
+  const CID = `${ipfshash}/example.json`
+
+  // Create a new dataSource using the generated CID
+  GraphTokenLockMetadata.create(CID)
+
+  // Assert the dataSource has been created
+ 
assert.dataSourceCount('GraphTokenLockMetadata', 1)
+  assert.dataSourceExists('GraphTokenLockMetadata', CID)
+  logDataSources('GraphTokenLockMetadata')
+
+  // Now we have to mock the dataSource metadata and specifically dataSource.stringParam()
+  // dataSource.stringParams actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as
+  // First we will reset the values and then use dataSourceMock.setAddress() to set the CID
+  dataSourceMock.resetValues()
+  dataSourceMock.setAddress(CID)
+
+  // Now we need to generate the Bytes to pass to the dataSource handler
+  // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes
+  const content = readFile(`path/to/metadata.json`)
+  handleMetadata(content)
+
+  // Now we will test if a TokenLockMetadata was created
+  const metadata = TokenLockMetadata.load(CID)
+
+  assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1))
+  assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1))
+  assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1))
+  assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1))
+})
+```
+
+## Test Coverage
+
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+
+The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
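The handler-name check described above can be sketched in a few lines. This is purely illustrative and not Matchstick's actual implementation — it assumes a handler counts as covered when a call to its exported name appears in the `wat` text:

```typescript
// Illustrative sketch of handler-coverage checking over a `wat` text dump.
// NOT Matchstick's real implementation — just the idea described above.

function coveredHandlers(watText: string, handlers: string[]): string[] {
  // A handler counts as "tested" if the module calls it by name.
  return handlers.filter((h) => watText.includes(`call $${h}`))
}

function coveragePercent(watText: string, handlers: string[]): number {
  const covered = coveredHandlers(watText, handlers).length
  // One decimal place, like the report output above
  return Math.round((covered / handlers.length) * 1000) / 10
}

// Example: two of three handlers appear in the dump.
const wat = '(func $test (call $handleNewGravatar) (call $handleCreateGravatar))'
const all = ['handleNewGravatar', 'handleUpdatedGravatar', 'handleCreateGravatar']
console.log(coveredHandlers(wat, all)) // ['handleNewGravatar', 'handleCreateGravatar']
console.log(coveragePercent(wat, all)) // 66.7
```

The real tool works on the compiled binaries, but the exported-name lookup is why handlers must be exported from the test file, as explained below.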
+ +### Prerequisites + +To run the test coverage functionality provided in **Matchstick**, there are a few things you need to prepare beforehand: + +#### Export your handlers + +In order for **Matchstick** to check which handlers are being run, those handlers need to be exported from the **test file**. So for instance in our example, in our gravity.test.ts file we have the following handler being imported: + +```typescript +import { handleNewGravatar } from '../../src/gravity' +``` + +In order for that function to be visible (for it to be included in the `wat` file **by name**) we need to also export it, like this: + +```typescript +export { handleNewGravatar } +``` + +### Usage + +Once that's all set up, to run the test coverage tool, simply run: + +```sh +graph test -- -c +``` + +You could also add a custom `coverage` command to your `package.json` file, like so: + +```typescript + "scripts": { + /.../ + "coverage": "graph test -- -c" + }, +``` + +That will execute the coverage tool and you should see something like this in the terminal: + +```sh +$ graph test -c +Skipping download/install step because binary already exists at /Users/petko/work/demo-subgraph/node_modules/binary-install-raw/bin/0.4.0 + +___ ___ _ _ _ _ _ +| \/ | | | | | | | (_) | | +| . . | __ _| |_ ___| |__ ___| |_ _ ___| | __ +| |\/| |/ _` | __/ __| '_ \/ __| __| |/ __| |/ / +| | | | (_| | || (__| | | \__ \ |_| | (__| < +\_| |_/\__,_|\__\___|_| |_|___/\__|_|\___|_|\_\ + +Compiling... + +Running in coverage report mode. + ️ +Reading generated test modules... 🔎️ + +Generating coverage report 📝 + +Handlers for source 'Gravity': +Handler 'handleNewGravatar' is tested. +Handler 'handleUpdatedGravatar' is not tested. +Handler 'handleCreateGravatar' is tested. +Test coverage: 66.7% (2/3 handlers). + +Handlers for source 'GraphTokenLockWallet': +Handler 'handleTokensReleased' is not tested. +Handler 'handleTokensWithdrawn' is not tested. +Handler 'handleTokensRevoked' is not tested. 
+Handler 'handleManagerUpdated' is not tested.
+Handler 'handleApproveTokenDestinations' is not tested.
+Handler 'handleRevokeTokenDestinations' is not tested.
+Test coverage: 0.0% (0/6 handlers).
+
+Global test coverage: 22.2% (2/9 handlers).
+```
+
+### Test run time duration in the log output
+
+The log output includes the test run duration. Here's an example:
+
+`[Thu, 31 Mar 2022 13:54:54 +0300] Program executed in: 42.270ms.`
+
+## Common compiler errors
+
+> Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined
+
+This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/subgraphs/developing/creating/graph-ts/api/#logging-api) instead.
+
+> ERROR TS2554: Expected ? arguments, but got ?.
+>
+> return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt);
+>
+> in ~lib/matchstick-as/assembly/defaults.ts(18,12)
+>
+> ERROR TS2554: Expected ? arguments, but got ?.
+>
+> return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt);
+>
+> in ~lib/matchstick-as/assembly/defaults.ts(24,12)
+
+The mismatch in arguments is caused by a version mismatch between `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version.
+
+## Additional Resources
+
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme).
+ +## Feedback + +If you have any questions, feedback, feature requests or just want to reach out, the best place would be The Graph Discord where we have a dedicated channel for Matchstick, called 🔥| unit-testing. diff --git a/website/src/pages/sw/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/sw/subgraphs/developing/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..3b2b1bbc70ae --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/deploying/multiple-networks.mdx @@ -0,0 +1,242 @@ +--- +title: Deploying a Subgraph to Multiple Networks +sidebarTitle: Deploying to Multiple Networks +--- + +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). + +## Deploying the Subgraph to multiple networks + +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. 
+ +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. 
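The substitution that `graph build --network` performs can be pictured with a short sketch. This is not the actual `graph-cli` code, only an illustration of the idea — each dataSource's `network` field is overwritten, and its `source` is merged with the matching entry from the networks config:

```typescript
// Illustration of the networks.json substitution idea (not actual graph-cli code).

interface SourceDef { address?: string; startBlock?: number }
interface DataSource { name: string; network: string; source: SourceDef }
type NetworksConfig = Record<string, Record<string, SourceDef>>

function applyNetwork(dataSources: DataSource[], config: NetworksConfig, network: string): DataSource[] {
  const netConf = config[network]
  if (!netConf) throw new Error(`network "${network}" not found in config`)
  return dataSources.map((ds) => ({
    ...ds,
    network,
    // Keep the existing source fields when the config has no entry for this dataSource
    source: { ...ds.source, ...(netConf[ds.name] ?? {}) },
  }))
}

const manifest: DataSource[] = [{ name: 'Gravity', network: 'mainnet', source: { address: '0x123...' } }]
const networks: NetworksConfig = { sepolia: { Gravity: { address: '0xabc...' } } }
const patched = applyNetwork(manifest, networks, 'sepolia')
console.log(patched[0].network, patched[0].source.address) // sepolia 0xabc...
```

The CLI does the equivalent rewrite directly on `subgraph.yaml` before compiling, which is why the manifest shown above ends up with the `sepolia` values.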
+ +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio Subgraph archive policy + +A Subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The Subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. + +Every Subgraph affected with this policy has an option to bring the version in question back. + +## Checking Subgraph health + +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. 
+ +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/src/pages/sw/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/sw/subgraphs/developing/deploying/using-subgraph-studio.mdx new file mode 100644 index 000000000000..77d10212c770 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -0,0 +1,131 @@ +--- +title: Deploying Using Subgraph Studio +--- + +Learn how to deploy your Subgraph to Subgraph Studio. + +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. 
+ +## Subgraph Studio Overview + +In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: + +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs +- Restrict your API keys to specific domains and allow only certain Indexers to query with them +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network +- Manage your billing + +## Install The Graph CLI + +Before deploying, you must install The Graph CLI. + +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use The Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. + +### Install with yarn + +```bash +yarn global add @graphprotocol/graph-cli +``` + +### Install with npm + +```bash +npm install -g @graphprotocol/graph-cli +``` + +## Get Started + +1. Open [Subgraph Studio](https://thegraph.com/studio/). +2. Connect your wallet to sign in. + - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. + +> Important: You need an API key to query Subgraphs + +### How to Create a Subgraph in Subgraph Studio + + + +> For additional written detail, review the [Quick Start](/subgraphs/quick-start/). + +### Subgraph Compatibility with The Graph Network + +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). 
For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. + +## Initialize Your Subgraph + +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: + +```bash +graph init +``` + +You can find the `` value on your Subgraph details page in Subgraph Studio, see image below: + +![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) + +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. + +## Graph Auth + +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. + +Then, use the following command to authenticate from the CLI: + +```bash +graph auth +``` + +## Deploying a Subgraph + +Once you are ready, you can deploy your Subgraph to Subgraph Studio. + +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. + +Use the following CLI command to deploy your Subgraph: + +```bash +graph deploy +``` + +After running this command, the CLI will ask for a version label. + +- It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as `v1`, `version1`, or `asdf`. +- The labels you create will be visible in Graph Explorer and can be used by curators to decide if they want to signal on a specific version or not, so choose them wisely. 
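Because Curators see these version labels in Graph Explorer, some teams validate them locally before deploying. A small illustrative guard — the CLI itself accepts any string, so this is purely a local convention check, and the function name is our own:

```typescript
// Illustrative pre-deploy check: enforce a semver-style version label locally.
// The CLI accepts any string — this only encodes the recommended convention.

const SEMVER = /^v?\d+\.\d+\.\d+$/

function isValidVersionLabel(label: string): boolean {
  return SEMVER.test(label)
}

console.log(isValidVersionLabel('0.0.1')) // true
console.log(isValidVersionLabel('v1.2.3')) // true
console.log(isValidVersionLabel('asdf')) // false
```

A check like this can run in a `predeploy` script so that throwaway labels never reach Graph Explorer.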
+ +## Testing Your Subgraph + +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. + +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. + +## Publish Your Subgraph + +In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). + +## Versioning Your Subgraph with the CLI + +If you want to update your Subgraph, you can do the following: + +- You can deploy a new version to Studio using the CLI (it will only be private at this point). +- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. + +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. + +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). + +## Automatic Archiving of Subgraph Versions + +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. 
Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. + +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. + +![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/sw/subgraphs/developing/developer-faq.mdx b/website/src/pages/sw/subgraphs/developing/developer-faq.mdx new file mode 100644 index 000000000000..e45141294523 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/developer-faq.mdx @@ -0,0 +1,148 @@ +--- +title: Developer FAQ +sidebarTitle: FAQ +--- + +This page summarizes some of the most common questions for developers building on The Graph. + +## Subgraph Related + +### 1. What is a Subgraph? + +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. + +### 2. What is the first step to create a Subgraph? + +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). + +### 3. Can I still create a Subgraph if my smart contracts don't have events? + +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. + +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. + +### 4. 
Can I change the GitHub account associated with my Subgraph?
+
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
+
+### 5. How do I update a Subgraph on mainnet?
+
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
+
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
+
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
+
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
+
+Take a look at the `Access to smart contract state` section of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
+
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
+
+Not currently, as mappings are written in AssemblyScript.
+
+One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+
+### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+
+### 10. How are templates different from data sources?
+
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template.
When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
+
+Check out the "Instantiating a data source template" section in [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
+
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+
+Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.
+
+You can also use the `graph add` command to add a new dataSource.
+
+### 12. In what order are the event, block, and call handlers triggered for a data source?
+
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first, then call handlers, each type respecting the order in which they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. However, these ordering rules are subject to change.
+
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered.
+
+### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?
+
+You can run the following command:
+
+```sh
+docker pull graphprotocol/graph-node:latest
+```
+
+> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node.
+
+### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events?
+ +If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. + +### 15. Can I delete my Subgraph? + +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. + +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/supported-networks/). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? + +Yes. You can do this by importing `graph-ts` as per the example below: + +```javascript +import { dataSource } from '@graphprotocol/graph-ts' + +dataSource.network() +dataSource.address() +``` + +### 18. Do you support block and call handlers on Sepolia? + +Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. + +## Indexing & Querying Related + +### 19. Is it possible to specify what block to start indexing on? + +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) + +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync + +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) + +### 21. 
Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
+
+Yes! Try the following command, substituting "organization/subgraphName" with the organization under which it is published and the name of your Subgraph:
+
+```sh
+curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql
+```
+
+### 22. Is there a limit to how many objects The Graph can return per query?
+
+By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection, and beyond that, you can paginate with:
+
+```graphql
+someCollection(first: 1000, skip: ) { ... }
+```
+
+### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
+
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+
+## Miscellaneous
+
+### 24. Is it possible to use Apollo Federation on top of graph-node?
+
+Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service.
+
+### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
+ +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/src/pages/sw/subgraphs/developing/introduction.mdx b/website/src/pages/sw/subgraphs/developing/introduction.mdx new file mode 100644 index 000000000000..06bc2b76104d --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/introduction.mdx @@ -0,0 +1,31 @@ +--- +title: Introduction to Subgraph Development +sidebarTitle: Introduction +--- + +To start coding right away, go to [Developer Quick Start](/subgraphs/quick-start/). + +## Overview + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. + +### Developer Actions + +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. + +### What are Subgraphs? + +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. 
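To make this concrete, querying a Subgraph from a dapp boils down to a single GraphQL POST request. A minimal sketch — the endpoint URL and the `tokens` entity below are placeholders for your own Subgraph's query URL and schema:

```typescript
// Sketch of querying a Subgraph's GraphQL endpoint. The endpoint URL and the
// `tokens` entity are placeholders, not a real Subgraph.

const QUERY = `{
  tokens(first: 5) {
    id
    owner
  }
}`

// Build the JSON body a GraphQL-over-HTTP request expects.
function buildRequestBody(query: string): string {
  return JSON.stringify({ query })
}

async function querySubgraph(endpoint: string, query: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildRequestBody(query),
  })
  const { data, errors } = await res.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data
}

// Usage (placeholder URL pattern for a Studio query endpoint):
// querySubgraph('https://api.studio.thegraph.com/query/<ID>/<NAME>/<VERSION>', QUERY)
```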
diff --git a/website/src/pages/sw/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/sw/subgraphs/developing/managing/deleting-a-subgraph.mdx new file mode 100644 index 000000000000..b8c2330ca49d --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -0,0 +1,31 @@ +--- +title: Deleting a Subgraph +--- + +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). + +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. + +## Step-by-Step + +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). + +2. Click on the three dots to the right of the "publish" button. + +3. Click on the option to "delete this Subgraph": + + ![Delete-subgraph](/img/Delete-subgraph.png) + +4. Depending on the Subgraph's status, you will be prompted with various options. + + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm in your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. + +### Important Reminders + +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/sw/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/sw/subgraphs/developing/managing/transferring-a-subgraph.mdx new file mode 100644 index 000000000000..e80bde3fa6d2 --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -0,0 +1,42 @@ +--- +title: Transferring a Subgraph +--- + +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on the ERC-721 standard, which facilitates transfers between accounts on The Graph Network. + +## Reminders + +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. + +## View Your Subgraph as an NFT + +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-address +``` + +## Step-by-Step + +To transfer ownership of a Subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the Subgraph to: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) diff --git a/website/src/pages/sw/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/sw/subgraphs/developing/publishing/publishing-a-subgraph.mdx new file mode 100644 index 000000000000..2bc0ec5f514c --- /dev/null +++ b/website/src/pages/sw/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -0,0 +1,95 @@ +--- +title: Publishing a Subgraph to the Decentralized Network +sidebarTitle: Publishing to the Decentralized Network +--- + +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. + +When you publish a Subgraph to the decentralized network, you make it available for: + +- [Curators](/resources/roles/curating/) to begin curating it. +- [Indexers](/indexing/overview/) to begin indexing it. + + + +Check out the list of [supported networks](/supported-networks/). + +## Publishing from Subgraph Studio + +1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard +2. Click on the **Publish** button +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). + +All published versions of an existing Subgraph can: + +- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). + +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. + +### Updating metadata for a published Subgraph + +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. 
+- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. +- It's important to note that this process will not create a new version since your deployment has not changed. + +## Publishing from the CLI + +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). + +1. Open the `graph-cli`. +2. Use the following commands: `graph codegen && graph build` then `graph publish`. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +### Customizing your deployment + +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: + +``` +USAGE + $ graph publish [SUBGRAPH-MANIFEST] [-h] [--protocol-network arbitrum-one|arbitrum-sepolia --subgraph-id ] [-i ] [--ipfs-hash ] [--webapp-url + ] + +FLAGS + -h, --help Show CLI help. + -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node. + --ipfs-hash= IPFS hash of the subgraph manifest to deploy. + --protocol-network=